* [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-05 13:14 [PATCH 0/2] Fix migration races in rmap_walk() V5 Mel Gorman
@ 2010-05-05 13:14 ` Mel Gorman
0 siblings, 0 replies; 34+ messages in thread
From: Mel Gorman @ 2010-05-05 13:14 UTC (permalink / raw)
To: Andrew Morton
Cc: Linux-MM, LKML, Linus Torvalds, Minchan Kim, KAMEZAWA Hiroyuki,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Mel Gorman
From: Andrea Arcangeli <aarcange@redhat.com>
Page migration requires rmap to be able to find all migration ptes
created by migration. If the second rmap_walk clearing migration PTEs
misses an entry, it is left dangling causing a BUG_ON to trigger during
fault. For example;
[ 511.201534] kernel BUG at include/linux/swapops.h:105!
[ 511.201534] invalid opcode: 0000 [#1] PREEMPT SMP
[ 511.201534] last sysfs file: /sys/block/sde/size
[ 511.201534] CPU 0
[ 511.201534] Modules linked in: kvm_amd kvm dm_crypt loop i2c_piix4 serio_raw tpm_tis shpchp evdev tpm i2c_core pci_hotplug tpm_bios wmi processor button ext3 jbd mbcache dm_mirror dm_region_hash dm_log dm_snapshot dm_mod raid10 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 multipath linear md_mod sg sr_mod cdrom sd_mod ata_generic ahci libahci libata ide_pci_generic ehci_hcd ide_core r8169 mii ohci_hcd scsi_mod floppy thermal fan thermal_sys
[ 511.888526]
[ 511.888526] Pid: 20431, comm: date Not tainted 2.6.34-rc4-mm1-fix-swapops #6 GA-MA790GP-UD4H/GA-MA790GP-UD4H
[ 511.888526] RIP: 0010:[<ffffffff811094ff>] [<ffffffff811094ff>] migration_entry_wait+0xc1/0x129
[ 512.173545] RSP: 0018:ffff880037b979d8 EFLAGS: 00010246
[ 512.198503] RAX: ffffea0000000000 RBX: ffffea0001a2ba10 RCX: 0000000000029830
[ 512.329617] RDX: 0000000001a2ba10 RSI: ffffffff818264b8 RDI: 000000000ef45c3e
[ 512.380001] RBP: ffff880037b97a08 R08: ffff880078003f00 R09: ffff880037b979e8
[ 512.380001] R10: ffffffff8114ddaa R11: 0000000000000246 R12: 0000000037304000
[ 512.380001] R13: ffff88007a9ed5c8 R14: f800000000077a2e R15: 000000000ef45c3e
[ 512.380001] FS: 00007f3d346866e0(0000) GS:ffff880002200000(0000) knlGS:0000000000000000
[ 512.380001] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 512.380001] CR2: 00007fff6abec9c1 CR3: 0000000037a15000 CR4: 00000000000006f0
[ 512.380001] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 513.004775] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 513.068415] Process date (pid: 20431, threadinfo ffff880037b96000, task ffff880078003f00)
[ 513.068415] Stack:
[ 513.068415] ffff880037b97b98 ffff880037b97a18 ffff880037b97be8 0000000000000c00
[ 513.228068] <0> ffff880037304f60 00007fff6abec9c1 ffff880037b97aa8 ffffffff810e951a
[ 513.228068] <0> ffff880037b97a88 0000000000000246 0000000000000000 ffffffff8130c5c2
[ 513.228068] Call Trace:
[ 513.228068] [<ffffffff810e951a>] handle_mm_fault+0x3f8/0x76a
[ 513.228068] [<ffffffff8130c5c2>] ? do_page_fault+0x26a/0x46e
[ 513.228068] [<ffffffff8130c7a2>] do_page_fault+0x44a/0x46e
[ 513.720755] [<ffffffff8130875d>] ? trace_hardirqs_off_thunk+0x3a/0x3c
[ 513.789278] [<ffffffff8114ddaa>] ? load_elf_binary+0x14a1/0x192b
[ 513.851506] [<ffffffff813099b5>] page_fault+0x25/0x30
[ 513.851506] [<ffffffff8114ddaa>] ? load_elf_binary+0x14a1/0x192b
[ 513.851506] [<ffffffff811c1e27>] ? strnlen_user+0x3f/0x57
[ 513.851506] [<ffffffff8114de33>] load_elf_binary+0x152a/0x192b
[ 513.851506] [<ffffffff8111329b>] search_binary_handler+0x173/0x313
[ 513.851506] [<ffffffff8114c909>] ? load_elf_binary+0x0/0x192b
[ 513.851506] [<ffffffff81114896>] do_execve+0x219/0x30a
[ 513.851506] [<ffffffff8111887f>] ? getname+0x14d/0x1b3
[ 513.851506] [<ffffffff8100a5c6>] sys_execve+0x43/0x5e
[ 514.483501] [<ffffffff8100320a>] stub_execve+0x6a/0xc0
[ 514.548357] Code: 74 05 83 f8 1f 75 68 48 b8 ff ff ff ff ff ff ff 07 48 21 c2 48 b8 00 00 00 00 00 ea ff ff 48 6b d2 38 48 8d 1c 02 f6 03 01 75 04 <0f> 0b eb fe 8b 4b 08 48 8d 73 08 85 c9 74 35 8d 41 01 89 4d e0
[ 514.704292] RIP [<ffffffff811094ff>] migration_entry_wait+0xc1/0x129
[ 514.808221] RSP <ffff880037b979d8>
[ 514.906179] ---[ end trace 4f88495edc224d6b ]---
This particular BUG_ON is caused by a race between shift_arg_pages and
migration. During exec, a temporary stack is created and later moved to
its final location. If migration selects a page within the temporary stack,
the page tables and migration PTE can be copied to the new location before
rmap_walk is able to find the copy. This leaves a dangling migration PTE
behind that later triggers the bug.
This patch fixes the problem by using two VMAs - one which covers the
temporary stack and the other which covers the new location. This guarantees
that rmap can always find the migration PTE even if it is copied while
rmap_walk is taking place.
[mel@csn.ul.ie: Tested and rewrote changelog]
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
fs/exec.c | 37 +++++++++++++++++++++++++++++++++----
1 files changed, 33 insertions(+), 4 deletions(-)
diff --git a/fs/exec.c b/fs/exec.c
index 725d7ef..fd0abff 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -55,6 +55,7 @@
#include <linux/fsnotify.h>
#include <linux/fs_struct.h>
#include <linux/pipe_fs_i.h>
+#include <linux/rmap.h>
#include <asm/uaccess.h>
#include <asm/mmu_context.h>
@@ -503,7 +504,9 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
unsigned long length = old_end - old_start;
unsigned long new_start = old_start - shift;
unsigned long new_end = old_end - shift;
+ unsigned long moved_length;
struct mmu_gather *tlb;
+ struct vm_area_struct *tmp_vma;
BUG_ON(new_start > new_end);
@@ -515,17 +518,43 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
return -EFAULT;
/*
+ * We need to create a fake temporary vma and index it in the
+ * anon_vma list in order to allow the pages to be reachable
+ * at all times by the rmap walk for migrate, while
+ * move_page_tables() is running.
+ */
+ tmp_vma = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
+ if (!tmp_vma)
+ return -ENOMEM;
+ *tmp_vma = *vma;
+ INIT_LIST_HEAD(&tmp_vma->anon_vma_chain);
+ /*
* cover the whole range: [new_start, old_end)
*/
- if (vma_adjust(vma, new_start, old_end, vma->vm_pgoff, NULL))
+ tmp_vma->vm_start = new_start;
+ /*
+ * The tmp_vma destination of the copy (with the new vm_start)
+ * has to be at the end of the anon_vma list for the rmap_walk
+ * to find the moved pages at all times.
+ */
+ if (unlikely(anon_vma_clone(tmp_vma, vma))) {
+ kmem_cache_free(vm_area_cachep, tmp_vma);
return -ENOMEM;
+ }
/*
* move the page tables downwards, on failure we rely on
* process cleanup to remove whatever mess we made.
*/
- if (length != move_page_tables(vma, old_start,
- vma, new_start, length))
+ moved_length = move_page_tables(vma, old_start,
+ vma, new_start, length);
+
+ vma->vm_start = new_start;
+ /* rmap walk will already find all pages using the new_start */
+ unlink_anon_vmas(tmp_vma);
+ kmem_cache_free(vm_area_cachep, tmp_vma);
+
+ if (length != moved_length)
return -ENOMEM;
lru_add_drain();
@@ -551,7 +580,7 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
/*
* Shrink the vma to just the new range. Always succeeds.
*/
- vma_adjust(vma, new_start, new_end, vma->vm_pgoff, NULL);
+ vma->vm_end = new_end;
return 0;
}
--
1.6.5
* [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-06 15:33 [PATCH 0/2] Fix migration races in rmap_walk() V6 Mel Gorman
@ 2010-05-06 15:33 ` Mel Gorman
0 siblings, 0 replies; 34+ messages in thread
From: Mel Gorman @ 2010-05-06 15:33 UTC (permalink / raw)
To: Andrew Morton
Cc: Linux-MM, LKML, Minchan Kim, KAMEZAWA Hiroyuki, Christoph Lameter,
Andrea Arcangeli, Rik van Riel, Linus Torvalds, Peter Zijlstra,
Mel Gorman
From: Andrea Arcangeli <aarcange@redhat.com>
Page migration requires rmap to be able to find all migration ptes
created by migration. If the second rmap_walk clearing migration PTEs
misses an entry, it is left dangling causing a BUG_ON to trigger during
fault. For example;
[ 511.201534] kernel BUG at include/linux/swapops.h:105!
[ 511.201534] invalid opcode: 0000 [#1] PREEMPT SMP
[ 511.201534] last sysfs file: /sys/block/sde/size
[ 511.201534] CPU 0
[ 511.201534] Modules linked in: kvm_amd kvm dm_crypt loop i2c_piix4 serio_raw tpm_tis shpchp evdev tpm i2c_core pci_hotplug tpm_bios wmi processor button ext3 jbd mbcache dm_mirror dm_region_hash dm_log dm_snapshot dm_mod raid10 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 multipath linear md_mod sg sr_mod cdrom sd_mod ata_generic ahci libahci libata ide_pci_generic ehci_hcd ide_core r8169 mii ohci_hcd scsi_mod floppy thermal fan thermal_sys
[ 511.888526]
[ 511.888526] Pid: 20431, comm: date Not tainted 2.6.34-rc4-mm1-fix-swapops #6 GA-MA790GP-UD4H/GA-MA790GP-UD4H
[ 511.888526] RIP: 0010:[<ffffffff811094ff>] [<ffffffff811094ff>] migration_entry_wait+0xc1/0x129
[ 512.173545] RSP: 0018:ffff880037b979d8 EFLAGS: 00010246
[ 512.198503] RAX: ffffea0000000000 RBX: ffffea0001a2ba10 RCX: 0000000000029830
[ 512.329617] RDX: 0000000001a2ba10 RSI: ffffffff818264b8 RDI: 000000000ef45c3e
[ 512.380001] RBP: ffff880037b97a08 R08: ffff880078003f00 R09: ffff880037b979e8
[ 512.380001] R10: ffffffff8114ddaa R11: 0000000000000246 R12: 0000000037304000
[ 512.380001] R13: ffff88007a9ed5c8 R14: f800000000077a2e R15: 000000000ef45c3e
[ 512.380001] FS: 00007f3d346866e0(0000) GS:ffff880002200000(0000) knlGS:0000000000000000
[ 512.380001] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 512.380001] CR2: 00007fff6abec9c1 CR3: 0000000037a15000 CR4: 00000000000006f0
[ 512.380001] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 513.004775] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 513.068415] Process date (pid: 20431, threadinfo ffff880037b96000, task ffff880078003f00)
[ 513.068415] Stack:
[ 513.068415] ffff880037b97b98 ffff880037b97a18 ffff880037b97be8 0000000000000c00
[ 513.228068] <0> ffff880037304f60 00007fff6abec9c1 ffff880037b97aa8 ffffffff810e951a
[ 513.228068] <0> ffff880037b97a88 0000000000000246 0000000000000000 ffffffff8130c5c2
[ 513.228068] Call Trace:
[ 513.228068] [<ffffffff810e951a>] handle_mm_fault+0x3f8/0x76a
[ 513.228068] [<ffffffff8130c5c2>] ? do_page_fault+0x26a/0x46e
[ 513.228068] [<ffffffff8130c7a2>] do_page_fault+0x44a/0x46e
[ 513.720755] [<ffffffff8130875d>] ? trace_hardirqs_off_thunk+0x3a/0x3c
[ 513.789278] [<ffffffff8114ddaa>] ? load_elf_binary+0x14a1/0x192b
[ 513.851506] [<ffffffff813099b5>] page_fault+0x25/0x30
[ 513.851506] [<ffffffff8114ddaa>] ? load_elf_binary+0x14a1/0x192b
[ 513.851506] [<ffffffff811c1e27>] ? strnlen_user+0x3f/0x57
[ 513.851506] [<ffffffff8114de33>] load_elf_binary+0x152a/0x192b
[ 513.851506] [<ffffffff8111329b>] search_binary_handler+0x173/0x313
[ 513.851506] [<ffffffff8114c909>] ? load_elf_binary+0x0/0x192b
[ 513.851506] [<ffffffff81114896>] do_execve+0x219/0x30a
[ 513.851506] [<ffffffff8111887f>] ? getname+0x14d/0x1b3
[ 513.851506] [<ffffffff8100a5c6>] sys_execve+0x43/0x5e
[ 514.483501] [<ffffffff8100320a>] stub_execve+0x6a/0xc0
[ 514.548357] Code: 74 05 83 f8 1f 75 68 48 b8 ff ff ff ff ff ff ff 07 48 21 c2 48 b8 00 00 00 00 00 ea ff ff 48 6b d2 38 48 8d 1c 02 f6 03 01 75 04 <0f> 0b eb fe 8b 4b 08 48 8d 73 08 85 c9 74 35 8d 41 01 89 4d e0
[ 514.704292] RIP [<ffffffff811094ff>] migration_entry_wait+0xc1/0x129
[ 514.808221] RSP <ffff880037b979d8>
[ 514.906179] ---[ end trace 4f88495edc224d6b ]---
This particular BUG_ON is caused by a race between shift_arg_pages and
migration. During exec, a temporary stack is created and later moved to
its final location. If migration selects a page within the temporary stack,
the page tables and migration PTE can be copied to the new location before
rmap_walk is able to find the copy. This leaves a dangling migration PTE
behind that later triggers the bug.
This patch fixes the problem by using two VMAs - one which covers the
temporary stack and the other which covers the new location. This guarantees
that rmap can always find the migration PTE even if it is copied while
rmap_walk is taking place.
[mel@csn.ul.ie: Tested and rewrote changelog]
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
fs/exec.c | 37 +++++++++++++++++++++++++++++++++----
1 files changed, 33 insertions(+), 4 deletions(-)
diff --git a/fs/exec.c b/fs/exec.c
index 725d7ef..fd0abff 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -55,6 +55,7 @@
#include <linux/fsnotify.h>
#include <linux/fs_struct.h>
#include <linux/pipe_fs_i.h>
+#include <linux/rmap.h>
#include <asm/uaccess.h>
#include <asm/mmu_context.h>
@@ -503,7 +504,9 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
unsigned long length = old_end - old_start;
unsigned long new_start = old_start - shift;
unsigned long new_end = old_end - shift;
+ unsigned long moved_length;
struct mmu_gather *tlb;
+ struct vm_area_struct *tmp_vma;
BUG_ON(new_start > new_end);
@@ -515,17 +518,43 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
return -EFAULT;
/*
+ * We need to create a fake temporary vma and index it in the
+ * anon_vma list in order to allow the pages to be reachable
+ * at all times by the rmap walk for migrate, while
+ * move_page_tables() is running.
+ */
+ tmp_vma = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
+ if (!tmp_vma)
+ return -ENOMEM;
+ *tmp_vma = *vma;
+ INIT_LIST_HEAD(&tmp_vma->anon_vma_chain);
+ /*
* cover the whole range: [new_start, old_end)
*/
- if (vma_adjust(vma, new_start, old_end, vma->vm_pgoff, NULL))
+ tmp_vma->vm_start = new_start;
+ /*
+ * The tmp_vma destination of the copy (with the new vm_start)
+ * has to be at the end of the anon_vma list for the rmap_walk
+ * to find the moved pages at all times.
+ */
+ if (unlikely(anon_vma_clone(tmp_vma, vma))) {
+ kmem_cache_free(vm_area_cachep, tmp_vma);
return -ENOMEM;
+ }
/*
* move the page tables downwards, on failure we rely on
* process cleanup to remove whatever mess we made.
*/
- if (length != move_page_tables(vma, old_start,
- vma, new_start, length))
+ moved_length = move_page_tables(vma, old_start,
+ vma, new_start, length);
+
+ vma->vm_start = new_start;
+ /* rmap walk will already find all pages using the new_start */
+ unlink_anon_vmas(tmp_vma);
+ kmem_cache_free(vm_area_cachep, tmp_vma);
+
+ if (length != moved_length)
return -ENOMEM;
lru_add_drain();
@@ -551,7 +580,7 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
/*
* Shrink the vma to just the new range. Always succeeds.
*/
- vma_adjust(vma, new_start, new_end, vma->vm_pgoff, NULL);
+ vma->vm_end = new_end;
return 0;
}
--
1.6.5
* [PATCH 0/2] Fix migration races in rmap_walk() V7
@ 2010-05-06 23:20 Mel Gorman
2010-05-06 23:20 ` [PATCH 1/2] mm,migration: Prevent rmap_walk_[anon|ksm] seeing the wrong VMA information Mel Gorman
` (2 more replies)
0 siblings, 3 replies; 34+ messages in thread
From: Mel Gorman @ 2010-05-06 23:20 UTC (permalink / raw)
To: Andrew Morton
Cc: Linux-MM, LKML, Minchan Kim, KAMEZAWA Hiroyuki, Christoph Lameter,
Andrea Arcangeli, Rik van Riel, Linus Torvalds, Peter Zijlstra,
Mel Gorman
No major change since V6, just looks nicer.
Changelog since V6
o Make the nested anon_vma lock taking prettier
Changelog since V5
o Have rmap_walk take anon_vma locks in order starting from the "root"
o Ensure that mm_take_all_locks locks VMAs in the same order
Changelog since V4
o Switch back anon_vma locking to put bulk of locking in rmap_walk
o Fix anon_vma lock ordering in exec vs migration race
Changelog since V3
o Rediff against the latest upstream tree
o Improve the patch changelog a little (thanks Peterz)
Changelog since V2
o Drop fork changes
o Avoid pages in temporary stacks during exec instead of migration pte
lazy cleanup
o Drop locking-related patch and replace with Rik's
Changelog since V1
o Handle the execve race
o Be sure that rmap_walk() releases the correct VMA lock
o Hold the anon_vma lock for the address lookup and the page remap
o Add reviewed-bys
Broadly speaking, migration works by locking a page, unmapping it, putting
a migration PTE in place that looks like a swap entry, copying the page and
then remapping the page, removing the old migration PTE, before unlocking the
page. If a fault occurs, the faulting process waits until migration completes.
The problem is that there are some races that allow migration PTEs to be
left behind. Migration still completes and the page is unlocked
but later a fault will call migration_entry_to_page() and BUG() because the
page is not locked. It's not possible to just clean up the migration PTE
because the page it points to has been potentially freed and reused. This
series aims to close the races.
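
In outline, the sequence is as follows (a simplified sketch of the steps
above; it is illustrative only and does not mirror mm/migrate.c line for
line):

static int migrate_page_sketch(struct page *page, struct page *newpage)
{
	lock_page(page);

	/*
	 * First rmap walk: replace every PTE mapping the page with a
	 * migration entry that looks like a swap entry.
	 */
	try_to_unmap(page, TTU_MIGRATION | TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS);

	/* Copy the data and page state to the new page */
	migrate_page_copy(newpage, page);

	/*
	 * Second rmap walk: convert the migration entries back into real
	 * PTEs pointing at newpage. If this walk misses an entry, a
	 * dangling migration PTE is left behind and a later fault hits
	 * the BUG_ON in migration_entry_wait().
	 */
	remove_migration_ptes(page, newpage);

	unlock_page(page);
	return 0;
}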
Patch 1 of this series is about the locking of anon_vma in migration versus
vma_adjust. While I am not aware of any reproduction cases, it is potentially
racy. This patch is an alternative to Rik's "heavy lock" approach posted
at http://lkml.org/lkml/2010/5/3/155. With the patch, rmap_walk finds the
"root" anon_vma and starts locking from there, locking each new anon_vma
as it finds it. As long as the order is preserved, there is no deadlock.
In vma_adjust, the anon_vma locks are acquired under similar conditions
to 2.6.33 so that walkers will block until VMA changes are complete. The
rmap_walk changes potentially slow down migration and aspects of page
reclaim a little but they are the less important path.
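
Condensed, the walk looks roughly like this (a sketch using the helpers
patch 1 introduces; the real code is in the rmap_walk_anon() hunk in the
patch itself):

static int rmap_walk_anon_sketch(struct page *page)
{
	struct anon_vma *anon_vma, *nested_anon_vma = NULL;
	struct anon_vma_chain *avc;

	/* Find and lock the oldest ("root") anon_vma for this page */
	anon_vma = page_anon_vma_lock_root(page);
	if (!anon_vma)
		return SWAP_AGAIN;

	list_for_each_entry(avc, &anon_vma->head, same_anon_vma) {
		/*
		 * Lock each newer anon_vma nested inside the root lock.
		 * Every walker starts from the root and traverses in the
		 * same direction, so taking multiple locks cannot deadlock.
		 */
		nested_anon_vma = anon_vma_lock_nested(nested_anon_vma,
					avc->vma->anon_vma, anon_vma);

		/* vma_address() and rmap_one() are applied to avc->vma here */
	}

	if (nested_anon_vma)
		spin_unlock(&nested_anon_vma->lock);
	spin_unlock(&anon_vma->lock);
	return SWAP_AGAIN;
}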
Patch 2 of this series addresses the reported swapops bug, which is a race
between migration and execve where pages get migrated from the temporary
stack before it is moved. To avoid migration PTEs being left behind,
a temporary VMA is put in place so that a migration PTE in either the
temporary stack or the relocated stack can be found.
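
Condensed, the shift_arg_pages() change amounts to the following (a sketch
of patch 2; vma, old_start, new_start, length and moved_length are as in
shift_arg_pages() and the full diff is in the patch itself):

	struct vm_area_struct *tmp_vma;

	/* Fake VMA covering [new_start, old_end) so rmap can always find it */
	tmp_vma = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
	if (!tmp_vma)
		return -ENOMEM;
	*tmp_vma = *vma;
	INIT_LIST_HEAD(&tmp_vma->anon_vma_chain);
	tmp_vma->vm_start = new_start;

	/* Index the copy on the anon_vma list so rmap_walk visits it too */
	if (unlikely(anon_vma_clone(tmp_vma, vma))) {
		kmem_cache_free(vm_area_cachep, tmp_vma);
		return -ENOMEM;
	}

	moved_length = move_page_tables(vma, old_start, vma, new_start, length);

	/* rmap_walk now finds all pages via new_start; drop the fake VMA */
	vma->vm_start = new_start;
	unlink_anon_vmas(tmp_vma);
	kmem_cache_free(vm_area_cachep, tmp_vma);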
The reproduction case for the races was as follows;
1. Run kernel compilation in a loop
2. Start four processes, each of which creates one mapping. They stress
different aspects of the problem. The operations they undertake are;
a) Forks a hundred children, each of which faults the mapping
Purpose: stress tests migration pte removal
b) Forks a hundred children, each which punches a hole in the mapping
and faults what remains
Purpose: stress test VMA manipulations during migration
c) Forks a hundred children, each of which execs and calls echo
Purpose: stress test the execve race
d) Size the mapping to be 1.5 times physical memory. Constantly
memset it
Purpose: stress swapping
3. Constantly compact memory using /proc/sys/vm/compact_memory so migration
is active all the time. In theory, you could also force this using
sys_move_pages or memory hot-remove but it'd be nowhere near as easy
to test.
Compaction, which is not going to be in 2.6.34, is the easiest way to trigger
these bugs, but in theory the problem also affects memory hot-remove.
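
For reference, step 3 can be driven by a loop like the following (a minimal
user-space sketch; it assumes the compact_memory sysctl from the compaction
series in -mm is present):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Keep triggering full compaction so migration runs constantly */
	for (;;) {
		int fd = open("/proc/sys/vm/compact_memory", O_WRONLY);

		if (fd < 0) {
			perror("open /proc/sys/vm/compact_memory");
			return 1;
		}
		if (write(fd, "1", 1) != 1)
			perror("write");
		close(fd);
		usleep(100000);		/* do not spin flat out */
	}
	return 0;
}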
There were some concerns with patch 2 that performance would be impacted. To
check if this was the case I ran kernbench, aim9 and sysbench. AIM9 in
particular was of interest as it has an exec microbenchmark.
kernbench-vanilla fixraces-v5r1
Elapsed mean 103.40 ( 0.00%) 103.35 ( 0.05%)
Elapsed stddev 0.09 ( 0.00%) 0.13 (-55.72%)
User mean 313.50 ( 0.00%) 313.15 ( 0.11%)
User stddev 0.61 ( 0.00%) 0.20 (66.70%)
System mean 55.50 ( 0.00%) 55.85 (-0.64%)
System stddev 0.48 ( 0.00%) 0.15 (68.98%)
CPU mean 356.25 ( 0.00%) 356.50 (-0.07%)
CPU stddev 0.43 ( 0.00%) 0.50 (-15.47%)
Nothing special there and kernbench is fork+exec heavy. The patched kernel
is slightly faster on wall time but it's well within the noise. System time
is slightly slower but again, it's within the noise.
AIM9
aim9-vanilla fixraces-v5r1
creat-clo 116813.86 ( 0.00%) 117980.34 ( 0.99%)
page_test 270923.33 ( 0.00%) 268668.56 (-0.84%)
brk_test 2551558.07 ( 0.00%) 2649450.00 ( 3.69%)
signal_test 279866.67 ( 0.00%) 279533.33 (-0.12%)
exec_test 226.67 ( 0.00%) 232.67 ( 2.58%)
fork_test 4261.91 ( 0.00%) 4110.98 (-3.67%)
link_test 53534.78 ( 0.00%) 54076.49 ( 1.00%)
So, here exec and fork aren't showing major worries. exec is faster but
these tests can be so sensitive to starting conditions that I tend not to
read much into them unless there are major differences.
SYSBENCH
sysbench-vanilla fixraces-v5r1
1 14177.73 ( 0.00%) 14218.41 ( 0.29%)
2 27647.23 ( 0.00%) 27774.14 ( 0.46%)
3 31395.69 ( 0.00%) 31499.95 ( 0.33%)
4 49866.54 ( 0.00%) 49713.49 (-0.31%)
5 49919.58 ( 0.00%) 49524.21 (-0.80%)
6 49532.97 ( 0.00%) 49397.60 (-0.27%)
7 49465.79 ( 0.00%) 49384.14 (-0.17%)
8 49483.33 ( 0.00%) 49186.49 (-0.60%)
These figures also show no differences worth talking about.
While the extra allocation in patch 2 would appear to slow down exec somewhat,
it's not by any amount that matters. As it is in exec, it means that anon_vmas
have likely been freed very recently so the allocation will be cache-hot and
cpu-local. It is possible to special-case migration to avoid migrating pages
in the temporary stack, but fixing it in exec is a more maintainable approach.
fs/exec.c | 37 ++++++++++++++++++++--
include/linux/rmap.h | 2 +
mm/ksm.c | 20 ++++++++++--
mm/mmap.c | 14 +++++++-
mm/rmap.c | 81 +++++++++++++++++++++++++++++++++++++++++++++-----
5 files changed, 137 insertions(+), 17 deletions(-)
Andrea Arcangeli (1):
mm,migration: Fix race between shift_arg_pages and rmap_walk by
guaranteeing rmap_walk finds PTEs created within the temporary
stack
Mel Gorman (1):
mm,migration: Prevent rmap_walk_[anon|ksm] seeing the wrong VMA
information
fs/exec.c | 37 +++++++++++++++++--
include/linux/rmap.h | 4 ++
mm/ksm.c | 13 ++++++-
mm/mmap.c | 14 ++++++-
mm/rmap.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++----
5 files changed, 151 insertions(+), 14 deletions(-)
* [PATCH 1/2] mm,migration: Prevent rmap_walk_[anon|ksm] seeing the wrong VMA information
2010-05-06 23:20 [PATCH 0/2] Fix migration races in rmap_walk() V7 Mel Gorman
@ 2010-05-06 23:20 ` Mel Gorman
2010-05-07 0:56 ` KAMEZAWA Hiroyuki
2010-05-08 15:39 ` Andrea Arcangeli
2010-05-06 23:20 ` [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack Mel Gorman
2010-05-07 8:13 ` [PATCH 0/2] Fix migration races in rmap_walk() V7 KAMEZAWA Hiroyuki
2 siblings, 2 replies; 34+ messages in thread
From: Mel Gorman @ 2010-05-06 23:20 UTC (permalink / raw)
To: Andrew Morton
Cc: Linux-MM, LKML, Minchan Kim, KAMEZAWA Hiroyuki, Christoph Lameter,
Andrea Arcangeli, Rik van Riel, Linus Torvalds, Peter Zijlstra,
Mel Gorman
vma_adjust() is updating anon VMA information without locks being taken.
In contrast, file-backed mappings use the i_mmap_lock and this lack of
locking can result in races with users of rmap_walk such as page migration.
vma_address() can return -EFAULT for an address that will soon be valid.
For migration, this potentially leaves a dangling migration PTE behind
which can later cause a BUG_ON to trigger when the page is faulted in.
With the recent anon_vma changes, there can be more than one anon_vma->lock
to take when walking a list of anon_vma_chains but as the order of anon_vmas
cannot be guaranteed, rmap_walk cannot take multiple locks without
potentially deadlocking.
To resolve this problem, this patch has rmap_walk walk the anon_vma_chain
list but always starting from the "root" anon_vma which is the oldest
anon_vma in the list. It starts by locking the anon_vma lock associated
with a page. It then finds the "root" anon_vma using the anon_vma_chains
"same_vma" list as it is strictly ordered. The root anon_vma lock is taken
and rmap_walk traverses the list. This allows multiple locks to be taken
as the list is always traversed in the same direction.
As spotted by Rik, to avoid any deadlocks versus mmu_notify, the order in which
anon_vmas are locked by mm_take_all_locks is reversed by this patch so that
both rmap_walk and mm_take_all_locks lock anon_vmas in the order of old->new.
For vma_adjust(), the locking behaviour prior to the anon_vma changes is restored
so that rmap_walk() can be sure of the integrity of the VMA information and
lists when the anon_vma lock is held. With this patch, the vma->anon_vma->lock
is taken if
a) there is any overlap with the next VMA due to the adjustment
b) a new VMA is being inserted into the address space
c) the start of the VMA is being changed so that the
relationship between vm_start and vm_pgoff is preserved
for vma_address()
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
include/linux/rmap.h | 4 ++
mm/ksm.c | 13 ++++++-
mm/mmap.c | 14 ++++++-
mm/rmap.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++----
4 files changed, 118 insertions(+), 10 deletions(-)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 7721674..1dc949f 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -121,6 +121,10 @@ int anon_vma_prepare(struct vm_area_struct *);
void unlink_anon_vmas(struct vm_area_struct *);
int anon_vma_clone(struct vm_area_struct *, struct vm_area_struct *);
int anon_vma_fork(struct vm_area_struct *, struct vm_area_struct *);
+struct anon_vma *anon_vma_lock_nested(struct anon_vma *prev,
+ struct anon_vma *next, struct anon_vma *root);
+struct anon_vma *anon_vma_lock_root(struct anon_vma *anon_vma);
+struct anon_vma *page_anon_vma_lock_root(struct page *page);
void __anon_vma_link(struct vm_area_struct *);
void anon_vma_free(struct anon_vma *);
diff --git a/mm/ksm.c b/mm/ksm.c
index 3666d43..1db8656 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1655,6 +1655,7 @@ int rmap_walk_ksm(struct page *page, int (*rmap_one)(struct page *,
{
struct stable_node *stable_node;
struct hlist_node *hlist;
+ struct anon_vma *nested_anon_vma = NULL;
struct rmap_item *rmap_item;
int ret = SWAP_AGAIN;
int search_new_forks = 0;
@@ -1671,9 +1672,16 @@ again:
struct anon_vma_chain *vmac;
struct vm_area_struct *vma;
- spin_lock(&anon_vma->lock);
+ anon_vma = anon_vma_lock_root(anon_vma);
+ if (nested_anon_vma) {
+ spin_unlock(&nested_anon_vma->lock);
+ nested_anon_vma = NULL;
+ }
list_for_each_entry(vmac, &anon_vma->head, same_anon_vma) {
vma = vmac->vma;
+ nested_anon_vma = anon_vma_lock_nested(nested_anon_vma,
+ vma->anon_vma, anon_vma);
+
if (rmap_item->address < vma->vm_start ||
rmap_item->address >= vma->vm_end)
continue;
@@ -1697,6 +1705,9 @@ again:
if (!search_new_forks++)
goto again;
out:
+ if (nested_anon_vma)
+ spin_unlock(&nested_anon_vma->lock);
+
return ret;
}
diff --git a/mm/mmap.c b/mm/mmap.c
index f90ea92..b447d5b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -505,6 +505,7 @@ int vma_adjust(struct vm_area_struct *vma, unsigned long start,
struct vm_area_struct *next = vma->vm_next;
struct vm_area_struct *importer = NULL;
struct address_space *mapping = NULL;
+ struct anon_vma *anon_vma = NULL;
struct prio_tree_root *root = NULL;
struct file *file = vma->vm_file;
long adjust_next = 0;
@@ -578,6 +579,11 @@ again: remove_next = 1 + (end > next->vm_end);
}
}
+ if (vma->anon_vma && (insert || importer || start != vma->vm_start)) {
+ anon_vma = vma->anon_vma;
+ spin_lock(&anon_vma->lock);
+ }
+
if (root) {
flush_dcache_mmap_lock(mapping);
vma_prio_tree_remove(vma, root);
@@ -620,6 +626,9 @@ again: remove_next = 1 + (end > next->vm_end);
if (mapping)
spin_unlock(&mapping->i_mmap_lock);
+ if (anon_vma)
+ spin_unlock(&anon_vma->lock);
+
if (remove_next) {
if (file) {
fput(file);
@@ -2556,8 +2565,9 @@ int mm_take_all_locks(struct mm_struct *mm)
for (vma = mm->mmap; vma; vma = vma->vm_next) {
if (signal_pending(current))
goto out_unlock;
+ /* Lock the anon_vmas in the same order rmap_walk would */
if (vma->anon_vma)
- list_for_each_entry(avc, &vma->anon_vma_chain, same_vma)
+ list_for_each_entry_reverse(avc, &vma->anon_vma_chain, same_vma)
vm_lock_anon_vma(mm, avc->anon_vma);
}
@@ -2620,7 +2630,7 @@ void mm_drop_all_locks(struct mm_struct *mm)
for (vma = mm->mmap; vma; vma = vma->vm_next) {
if (vma->anon_vma)
- list_for_each_entry(avc, &vma->anon_vma_chain, same_vma)
+ list_for_each_entry_reverse(avc, &vma->anon_vma_chain, same_vma)
vm_unlock_anon_vma(avc->anon_vma);
if (vma->vm_file && vma->vm_file->f_mapping)
vm_unlock_mapping(vma->vm_file->f_mapping);
diff --git a/mm/rmap.c b/mm/rmap.c
index 85f203e..2e65a75 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -236,6 +236,81 @@ int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
return -ENOMEM;
}
+/*
+ * When walking an anon_vma_chain and locking each anon_vma encountered,
+ * this function is responsible for checking if the next anon_vma is the
+ * same as the root, locking it if not and releasing the previous lock
+ * if necessary.
+ *
+ * It is assumed the caller has locked the root anon_vma
+ */
+struct anon_vma *anon_vma_lock_nested(struct anon_vma *prev,
+ struct anon_vma *next, struct anon_vma *root)
+{
+ if (prev)
+ spin_unlock(&prev->lock);
+ if (next == root)
+ return NULL;
+ spin_lock_nested(&next->lock, SINGLE_DEPTH_NESTING);
+ return next;
+}
+
+/*
+ * Given an anon_vma, find the root of the chain, lock it and return the
+ * root. This must be called with the rcu_read_lock held
+ */
+struct anon_vma *anon_vma_lock_root(struct anon_vma *anon_vma)
+{
+ struct anon_vma *root_anon_vma;
+ struct anon_vma_chain *avc, *root_avc;
+ struct vm_area_struct *vma;
+
+ /* Lock the same_anon_vma list and make sure we are on a chain */
+ spin_lock(&anon_vma->lock);
+ if (list_empty(&anon_vma->head)) {
+ spin_unlock(&anon_vma->lock);
+ return NULL;
+ }
+
+ /*
+ * Get the root anon_vma on the list by depending on the ordering
+ * of the same_vma list setup by __page_set_anon_rmap. Basically
+ * we are doing
+ *
+ * local anon_vma -> local vma -> root vma -> root anon_vma
+ */
+ avc = list_first_entry(&anon_vma->head, struct anon_vma_chain, same_anon_vma);
+ vma = avc->vma;
+ root_avc = list_entry(vma->anon_vma_chain.prev, struct anon_vma_chain, same_vma);
+ root_anon_vma = root_avc->anon_vma;
+
+ /* Get the lock of the root anon_vma */
+ if (anon_vma != root_anon_vma) {
+ VM_BUG_ON(!rcu_read_lock_held());
+ spin_unlock(&anon_vma->lock);
+ spin_lock(&root_anon_vma->lock);
+ }
+
+ return root_anon_vma;
+}
+
+/*
+ * From the anon_vma associated with this page, find and lock the
+ * deepest anon_vma on the list. This allows multiple anon_vma locks
+ * to be taken by guaranteeing the locks are taken in the same order
+ */
+struct anon_vma *page_anon_vma_lock_root(struct page *page)
+{
+ struct anon_vma *anon_vma;
+
+ /* Get the local anon_vma */
+ anon_vma = page_anon_vma(page);
+ if (!anon_vma)
+ return NULL;
+
+ return anon_vma_lock_root(anon_vma);
+}
+
static void anon_vma_unlink(struct anon_vma_chain *anon_vma_chain)
{
struct anon_vma *anon_vma = anon_vma_chain->anon_vma;
@@ -326,7 +401,7 @@ void page_unlock_anon_vma(struct anon_vma *anon_vma)
* Returns virtual address or -EFAULT if page's index/offset is not
* within the range mapped the @vma.
*/
-static inline unsigned long
+static noinline unsigned long
vma_address(struct page *page, struct vm_area_struct *vma)
{
pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
@@ -1359,6 +1434,7 @@ static int rmap_walk_anon(struct page *page, int (*rmap_one)(struct page *,
struct vm_area_struct *, unsigned long, void *), void *arg)
{
struct anon_vma *anon_vma;
+ struct anon_vma *nested_anon_vma = NULL;
struct anon_vma_chain *avc;
int ret = SWAP_AGAIN;
@@ -1368,19 +1444,26 @@ static int rmap_walk_anon(struct page *page, int (*rmap_one)(struct page *,
* are holding mmap_sem. Users without mmap_sem are required to
* take a reference count to prevent the anon_vma disappearing
*/
- anon_vma = page_anon_vma(page);
+ anon_vma = page_anon_vma_lock_root(page);
if (!anon_vma)
return ret;
- spin_lock(&anon_vma->lock);
list_for_each_entry(avc, &anon_vma->head, same_anon_vma) {
struct vm_area_struct *vma = avc->vma;
- unsigned long address = vma_address(page, vma);
- if (address == -EFAULT)
- continue;
- ret = rmap_one(page, vma, address, arg);
+ unsigned long address;
+
+ nested_anon_vma = anon_vma_lock_nested(nested_anon_vma,
+ vma->anon_vma, anon_vma);
+ address = vma_address(page, vma);
+ if (address != -EFAULT)
+ ret = rmap_one(page, vma, address, arg);
+
if (ret != SWAP_AGAIN)
break;
}
+
+ if (nested_anon_vma)
+ spin_unlock(&nested_anon_vma->lock);
+
spin_unlock(&anon_vma->lock);
return ret;
}
--
1.6.5
* [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-06 23:20 [PATCH 0/2] Fix migration races in rmap_walk() V7 Mel Gorman
2010-05-06 23:20 ` [PATCH 1/2] mm,migration: Prevent rmap_walk_[anon|ksm] seeing the wrong VMA information Mel Gorman
@ 2010-05-06 23:20 ` Mel Gorman
2010-05-07 1:40 ` Linus Torvalds
2010-05-07 8:13 ` [PATCH 0/2] Fix migration races in rmap_walk() V7 KAMEZAWA Hiroyuki
2 siblings, 1 reply; 34+ messages in thread
From: Mel Gorman @ 2010-05-06 23:20 UTC (permalink / raw)
To: Andrew Morton
Cc: Linux-MM, LKML, Minchan Kim, KAMEZAWA Hiroyuki, Christoph Lameter,
Andrea Arcangeli, Rik van Riel, Linus Torvalds, Peter Zijlstra,
Mel Gorman
From: Andrea Arcangeli <aarcange@redhat.com>
Page migration requires rmap to be able to find all migration ptes
created by migration. If the second rmap_walk clearing migration PTEs
misses an entry, it is left dangling causing a BUG_ON to trigger during
fault. For example;
[ 511.201534] kernel BUG at include/linux/swapops.h:105!
[ 511.201534] invalid opcode: 0000 [#1] PREEMPT SMP
[ 511.201534] last sysfs file: /sys/block/sde/size
[ 511.201534] CPU 0
[ 511.201534] Modules linked in: kvm_amd kvm dm_crypt loop i2c_piix4 serio_raw tpm_tis shpchp evdev tpm i2c_core pci_hotplug tpm_bios wmi processor button ext3 jbd mbcache dm_mirror dm_region_hash dm_log dm_snapshot dm_mod raid10 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 multipath linear md_mod sg sr_mod cdrom sd_mod ata_generic ahci libahci libata ide_pci_generic ehci_hcd ide_core r8169 mii ohci_hcd scsi_mod floppy thermal fan thermal_sys
[ 511.888526]
[ 511.888526] Pid: 20431, comm: date Not tainted 2.6.34-rc4-mm1-fix-swapops #6 GA-MA790GP-UD4H/GA-MA790GP-UD4H
[ 511.888526] RIP: 0010:[<ffffffff811094ff>] [<ffffffff811094ff>] migration_entry_wait+0xc1/0x129
[ 512.173545] RSP: 0018:ffff880037b979d8 EFLAGS: 00010246
[ 512.198503] RAX: ffffea0000000000 RBX: ffffea0001a2ba10 RCX: 0000000000029830
[ 512.329617] RDX: 0000000001a2ba10 RSI: ffffffff818264b8 RDI: 000000000ef45c3e
[ 512.380001] RBP: ffff880037b97a08 R08: ffff880078003f00 R09: ffff880037b979e8
[ 512.380001] R10: ffffffff8114ddaa R11: 0000000000000246 R12: 0000000037304000
[ 512.380001] R13: ffff88007a9ed5c8 R14: f800000000077a2e R15: 000000000ef45c3e
[ 512.380001] FS: 00007f3d346866e0(0000) GS:ffff880002200000(0000) knlGS:0000000000000000
[ 512.380001] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 512.380001] CR2: 00007fff6abec9c1 CR3: 0000000037a15000 CR4: 00000000000006f0
[ 512.380001] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 513.004775] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 513.068415] Process date (pid: 20431, threadinfo ffff880037b96000, task ffff880078003f00)
[ 513.068415] Stack:
[ 513.068415] ffff880037b97b98 ffff880037b97a18 ffff880037b97be8 0000000000000c00
[ 513.228068] <0> ffff880037304f60 00007fff6abec9c1 ffff880037b97aa8 ffffffff810e951a
[ 513.228068] <0> ffff880037b97a88 0000000000000246 0000000000000000 ffffffff8130c5c2
[ 513.228068] Call Trace:
[ 513.228068] [<ffffffff810e951a>] handle_mm_fault+0x3f8/0x76a
[ 513.228068] [<ffffffff8130c5c2>] ? do_page_fault+0x26a/0x46e
[ 513.228068] [<ffffffff8130c7a2>] do_page_fault+0x44a/0x46e
[ 513.720755] [<ffffffff8130875d>] ? trace_hardirqs_off_thunk+0x3a/0x3c
[ 513.789278] [<ffffffff8114ddaa>] ? load_elf_binary+0x14a1/0x192b
[ 513.851506] [<ffffffff813099b5>] page_fault+0x25/0x30
[ 513.851506] [<ffffffff8114ddaa>] ? load_elf_binary+0x14a1/0x192b
[ 513.851506] [<ffffffff811c1e27>] ? strnlen_user+0x3f/0x57
[ 513.851506] [<ffffffff8114de33>] load_elf_binary+0x152a/0x192b
[ 513.851506] [<ffffffff8111329b>] search_binary_handler+0x173/0x313
[ 513.851506] [<ffffffff8114c909>] ? load_elf_binary+0x0/0x192b
[ 513.851506] [<ffffffff81114896>] do_execve+0x219/0x30a
[ 513.851506] [<ffffffff8111887f>] ? getname+0x14d/0x1b3
[ 513.851506] [<ffffffff8100a5c6>] sys_execve+0x43/0x5e
[ 514.483501] [<ffffffff8100320a>] stub_execve+0x6a/0xc0
[ 514.548357] Code: 74 05 83 f8 1f 75 68 48 b8 ff ff ff ff ff ff ff 07 48 21 c2 48 b8 00 00 00 00 00 ea ff ff 48 6b d2 38 48 8d 1c 02 f6 03 01 75 04 <0f> 0b eb fe 8b 4b 08 48 8d 73 08 85 c9 74 35 8d 41 01 89 4d e0
[ 514.704292] RIP [<ffffffff811094ff>] migration_entry_wait+0xc1/0x129
[ 514.808221] RSP <ffff880037b979d8>
[ 514.906179] ---[ end trace 4f88495edc224d6b ]---
This particular BUG_ON is caused by a race between shift_arg_pages and
migration. During exec, a temporary stack is created and later moved to
its final location. If migration selects a page within the temporary stack,
the page tables and migration PTE can be copied to the new location before
rmap_walk is able to find the copy. This leaves a dangling migration PTE
behind that later triggers the bug.
This patch fixes the problem by using two VMAs - one which covers the
temporary stack and the other which covers the new location. This guarantees
that rmap can always find the migration PTE even if it is copied while
rmap_walk is taking place.
[mel@csn.ul.ie: Tested and rewrote changelog]
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
fs/exec.c | 37 +++++++++++++++++++++++++++++++++----
1 files changed, 33 insertions(+), 4 deletions(-)
diff --git a/fs/exec.c b/fs/exec.c
index 725d7ef..fd0abff 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -55,6 +55,7 @@
#include <linux/fsnotify.h>
#include <linux/fs_struct.h>
#include <linux/pipe_fs_i.h>
+#include <linux/rmap.h>
#include <asm/uaccess.h>
#include <asm/mmu_context.h>
@@ -503,7 +504,9 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
unsigned long length = old_end - old_start;
unsigned long new_start = old_start - shift;
unsigned long new_end = old_end - shift;
+ unsigned long moved_length;
struct mmu_gather *tlb;
+ struct vm_area_struct *tmp_vma;
BUG_ON(new_start > new_end);
@@ -515,17 +518,43 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
return -EFAULT;
/*
+ * We need to create a fake temporary vma and index it in the
+ * anon_vma list in order to allow the pages to be reachable
+ * at all times by the rmap walk for migrate, while
+ * move_page_tables() is running.
+ */
+ tmp_vma = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
+ if (!tmp_vma)
+ return -ENOMEM;
+ *tmp_vma = *vma;
+ INIT_LIST_HEAD(&tmp_vma->anon_vma_chain);
+ /*
* cover the whole range: [new_start, old_end)
*/
- if (vma_adjust(vma, new_start, old_end, vma->vm_pgoff, NULL))
+ tmp_vma->vm_start = new_start;
+ /*
+ * The tmp_vma destination of the copy (with the new vm_start)
+ * has to be at the end of the anon_vma list for the rmap_walk
+ * to find the moved pages at all times.
+ */
+ if (unlikely(anon_vma_clone(tmp_vma, vma))) {
+ kmem_cache_free(vm_area_cachep, tmp_vma);
return -ENOMEM;
+ }
/*
* move the page tables downwards, on failure we rely on
* process cleanup to remove whatever mess we made.
*/
- if (length != move_page_tables(vma, old_start,
- vma, new_start, length))
+ moved_length = move_page_tables(vma, old_start,
+ vma, new_start, length);
+
+ vma->vm_start = new_start;
+ /* rmap walk will already find all pages using the new_start */
+ unlink_anon_vmas(tmp_vma);
+ kmem_cache_free(vm_area_cachep, tmp_vma);
+
+ if (length != moved_length)
return -ENOMEM;
lru_add_drain();
@@ -551,7 +580,7 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
/*
* Shrink the vma to just the new range. Always succeeds.
*/
- vma_adjust(vma, new_start, new_end, vma->vm_pgoff, NULL);
+ vma->vm_end = new_end;
return 0;
}
--
1.6.5
* Re: [PATCH 1/2] mm,migration: Prevent rmap_walk_[anon|ksm] seeing the wrong VMA information
2010-05-06 23:20 ` [PATCH 1/2] mm,migration: Prevent rmap_walk_[anon|ksm] seeing the wrong VMA information Mel Gorman
@ 2010-05-07 0:56 ` KAMEZAWA Hiroyuki
2010-05-07 16:26 ` Mel Gorman
2010-05-08 15:39 ` Andrea Arcangeli
1 sibling, 1 reply; 34+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-05-07 0:56 UTC (permalink / raw)
To: Mel Gorman
Cc: Andrew Morton, Linux-MM, LKML, Minchan Kim, Christoph Lameter,
Andrea Arcangeli, Rik van Riel, Linus Torvalds, Peter Zijlstra
On Fri, 7 May 2010 00:20:52 +0100
Mel Gorman <mel@csn.ul.ie> wrote:
> vma_adjust() is updating anon VMA information without locks being taken.
> In contrast, file-backed mappings use the i_mmap_lock and this lack of
> locking can result in races with users of rmap_walk such as page migration.
> vma_address() can return -EFAULT for an address that will soon be valid.
> For migration, this potentially leaves a dangling migration PTE behind
> which can later cause a BUG_ON to trigger when the page is faulted in.
>
> With the recent anon_vma changes, there can be more than one anon_vma->lock
> to take when walking a list of anon_vma_chains but as the order of anon_vmas
> cannot be guaranteed, rmap_walk cannot take multiple locks without
> potentially deadlocking.
>
> To resolve this problem, this patch has rmap_walk walk the anon_vma_chain
> list but always starting from the "root" anon_vma which is the oldest
> anon_vma in the list. It starts by locking the anon_vma lock associated
> with a page. It then finds the "root" anon_vma using the anon_vma_chains
> "same_vma" list as it is strictly ordered. The root anon_vma lock is taken
> and rmap_walk traverses the list. This allows multiple locks to be taken
> as the list is always traversed in the same direction.
>
> As spotted by Rik, to avoid any deadlocks versus mmu_notify, the order that
> anon_vmas is locked in by mm_take_all_locks is reversed by this patch so that
> both rmap_walk and mm_take_all_locks lock anon_vmas in the order of old->new.
>
> For vma_adjust(), the locking behaviour prior to the anon_vma is restored
> so that rmap_walk() can be sure of the integrity of the VMA information and
> lists when the anon_vma lock is held. With this patch, the vma->anon_vma->lock
> is taken if
>
> a) If there is any overlap with the next VMA due to the adjustment
> b) If there is a new VMA is being inserted into the address space
> c) If the start of the VMA is being changed so that the
> relationship between vm_start and vm_pgoff is preserved
> for vma_address()
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> ---
> include/linux/rmap.h | 4 ++
> mm/ksm.c | 13 ++++++-
> mm/mmap.c | 14 ++++++-
> mm/rmap.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++----
> 4 files changed, 118 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index 7721674..1dc949f 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -121,6 +121,10 @@ int anon_vma_prepare(struct vm_area_struct *);
> void unlink_anon_vmas(struct vm_area_struct *);
> int anon_vma_clone(struct vm_area_struct *, struct vm_area_struct *);
> int anon_vma_fork(struct vm_area_struct *, struct vm_area_struct *);
> +struct anon_vma *anon_vma_lock_nested(struct anon_vma *prev,
> + struct anon_vma *next, struct anon_vma *root);
> +struct anon_vma *anon_vma_lock_root(struct anon_vma *anon_vma);
> +struct anon_vma *page_anon_vma_lock_root(struct page *page);
> void __anon_vma_link(struct vm_area_struct *);
> void anon_vma_free(struct anon_vma *);
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 3666d43..1db8656 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1655,6 +1655,7 @@ int rmap_walk_ksm(struct page *page, int (*rmap_one)(struct page *,
> {
> struct stable_node *stable_node;
> struct hlist_node *hlist;
> + struct anon_vma *nested_anon_vma = NULL;
> struct rmap_item *rmap_item;
> int ret = SWAP_AGAIN;
> int search_new_forks = 0;
> @@ -1671,9 +1672,16 @@ again:
> struct anon_vma_chain *vmac;
> struct vm_area_struct *vma;
>
> - spin_lock(&anon_vma->lock);
> + anon_vma = anon_vma_lock_root(anon_vma);
> + if (nested_anon_vma) {
> + spin_unlock(&nested_anon_vma->lock);
> + nested_anon_vma = NULL;
> + }
> list_for_each_entry(vmac, &anon_vma->head, same_anon_vma) {
> vma = vmac->vma;
> + nested_anon_vma = anon_vma_lock_nested(nested_anon_vma,
> + vma->anon_vma, anon_vma);
> +
> if (rmap_item->address < vma->vm_start ||
> rmap_item->address >= vma->vm_end)
> continue;
> @@ -1697,6 +1705,9 @@ again:
> if (!search_new_forks++)
> goto again;
> out:
> + if (nested_anon_vma)
> + spin_unlock(&nested_anon_vma->lock);
> +
> return ret;
> }
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index f90ea92..b447d5b 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -505,6 +505,7 @@ int vma_adjust(struct vm_area_struct *vma, unsigned long start,
> struct vm_area_struct *next = vma->vm_next;
> struct vm_area_struct *importer = NULL;
> struct address_space *mapping = NULL;
> + struct anon_vma *anon_vma = NULL;
> struct prio_tree_root *root = NULL;
> struct file *file = vma->vm_file;
> long adjust_next = 0;
> @@ -578,6 +579,11 @@ again: remove_next = 1 + (end > next->vm_end);
> }
> }
>
> + if (vma->anon_vma && (insert || importer || start != vma->vm_start)) {
> + anon_vma = vma->anon_vma;
> + spin_lock(&anon_vma->lock);
> + }
> +
> if (root) {
> flush_dcache_mmap_lock(mapping);
> vma_prio_tree_remove(vma, root);
> @@ -620,6 +626,9 @@ again: remove_next = 1 + (end > next->vm_end);
> if (mapping)
> spin_unlock(&mapping->i_mmap_lock);
>
> + if (anon_vma)
> + spin_unlock(&anon_vma->lock);
> +
> if (remove_next) {
> if (file) {
> fput(file);
> @@ -2556,8 +2565,9 @@ int mm_take_all_locks(struct mm_struct *mm)
> for (vma = mm->mmap; vma; vma = vma->vm_next) {
> if (signal_pending(current))
> goto out_unlock;
> + /* Lock the anon_vmas in the same order rmap_walk would */
> if (vma->anon_vma)
> - list_for_each_entry(avc, &vma->anon_vma_chain, same_vma)
> + list_for_each_entry_reverse(avc, &vma->anon_vma_chain, same_vma)
> vm_lock_anon_vma(mm, avc->anon_vma);
> }
>
> @@ -2620,7 +2630,7 @@ void mm_drop_all_locks(struct mm_struct *mm)
>
> for (vma = mm->mmap; vma; vma = vma->vm_next) {
> if (vma->anon_vma)
> - list_for_each_entry(avc, &vma->anon_vma_chain, same_vma)
> + list_for_each_entry_reverse(avc, &vma->anon_vma_chain, same_vma)
> vm_unlock_anon_vma(avc->anon_vma);
> if (vma->vm_file && vma->vm_file->f_mapping)
> vm_unlock_mapping(vma->vm_file->f_mapping);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 85f203e..2e65a75 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -236,6 +236,81 @@ int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
> return -ENOMEM;
> }
>
> +/*
> + * When walking an anon_vma_chain and locking each anon_vma encountered,
> + * this function is responsible for checking if the next VMA is the
> + * same as the root, locking it if not and released the previous lock
> + * if necessary.
> + *
> + * It is assumed the caller has locked the root anon_vma
> + */
> +struct anon_vma *anon_vma_lock_nested(struct anon_vma *prev,
> + struct anon_vma *next, struct anon_vma *root)
> +{
> + if (prev)
> + spin_unlock(&prev->lock);
> + if (next == root)
> + return NULL;
> + spin_lock_nested(&next->lock, SINGLE_DEPTH_NESTING);
> + return next;
> +}
> +
> +/*
> + * Given an anon_vma, find the root of the chain, lock it and return the
> + * root. This must be called with the rcu_read_lock held
> + */
> +struct anon_vma *anon_vma_lock_root(struct anon_vma *anon_vma)
> +{
> + struct anon_vma *root_anon_vma;
> + struct anon_vma_chain *avc, *root_avc;
> + struct vm_area_struct *vma;
> +
> + /* Lock the same_anon_vma list and make sure we are on a chain */
> + spin_lock(&anon_vma->lock);
> + if (list_empty(&anon_vma->head)) {
> + spin_unlock(&anon_vma->lock);
> + return NULL;
> + }
> +
> + /*
> + * Get the root anon_vma on the list by depending on the ordering
> + * of the same_vma list setup by __page_set_anon_rmap. Basically
> + * we are doing
> + *
> + * local anon_vma -> local vma -> root vma -> root anon_vma
> + */
> + avc = list_first_entry(&anon_vma->head, struct anon_vma_chain, same_anon_vma);
> + vma = avc->vma;
> + root_avc = list_entry(vma->anon_vma_chain.prev, struct anon_vma_chain, same_vma);
> + root_anon_vma = root_avc->anon_vma;
> +
> + /* Get the lock of the root anon_vma */
> + if (anon_vma != root_anon_vma) {
> + VM_BUG_ON(!rcu_read_lock_held());
> + spin_unlock(&anon_vma->lock);
> + spin_lock(&root_anon_vma->lock);
> + }
I'm sorry but I don't think I understand this. Could you help me ?
IIUC, the anon_vma_chains are linked as a 2D mesh:
anon_vma1 anon_vma2 anon_vma3
| | |
vma1 ----- 1 -------- 2 --------- 3 -----
| | |
vma2 ----- 4 -------- 5 ---------- 6 -----
| | |
vma3 ----- 7 -------- 8 ---------- 9 -----
Here,
* vertical link is anon_vma->head, avc->same_anon_vma link.
* horizontal link is vma->anon_vma_chain, avc->same_vma link.
* 1-9 are avcs.
When scanning pages, we may see a page whose anon_vma is anon_vma1
or anon_vma2 or anon_vma3.
When we see anon_vma3 in page->mapping, we lock anon_vma1 and chase
avc1->avc4->avc7. Then we start from vma1. Next, when we visit vma2, we lock anon_vma2.
At the last, we visit vma3 and lock anon_vma3... and all of this is done under
anon_vma1->lock. Right?
Hmm, one concern is the
anon_vma3 -> avc3 -> vma1 -> avc1 -> anon_vma1 chase.
What will prevent vma1 from disappearing right after releasing anon_vma3->lock?
ex)
a1) As we chase anon_vma3 -> avc3 -> vma1 -> anon_vma1, the links are as follows.
anon_vma1 anon_vma2 anon_vma3
| | |
vma1 ----- 1 -------- 2 --------- 3 -----
| | |
vma2 ----- 4 -------- 5 ---------- 6 -----
| | |
vma3 ----- 7 -------- 8 ---------- 9 -----
We hold the lock on anon_vma3.
a2) After releasing the anon_vma3 lock, vma1 can be unlinked.
anon_vma1 anon_vma2 anon_vma3
| | |
vma1 removed.
| | |
vma2 ----- 4 -------- 5 ---------- 6 -----
| | |
vma3 ----- 7 -------- 8 ---------- 9 -----
But we know anon_vma1->head is not empty, and it's accessible.
Then there is no problem for our purpose. Right?
b1) Another thought.
As we chase anon_vma3 -> avc3 -> vma1 -> anon_vma1, the links are as follows.
anon_vma1 anon_vma2 anon_vma3
| | |
vma1 ----- 1 -------- 2 --------- 3 -----
| | |
vma2 ----- 4 -------- 5 ---------- 6 -----
| | |
vma3 ----- 7 -------- 8 ---------- 9 -----
We hold the lock on anon_vma3. So,
anon_vma1 anon_vma2 anon_vma3
| | |
vma1 ----removed -----removed ------ 3 -----
| | |
vma2 ----- 4 -------- 5 ---------- 6 -----
| | |
vma3 ----- 7 -------- 8 ---------- 9 -----
we may see a half-broken link while we take anon_vma3->lock. In this case,
anon_vma1 can be caught.
Don't we need this?
void unlink_anon_vmas(struct vm_area_struct *vma)
{
struct anon_vma_chain *avc, *next;
/* Unlink each anon_vma chained to the VMA. */
- list_for_each_entry_safe(avc, next, &vma->anon_vma_chain, same_vma) {
+ list_for_each_entry_safe_reverse(avc, next, &vma->anon_vma_chain, same_vma) {
anon_vma_unlink(avc);
list_del(&avc->same_vma);
anon_vma_chain_free(avc);
}
}
The head avc should be removed last... Hmm? I'm sorry if these are all
in the correct order already.
Thanks,
-Kame
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-06 23:20 ` [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack Mel Gorman
@ 2010-05-07 1:40 ` Linus Torvalds
2010-05-07 1:57 ` KAMEZAWA Hiroyuki
2010-05-07 9:16 ` Mel Gorman
0 siblings, 2 replies; 34+ messages in thread
From: Linus Torvalds @ 2010-05-07 1:40 UTC (permalink / raw)
To: Mel Gorman
Cc: Andrew Morton, Linux-MM, LKML, Minchan Kim, KAMEZAWA Hiroyuki,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Fri, 7 May 2010, Mel Gorman wrote:
>
> Page migration requires rmap to be able to find all migration ptes
> created by migration. If the second rmap_walk clearing migration PTEs
> misses an entry, it is left dangling causing a BUG_ON to trigger during
> fault. For example;
So I still absolutely detest this patch.
Why didn't the other - much simpler - patch work? The one Rik pointed to:
http://lkml.org/lkml/2010/4/30/198
and didn't do that _disgusting_ temporary anon_vma?
Alternatively, why don't we just take the anon_vma lock over this region,
so that rmap can't _walk_ the damn thing?
Linus
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-07 1:40 ` Linus Torvalds
@ 2010-05-07 1:57 ` KAMEZAWA Hiroyuki
2010-05-07 2:12 ` Linus Torvalds
2010-05-07 9:16 ` Mel Gorman
1 sibling, 1 reply; 34+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-05-07 1:57 UTC (permalink / raw)
To: Linus Torvalds
Cc: Mel Gorman, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Thu, 6 May 2010 18:40:39 -0700 (PDT)
Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
>
> On Fri, 7 May 2010, Mel Gorman wrote:
> >
> > Page migration requires rmap to be able to find all migration ptes
> > created by migration. If the second rmap_walk clearing migration PTEs
> > misses an entry, it is left dangling causing a BUG_ON to trigger during
> > fault. For example;
>
> So I still absolutely detest this patch.
>
> Why didn't the other - much simpler - patch work? The one Rik pointed to:
>
> http://lkml.org/lkml/2010/4/30/198
>
> and didn't do that _disgusting_ temporary anon_vma?
>
I vote for the simple one rather than the temporary anon_vma. IIUC, this patch was
selected so as not to leak exec's problem out to mm/ via a magical check.
> Alternatively, why don't we just take the anon_vma lock over this region,
> so that rmap can't _walk_ the damn thing?
>
IIUC, move_page_tables() may call "page table allocation" and it cannot be
done under spinlock.
Thanks,
-Kame
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-07 1:57 ` KAMEZAWA Hiroyuki
@ 2010-05-07 2:12 ` Linus Torvalds
2010-05-07 4:19 ` KAMEZAWA Hiroyuki
2010-05-09 19:21 ` Mel Gorman
0 siblings, 2 replies; 34+ messages in thread
From: Linus Torvalds @ 2010-05-07 2:12 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: Mel Gorman, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Fri, 7 May 2010, KAMEZAWA Hiroyuki wrote:
>
> IIUC, move_page_tables() may call "page table allocation" and it cannot be
> done under spinlock.
Bah. It only does a "alloc_new_pmd()", and we could easily move that out
of the loop and pre-allocate the pmd's.
If that's the only reason, then it's a really weak one, methinks.
Linus
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-07 2:12 ` Linus Torvalds
@ 2010-05-07 4:19 ` KAMEZAWA Hiroyuki
2010-05-07 14:18 ` Linus Torvalds
2010-05-09 19:21 ` Mel Gorman
1 sibling, 1 reply; 34+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-05-07 4:19 UTC (permalink / raw)
To: Linus Torvalds
Cc: Mel Gorman, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Thu, 6 May 2010 19:12:59 -0700 (PDT)
Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
>
> On Fri, 7 May 2010, KAMEZAWA Hiroyuki wrote:
> >
> > IIUC, move_page_tables() may call "page table allocation" and it cannot be
> > done under spinlock.
>
> Bah. It only does a "alloc_new_pmd()", and we could easily move that out
> of the loop and pre-allocate the pmd's.
>
> If that's the only reason, then it's a really weak one, methinks.
>
Hmm, is this too slow? This is the simplest one I have.
==
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
exec()'s shift_arg_pages() calls vma_adjust() and move_page_tables() directly.
While this is in progress, the rmap information (i.e. the page <-> pte <->
address <-> vma relationship) is inconsistent. This causes a bug where
rmap_walk() cannot find valid ptes it must visit.
Considering the race, move_vma() does the right thing, so using move_vma()
instead of bare move_page_tables() is one option.
Pros.
- all races are handled within mm/. Very simple.
Cons.
- this may make exec() a bit slower.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
fs/exec.c | 64 +++++++++++++++--------------------------------------
include/linux/mm.h | 6 ++--
mm/mremap.c | 4 +--
3 files changed, 24 insertions(+), 50 deletions(-)
Index: linux-2.6.34-rc5-mm1/fs/exec.c
===================================================================
--- linux-2.6.34-rc5-mm1.orig/fs/exec.c
+++ linux-2.6.34-rc5-mm1/fs/exec.c
@@ -486,14 +486,7 @@ EXPORT_SYMBOL(copy_strings_kernel);
/*
* During bprm_mm_init(), we create a temporary stack at STACK_TOP_MAX. Once
* the binfmt code determines where the new stack should reside, we shift it to
- * its final location. The process proceeds as follows:
- *
- * 1) Use shift to calculate the new vma endpoints.
- * 2) Extend vma to cover both the old and new ranges. This ensures the
- * arguments passed to subsequent functions are consistent.
- * 3) Move vma's page tables to the new range.
- * 4) Free up any cleared pgd range.
- * 5) Shrink the vma to cover only the new range.
+ * its final location.
*/
static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
{
@@ -503,7 +496,7 @@ static int shift_arg_pages(struct vm_are
unsigned long length = old_end - old_start;
unsigned long new_start = old_start - shift;
unsigned long new_end = old_end - shift;
- struct mmu_gather *tlb;
+ unsigned long ret;
BUG_ON(new_start > new_end);
@@ -514,45 +507,23 @@ static int shift_arg_pages(struct vm_are
if (vma != find_vma(mm, new_start))
return -EFAULT;
- /*
- * cover the whole range: [new_start, old_end)
- */
- if (vma_adjust(vma, new_start, old_end, vma->vm_pgoff, NULL))
- return -ENOMEM;
-
- /*
- * move the page tables downwards, on failure we rely on
- * process cleanup to remove whatever mess we made.
- */
- if (length != move_page_tables(vma, old_start,
- vma, new_start, length))
- return -ENOMEM;
+ if (new_end > old_start) { /* overlap */
+ unsigned long part_len = new_end - old_start;
+ ret = move_vma(vma, old_start, part_len, part_len, new_start);
- lru_add_drain();
- tlb = tlb_gather_mmu(mm, 0);
- if (new_end > old_start) {
- /*
- * when the old and new regions overlap clear from new_end.
- */
- free_pgd_range(tlb, new_end, old_end, new_end,
- vma->vm_next ? vma->vm_next->vm_start : 0);
+ if (ret != new_start)
+ return -ENOMEM;
+ /* old_vma is splitted.. */
+ vma = find_vma(mm, old_start + part_len);
+ ret = move_vma(vma, old_start + part_len, length - part_len,
+ length - part_len, new_start + part_len);
+ if (ret != new_start + part_len)
+ return -ENOMEM;
} else {
- /*
- * otherwise, clean from old_start; this is done to not touch
- * the address space in [new_end, old_start) some architectures
- * have constraints on va-space that make this illegal (IA64) -
- * for the others its just a little faster.
- */
- free_pgd_range(tlb, old_start, old_end, new_end,
- vma->vm_next ? vma->vm_next->vm_start : 0);
+ ret = move_vma(vma, old_start, length, length, new_start);
+ if (ret != new_start)
+ return -ENOMEM;
}
- tlb_finish_mmu(tlb, new_end, old_end);
-
- /*
- * Shrink the vma to just the new range. Always succeeds.
- */
- vma_adjust(vma, new_start, new_end, vma->vm_pgoff, NULL);
-
return 0;
}
@@ -625,9 +596,12 @@ int setup_arg_pages(struct linux_binprm
/* Move stack pages down in memory. */
if (stack_shift) {
+ unsigned long new_start = vma->vm_start - stack_shift;
ret = shift_arg_pages(vma, stack_shift);
if (ret)
goto out_unlock;
+ vma = find_vma(mm, new_start);
+ bprm->vma = vma;
}
stack_expand = 131072UL; /* randomly 32*4k (or 2*64k) pages */
Index: linux-2.6.34-rc5-mm1/include/linux/mm.h
===================================================================
--- linux-2.6.34-rc5-mm1.orig/include/linux/mm.h
+++ linux-2.6.34-rc5-mm1/include/linux/mm.h
@@ -856,9 +856,9 @@ int set_page_dirty_lock(struct page *pag
int set_page_dirty_notag(struct page *page);
int clear_page_dirty_for_io(struct page *page);
-extern unsigned long move_page_tables(struct vm_area_struct *vma,
- unsigned long old_addr, struct vm_area_struct *new_vma,
- unsigned long new_addr, unsigned long len);
+extern unsigned long move_vma(struct vm_area_struct *vma,
+ unsigned long old_addr, unsigned long old_length,
+ unsigned long new_length, unsigned long new_addr);
extern unsigned long do_mremap(unsigned long addr,
unsigned long old_len, unsigned long new_len,
unsigned long flags, unsigned long new_addr);
Index: linux-2.6.34-rc5-mm1/mm/mremap.c
===================================================================
--- linux-2.6.34-rc5-mm1.orig/mm/mremap.c
+++ linux-2.6.34-rc5-mm1/mm/mremap.c
@@ -128,7 +128,7 @@ static void move_ptes(struct vm_area_str
#define LATENCY_LIMIT (64 * PAGE_SIZE)
-unsigned long move_page_tables(struct vm_area_struct *vma,
+static unsigned long move_page_tables(struct vm_area_struct *vma,
unsigned long old_addr, struct vm_area_struct *new_vma,
unsigned long new_addr, unsigned long len)
{
@@ -162,7 +162,7 @@ unsigned long move_page_tables(struct vm
return len + old_addr - old_end; /* how much done */
}
-static unsigned long move_vma(struct vm_area_struct *vma,
+unsigned long move_vma(struct vm_area_struct *vma,
unsigned long old_addr, unsigned long old_len,
unsigned long new_len, unsigned long new_addr)
{
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 0/2] Fix migration races in rmap_walk() V7
2010-05-06 23:20 [PATCH 0/2] Fix migration races in rmap_walk() V7 Mel Gorman
2010-05-06 23:20 ` [PATCH 1/2] mm,migration: Prevent rmap_walk_[anon|ksm] seeing the wrong VMA information Mel Gorman
2010-05-06 23:20 ` [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack Mel Gorman
@ 2010-05-07 8:13 ` KAMEZAWA Hiroyuki
2 siblings, 0 replies; 34+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-05-07 8:13 UTC (permalink / raw)
To: Mel Gorman
Cc: Andrew Morton, Linux-MM, LKML, Minchan Kim, Christoph Lameter,
Andrea Arcangeli, Rik van Riel, Linus Torvalds, Peter Zijlstra
On Fri, 7 May 2010 00:20:51 +0100
Mel Gorman <mel@csn.ul.ie> wrote:
> No major change since V6, just looks nicer.
>
Seems to work well in my test.
I think there are no unknown pitfalls. I'd be glad if you could double-check my
concern about patch [1/2].
Thanks,
-Kame
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-07 1:40 ` Linus Torvalds
2010-05-07 1:57 ` KAMEZAWA Hiroyuki
@ 2010-05-07 9:16 ` Mel Gorman
1 sibling, 0 replies; 34+ messages in thread
From: Mel Gorman @ 2010-05-07 9:16 UTC (permalink / raw)
To: Linus Torvalds
Cc: Andrew Morton, Linux-MM, LKML, Minchan Kim, KAMEZAWA Hiroyuki,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Thu, May 06, 2010 at 06:40:39PM -0700, Linus Torvalds wrote:
> On Fri, 7 May 2010, Mel Gorman wrote:
> >
> > Page migration requires rmap to be able to find all migration ptes
> > created by migration. If the second rmap_walk clearing migration PTEs
> > misses an entry, it is left dangling causing a BUG_ON to trigger during
> > fault. For example;
>
> So I still absolutely detest this patch.
>
> Why didn't the other - much simpler - patch work? The one Rik pointed to:
>
> http://lkml.org/lkml/2010/4/30/198
>
Oh, it works, but it depends on a magic check, is_vma_temporary_stack().
Kamezawa had a variant that did not depend on magic, but it increased the
size of vm_area_struct. That magic check was just something that would be
easy to break without anyone spotting it, hence the temporary VMA instead.
> and didn't do that _disgusting_ temporary anon_vma?
>
> Alternatively, why don't we just take the anon_vma lock over this region,
> so that rmap can't _walk_ the damn thing?
>
Because move_page_tables() calls into the page allocator. I'll create a
version that allocates the PMDs in advance and see what it looks like.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-07 4:19 ` KAMEZAWA Hiroyuki
@ 2010-05-07 14:18 ` Linus Torvalds
0 siblings, 0 replies; 34+ messages in thread
From: Linus Torvalds @ 2010-05-07 14:18 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: Mel Gorman, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Fri, 7 May 2010, KAMEZAWA Hiroyuki wrote:
>
> Hmm, is this too slow ? This is the simplest one I have.
Well, it may be as slow as (or slower than) Andrea's, but at least it is
_clean_ and actually removes code. So if we can't do it better, I'd
certainly prefer this to the horribly hacky one.
That said, I still think we could easily just split up
"move_page_tables()" into two functions - one that just does the page
table allocation, and one that actually moves the entries.
In fact, I think that would even clean up the error case for move_vma()
too - the page table entry movement itself could never fail, so you never
end up with that insane "move back" case.
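Roughly, signature-wise (names only illustrate the split, not actual code):

int prepare_move_page_tables(struct vm_area_struct *vma,
		unsigned long old_addr, struct vm_area_struct *new_vma,
		unsigned long new_addr, unsigned long len);
	/* allocates the destination page tables; may sleep, may fail */

void move_page_tables_prealloced(struct vm_area_struct *vma,
		unsigned long old_addr, struct vm_area_struct *new_vma,
		unsigned long new_addr, unsigned long len);
	/* pure pte movement over existing page tables; cannot fail */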
Linus
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 1/2] mm,migration: Prevent rmap_walk_[anon|ksm] seeing the wrong VMA information
2010-05-07 0:56 ` KAMEZAWA Hiroyuki
@ 2010-05-07 16:26 ` Mel Gorman
0 siblings, 0 replies; 34+ messages in thread
From: Mel Gorman @ 2010-05-07 16:26 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: Andrew Morton, Linux-MM, LKML, Minchan Kim, Christoph Lameter,
Andrea Arcangeli, Rik van Riel, Linus Torvalds, Peter Zijlstra
On Fri, May 07, 2010 at 09:56:54AM +0900, KAMEZAWA Hiroyuki wrote:
> On Fri, 7 May 2010 00:20:52 +0100
> Mel Gorman <mel@csn.ul.ie> wrote:
>
> > vma_adjust() is updating anon VMA information without locks being taken.
> > In contrast, file-backed mappings use the i_mmap_lock and this lack of
> > locking can result in races with users of rmap_walk such as page migration.
> > vma_address() can return -EFAULT for an address that will soon be valid.
> > For migration, this potentially leaves a dangling migration PTE behind
> > which can later cause a BUG_ON to trigger when the page is faulted in.
> >
> > <SNIP>
>
I'm sorry but I don't think I understand this. Could you help me?
>
Hopefully.
> IIUC, anon_vma_chain is linked as 2D-mesh
>
> anon_vma1 anon_vma2 anon_vma3
> | | |
> vma1 ----- 1 -------- 2 --------- 3 -----
> | | |
> vma2 ----- 4 -------- 5 ---------- 6 -----
> | | |
> vma3 ----- 7 -------- 8 ---------- 9 -----
>
>
> Here,
> * vertical link is anon_vma->head, avc->same_anon_vma link.
> * horizontal link is vma->anon_vma_chain, avc->same_vma link.
> * 1-9 are avcs.
>
I don't think this is quite right for how the "root" anon_vma is
discovered. The ordering of same_vma is such that the prev pointer
points to the root anon_vma as described in __page_set_anon_rmap() but
even so...
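(For reference, each numbered link in that mesh is an anon_vma_chain; roughly,
from include/linux/rmap.h in this kernel:

struct anon_vma_chain {
	struct vm_area_struct *vma;
	struct anon_vma *anon_vma;
	struct list_head same_vma;	/* locked by mmap_sem & page_table_lock */
	struct list_head same_anon_vma;	/* locked by anon_vma->lock */
};

so the "horizontal" walk is vma->anon_vma_chain via same_vma and the
"vertical" walk is anon_vma->head via same_anon_vma.)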
> When scanning pages, we may see a page whose anon_vma is anon_vma1
> or anon_vma2 or anon_vma3.
>
When we are walking the list for the anon_vma, we also hold the page
lock and what we're really interested in are ptes mapping that page.
> When we see anon_vma3 in page->mapping, we lock anon_vma1 and chase
> avc1->avc4->avc7. Then, start from vma1. Next, we visit vma2, we lock anon_vma2.
> At the last, we visit vma3 and lock anon_vma3.....And all are done under
> anon_vma1->lock. Right ?
>
assuming it's the root lock, sure.
> Hmm, one concern is
> anon_vma3 -> avc3 -> vma1 -> avc1 -> anon_vma1 chasing.
>
> What will prevent vma1 disappear right after releasling anon_vma3->lock ?
>
What does it matter if it disappeared? If it did, it was because it was torn
down, the PTEs are also gone and a user of rmap_walk should have stopped
caring. Right?
> ex)
> a1) At we chase, anon_vma3 -> avc3 -> vma1 -> anon_vma1, link was following.
>
> anon_vma1 anon_vma2 anon_vma3
> | | |
> vma1 ----- 1 -------- 2 --------- 3 -----
> | | |
> vma2 ----- 4 -------- 5 ---------- 6 -----
> | | |
> vma3 ----- 7 -------- 8 ---------- 9 -----
>
> We hold lock on anon_vma3.
>
> a2) After releasing anon_vma3 lock. vma1 can be unlinked.
>
> anon_vma1 anon_vma2 anon_vma3
> | | |
> vma1 removed.
> | | |
> vma2 ----- 4 -------- 5 ---------- 6 -----
> | | |
> vma3 ----- 7 -------- 8 ---------- 9 -----
>
> But we know anon_vma1->head is not empty, and it's accessable.
> Then, no problem for our purpose. Right ?
>
As the PTEs are also gone, I'm not seeing the problem.
> b1) Another thinking.
>
> At we chase, anon_vma3 -> avc3 -> vma1 -> anon_vma1, link was following.
>
> anon_vma1 anon_vma2 anon_vma3
> | | |
> vma1 ----- 1 -------- 2 --------- 3 -----
> | | |
> vma2 ----- 4 -------- 5 ---------- 6 -----
> | | |
> vma3 ----- 7 -------- 8 ---------- 9 -----
>
> We hold lock on anon_vma3. So,
>
> anon_vma1 anon_vma2 anon_vma3
> | | |
> vma1 ----removed -----removed ------ 3 -----
> | | |
> vma2 ----- 4 -------- 5 ---------- 6 -----
> | | |
> vma3 ----- 7 -------- 8 ---------- 9 -----
>
> we may see half-broken link while we take anon_vma3->lock. In this case,
> anon_vma1 can be caugt.
>
> Don't we need this ?
>
>
> void unlink_anon_vmas(struct vm_area_struct *vma)
> {
> struct anon_vma_chain *avc, *next;
>
> /* Unlink each anon_vma chained to the VMA. */
> - list_for_each_entry_safe_reverse(avc, next, &vma->anon_vma_chain, same_vma) {
This was meant to be list_for_each_entry_safe(....)
> + list_for_each_entry_safe_reverse(avc, next, &vma->anon_vma_chain, same_vma) {
> anon_vma_unlink(avc);
> list_del(&avc->same_vma);
> anon_vma_chain_free(avc);
> }
> }
>
> head avc should be removed last... Hmm ? I'm sorry if all are
> in correct order already.
>
I think the ordering is ok. The rmap_walk may find a situation where the
anon_vmas are being cleaned up but again as the page tables are going
away at this point, the contents of the PTEs are no longer important.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 1/2] mm,migration: Prevent rmap_walk_[anon|ksm] seeing the wrong VMA information
2010-05-06 23:20 ` [PATCH 1/2] mm,migration: Prevent rmap_walk_[anon|ksm] seeing the wrong VMA information Mel Gorman
2010-05-07 0:56 ` KAMEZAWA Hiroyuki
@ 2010-05-08 15:39 ` Andrea Arcangeli
2010-05-08 17:02 ` Linus Torvalds
2010-05-09 19:23 ` Mel Gorman
1 sibling, 2 replies; 34+ messages in thread
From: Andrea Arcangeli @ 2010-05-08 15:39 UTC (permalink / raw)
To: Mel Gorman
Cc: Andrew Morton, Linux-MM, LKML, Minchan Kim, KAMEZAWA Hiroyuki,
Christoph Lameter, Rik van Riel, Linus Torvalds, Peter Zijlstra
On Fri, May 07, 2010 at 12:20:52AM +0100, Mel Gorman wrote:
> @@ -1655,6 +1655,7 @@ int rmap_walk_ksm(struct page *page, int (*rmap_one)(struct page *,
> {
> struct stable_node *stable_node;
> struct hlist_node *hlist;
> + struct anon_vma *nested_anon_vma = NULL;
> struct rmap_item *rmap_item;
> int ret = SWAP_AGAIN;
> int search_new_forks = 0;
> @@ -1671,9 +1672,16 @@ again:
> struct anon_vma_chain *vmac;
> struct vm_area_struct *vma;
>
> - spin_lock(&anon_vma->lock);
> + anon_vma = anon_vma_lock_root(anon_vma);
> + if (nested_anon_vma) {
> + spin_unlock(&nested_anon_vma->lock);
> + nested_anon_vma = NULL;
> + }
> list_for_each_entry(vmac, &anon_vma->head, same_anon_vma) {
> vma = vmac->vma;
> + nested_anon_vma = anon_vma_lock_nested(nested_anon_vma,
> + vma->anon_vma, anon_vma);
> +
> if (rmap_item->address < vma->vm_start ||
> rmap_item->address >= vma->vm_end)
> continue;
> @@ -1368,19 +1444,26 @@ static int rmap_walk_anon(struct page *page, int (*rmap_one)(struct page *,
> * are holding mmap_sem. Users without mmap_sem are required to
> * take a reference count to prevent the anon_vma disappearing
> */
> - anon_vma = page_anon_vma(page);
> + anon_vma = page_anon_vma_lock_root(page);
> if (!anon_vma)
> return ret;
> - spin_lock(&anon_vma->lock);
> list_for_each_entry(avc, &anon_vma->head, same_anon_vma) {
> struct vm_area_struct *vma = avc->vma;
> - unsigned long address = vma_address(page, vma);
> - if (address == -EFAULT)
> - continue;
> - ret = rmap_one(page, vma, address, arg);
> + unsigned long address;
> +
> + nested_anon_vma = anon_vma_lock_nested(nested_anon_vma,
> + vma->anon_vma, anon_vma);
> + address = vma_address(page, vma);
> + if (address != -EFAULT)
> + ret = rmap_one(page, vma, address, arg);
> +
> if (ret != SWAP_AGAIN)
> break;
> }
> +
> + if (nested_anon_vma)
> + spin_unlock(&nested_anon_vma->lock);
> +
> spin_unlock(&anon_vma->lock);
> return ret;
> }
I already told Mel by PM. This degrades the new-anon_vma code to an
even _slower_ mode than the old anon-vma code in 2.6.32 (the same in
math complexity terms but slower in practice) for migrate. Furthermore
page_referenced() may now return true even if there are young ptes
that simply get lost in the rmap walk.
The new anon-vma code is mostly relevant for migrate and memory
compaction and transparent hugepage support where it gets invoked even
if there's plenty of free memory and no I/O load at all. So whatever
you save during swap, you'll lose while transparent hugepage support
allocates the pages. So the above fix renders the whole effort
pointless as far as I'm concerned.
I think Rik's patch is the only sensible solution that won't
invalidate the whole effort for transparent hugepage.
About how to adapt split_huge_page to the root anon_vma, I haven't even
thought about it yet. All I can tell you right now is that
wait_split_huge_page can be changed to wait for the pmd_trans_splitting
(or alternatively the pmd_trans_huge) bit to go away in a
cpu_relax()/barrier() loop. The page->mapping/anon_vma->lock is
also used to serialize against parallel split_huge_page, but I guess
taking the root anon_vma lock in split_huge_page() should work
fine. Just I'm not going to do that except maybe in a for-mainline
branch. I'll keep the master branch on the old 2.6.32 anon-vma code,
and the anon_vma branch on Rik's fix, which lets the new anon-vma code
be taken advantage of (so it's not purely gratuitous complexity
added for nothing) also in migrate.c from memory compaction (which runs
24/7 on all my systems and is much more frequent than the swap rmap
walks, which in fact never happen here), and in the rmap walks in
split_huge_page too (which are not so frequent, especially after
Johannes implemented native mprotect on hugepages, but are obviously
still more frequent than swap).
I'm simply not going to support the degradation to the root anon_vma
complexity in aa.git, except for strict merging requirements, and I'll
keep backing it out in aa.git, or rather stick to the old anon-vma
code, which at least is much simpler and saves memory too (as there are
many fewer anon_vmas, no avcs, and fewer useless locks).
What I instead already mentioned once was to add a _shared_ lock so
you share the spinlock across the whole forest but you keep walking
the right page->anon_vma->same_anon_vma! The moment you walk
page->anon_vma->root_anon_vma->same_anon_vma you lose my support, as it
makes the whole effort pointless compared to 2.6.32 as far as 99% of my
workloads are concerned.
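To be explicit about what I mean by a shared lock, a rough sketch (not code
from any tree) would be: every anon_vma in the forest points at one root whose
spinlock is the only lock ever taken, while the walk itself still follows
page->anon_vma->head:

struct anon_vma {
	struct anon_vma *root;		/* shared lock lives in the root */
	spinlock_t lock;		/* only the root's lock is ever taken */
	struct list_head head;		/* same same_anon_vma chain as today */
};

static inline void anon_vma_lock(struct anon_vma *anon_vma)
{
	spin_lock(&anon_vma->root->lock);
}

static inline void anon_vma_unlock(struct anon_vma *anon_vma)
{
	spin_unlock(&anon_vma->root->lock);
}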
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 1/2] mm,migration: Prevent rmap_walk_[anon|ksm] seeing the wrong VMA information
2010-05-08 15:39 ` Andrea Arcangeli
@ 2010-05-08 17:02 ` Linus Torvalds
2010-05-08 18:04 ` Andrea Arcangeli
2010-05-09 19:23 ` Mel Gorman
1 sibling, 1 reply; 34+ messages in thread
From: Linus Torvalds @ 2010-05-08 17:02 UTC (permalink / raw)
To: Andrea Arcangeli
Cc: Mel Gorman, Andrew Morton, Linux-MM, LKML, Minchan Kim,
KAMEZAWA Hiroyuki, Christoph Lameter, Rik van Riel,
Peter Zijlstra
On Sat, 8 May 2010, Andrea Arcangeli wrote:
>
> I'm simply not going to support the degradation to the root anon_vma
> complexity in aa.git, except for strict merging requirements [ ..]
Goodie. That makes things easier for me - there's never going to be any
issue of whether I need to even _think_ about merging the piece of crap.
In other words - if people want anything like that merged, you had better
work on cleaning up the locking. Because I absolutely WILL NOT apply any
of the f*cking broken locking patches that I've seen, when I've personally
told people how to fix the thing.
In other words, it's all _your_ problem.
Linus
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 1/2] mm,migration: Prevent rmap_walk_[anon|ksm] seeing the wrong VMA information
2010-05-08 17:02 ` Linus Torvalds
@ 2010-05-08 18:04 ` Andrea Arcangeli
2010-05-08 19:51 ` Linus Torvalds
0 siblings, 1 reply; 34+ messages in thread
From: Andrea Arcangeli @ 2010-05-08 18:04 UTC (permalink / raw)
To: Linus Torvalds
Cc: Mel Gorman, Andrew Morton, Linux-MM, LKML, Minchan Kim,
KAMEZAWA Hiroyuki, Christoph Lameter, Rik van Riel,
Peter Zijlstra
On Sat, May 08, 2010 at 10:02:50AM -0700, Linus Torvalds wrote:
>
>
> On Sat, 8 May 2010, Andrea Arcangeli wrote:
> >
> > I'm simply not going to support the degradation to the root anon_vma
> > complexity in aa.git, except for strict merging requirements [ ..]
>
> Goodie. That makes things easier for me - there's never going to be any
> issue of whether I need to even _think_ about merging the piece of crap.
>
> In other words - if people want anything like that merged, you had better
> work on cleaning up the locking. Because I absolutely WILL NOT apply any
> of the f*ckign broken locking patches that I've seen, when I've personally
> told people how to fix the thing.
There is no broken (as in kernel crashing) locking in my THP
patches. It is more stable than mainline as far as migration is
concerned.
And the patch I think you're calling "broken" - the one that fixes the
anon-vma locking - wasn't written by me (I only fixed the exec vs migrate
race in the way _I_ prefer and still prefer, which is another bug not
related to the new anon-vma changes).
If the other patch you call broken is this one:
http://git.kernel.org/?p=linux/kernel/git/andrea/aa.git;a=patch;h=f42cfe59ed4ea474ea22aeec033aad408e1fb9d2
then go figure how broken it is to make all these anon-vma changes, with all
the complexity and risk involved, for the sole benefit of running the rmap
walks faster, when in the end the very function called rmap_walk() runs
even slower than in 2.6.32. _That_ is broken in my view, and I'm not
saying I like the patch in the above link (I suggested a shared lock
from the start if you check), but it surely is less broken than what
you're discussing here about the root anon-vma.
> In other words, it's all _your_ problem.
And as I already said in the previous email I sent in this thread, I
will also maintain a for-mainline tree trying to support this
root-anon-vma thing in case you merge it by mistake, just to avoid
these new anon-vma locking changes being involved in the merging
issues of THP.
How you mix the merging of THP with how to solve this migrate crash,
which only affects the mainline new anon-vma code without THP, I've no
idea, especially considering I mentioned I will support whatever
anon-vma locking goes into mainline in a separate branch
(clearly it won't be the one I recommend using, for the very reasons
explained above, but I'll provide it for merging).
Also note, if you refuse to merge THP it's not my problem.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 1/2] mm,migration: Prevent rmap_walk_[anon|ksm] seeing the wrong VMA information
2010-05-08 18:04 ` Andrea Arcangeli
@ 2010-05-08 19:51 ` Linus Torvalds
0 siblings, 0 replies; 34+ messages in thread
From: Linus Torvalds @ 2010-05-08 19:51 UTC (permalink / raw)
To: Andrea Arcangeli
Cc: Mel Gorman, Andrew Morton, Linux-MM, LKML, Minchan Kim,
KAMEZAWA Hiroyuki, Christoph Lameter, Rik van Riel,
Peter Zijlstra
On Sat, 8 May 2010, Andrea Arcangeli wrote:
>
> There is no broken (as in kernel crashing) locking in my THP
> patches.
It's not about crashing. It's about being a totally unmaintainable mess,
with ad-hoc temporary allocations, and loops over an unknown number of
spinlocks.
That's _broken_. B. R. O. K. E. N.
And in all cases there are fixes that I've pointed out. If you can't see
that, then that's _your_ problem. If you (or others) want your code
merged, then it had better _fix_ the total disaster before merging. It's
that simple.
The "lock root" thing you complain should also be easily fixable, by
keeping the root lock a separate issue from walking the actual anon_vma
(ie walk the anon_vma, but lock the root). You still don't have to lock
the whole list.
Linus
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-07 2:12 ` Linus Torvalds
2010-05-07 4:19 ` KAMEZAWA Hiroyuki
@ 2010-05-09 19:21 ` Mel Gorman
2010-05-09 19:56 ` Linus Torvalds
2010-05-10 0:32 ` KAMEZAWA Hiroyuki
1 sibling, 2 replies; 34+ messages in thread
From: Mel Gorman @ 2010-05-09 19:21 UTC (permalink / raw)
To: Linus Torvalds
Cc: KAMEZAWA Hiroyuki, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Thu, May 06, 2010 at 07:12:59PM -0700, Linus Torvalds wrote:
>
>
> On Fri, 7 May 2010, KAMEZAWA Hiroyuki wrote:
> >
> > IIUC, move_page_tables() may call "page table allocation" and it cannot be
> > done under spinlock.
>
> Bah. It only does a "alloc_new_pmd()", and we could easily move that out
> of the loop and pre-allocate the pmd's.
>
> If that's the only reason, then it's a really weak one, methinks.
>
It turns out not to be easy to do the preallocation of PUDs, PMDs and PTEs
that move_page_tables() needs. To avoid overallocating, it has to follow the
same logic as move_page_tables(), duplicating some code in the process. The
ugliest aspect of all is passing those pre-allocated pages back into
move_page_tables(), where they need to be passed down to functions such as
__pte_alloc(). It gets extremely messy.
I stopped working on it about half way through as it was already too ugly
to live and would have similar cost to Kamezawa's much more straight-forward
approach of using move_vma().
While using move_vma is straight-forward and solves the problem, it's
not as cheap as Andrea's solution. Andrea allocates a temporary VMA and
puts it on a list and very little else. It didn't show up any problems
in microbenchmarks. Calling move_vma does a lot more work particularly in
copy_vma and this slows down exec.
With Kamezawa's patch, kernbench was fine on wall time but, in System Time,
it slowed down by 1.48% in comparison to Andrea's slowdown of 0.64%[1].
aim9 slowed as well. Kamezawa's slowed it by 2.77% whereas Andrea's reported
a 2.58% speedup. While AIM9 is flaky and these figures are barely outside
the noise, calling move_vma() is obviously more expensive.
While my solution at http://lkml.org/lkml/2010/4/30/198 is cheapest as it
does not touch exec() at all, is_vma_temporary_stack() could be broken in
the future if any of the assumptions it makes change.
So what you have is an inverse relationship between magic and
performance. Mine has the most magic and is fastest. Kamezawa's has the
least magic but is slowest, and Andrea's has the goldilocks factor. Which do
you prefer?
[1] One caveat of the performance tests was that a lot of debugging such
as lockdep was enabled. Disabling these would give different results
but it should still be the case that calling move_vma is more expensive
than calling kmem_cache_alloc.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 1/2] mm,migration: Prevent rmap_walk_[anon|ksm] seeing the wrong VMA information
2010-05-08 15:39 ` Andrea Arcangeli
2010-05-08 17:02 ` Linus Torvalds
@ 2010-05-09 19:23 ` Mel Gorman
1 sibling, 0 replies; 34+ messages in thread
From: Mel Gorman @ 2010-05-09 19:23 UTC (permalink / raw)
To: Andrea Arcangeli
Cc: Andrew Morton, Linux-MM, LKML, Minchan Kim, KAMEZAWA Hiroyuki,
Christoph Lameter, Rik van Riel, Linus Torvalds, Peter Zijlstra
On Sat, May 08, 2010 at 05:39:22PM +0200, Andrea Arcangeli wrote:
> On Fri, May 07, 2010 at 12:20:52AM +0100, Mel Gorman wrote:
> > @@ -1655,6 +1655,7 @@ int rmap_walk_ksm(struct page *page, int (*rmap_one)(struct page *,
> > {
> > struct stable_node *stable_node;
> > struct hlist_node *hlist;
> > + struct anon_vma *nested_anon_vma = NULL;
> > struct rmap_item *rmap_item;
> > int ret = SWAP_AGAIN;
> > int search_new_forks = 0;
> > @@ -1671,9 +1672,16 @@ again:
> > struct anon_vma_chain *vmac;
> > struct vm_area_struct *vma;
> >
> > - spin_lock(&anon_vma->lock);
> > + anon_vma = anon_vma_lock_root(anon_vma);
> > + if (nested_anon_vma) {
> > + spin_unlock(&nested_anon_vma->lock);
> > + nested_anon_vma = NULL;
> > + }
> > list_for_each_entry(vmac, &anon_vma->head, same_anon_vma) {
> > vma = vmac->vma;
> > + nested_anon_vma = anon_vma_lock_nested(nested_anon_vma,
> > + vma->anon_vma, anon_vma);
> > +
> > if (rmap_item->address < vma->vm_start ||
> > rmap_item->address >= vma->vm_end)
> > continue;
> > @@ -1368,19 +1444,26 @@ static int rmap_walk_anon(struct page *page, int (*rmap_one)(struct page *,
> > * are holding mmap_sem. Users without mmap_sem are required to
> > * take a reference count to prevent the anon_vma disappearing
> > */
> > - anon_vma = page_anon_vma(page);
> > + anon_vma = page_anon_vma_lock_root(page);
> > if (!anon_vma)
> > return ret;
> > - spin_lock(&anon_vma->lock);
> > list_for_each_entry(avc, &anon_vma->head, same_anon_vma) {
> > struct vm_area_struct *vma = avc->vma;
> > - unsigned long address = vma_address(page, vma);
> > - if (address == -EFAULT)
> > - continue;
> > - ret = rmap_one(page, vma, address, arg);
> > + unsigned long address;
> > +
> > + nested_anon_vma = anon_vma_lock_nested(nested_anon_vma,
> > + vma->anon_vma, anon_vma);
> > + address = vma_address(page, vma);
> > + if (address != -EFAULT)
> > + ret = rmap_one(page, vma, address, arg);
> > +
> > if (ret != SWAP_AGAIN)
> > break;
> > }
> > +
> > + if (nested_anon_vma)
> > + spin_unlock(&nested_anon_vma->lock);
> > +
> > spin_unlock(&anon_vma->lock);
> > return ret;
> > }
>
> I already told Mel by PM. This degrades the new-anon_vma code to an
> even _slower_ mode than the old anon-vma code in 2.6.32 (the same in
> math complexity terms but slower in practice) for migrate. Furthermore
> page_referenced() may now return true even if there are young ptes
> that simply get lost in the rmap walk.
>
Yes, it does degrade it for migration but I don't consider that to be
the critical path at the moment. If it's important that it's faster,
then the thing to do is to take the anon_vma reference counting stuff
from Peter's patches, remove the RCU locking and always lock the root.
> The new anon-vma code is mostly relevant for migrate and memory
> compaction and transparent hugepage support where it gets invoked even
> if there's plenty of free memory and no I/O load at all. So whatever
> you save during swap, you'll lose while transparent hugepage support
> allocate the pages. So the above fix renders the whole effort
> pointless as far as I'm concerned.
>
But this patch can be incrementally progressed to remove RCU and always
lock the root, right?
> I think Rik's patch is the only sensible solution that won't
> invalidate the whole effort for transparent hugepage.
>
But it slows the mmap/munmap/etc path, right? And I don't recall seeing
a way that the cost of it could be reduced.
> About how to adapt split_huge_page to the root anon_vma I didn't even
> think about it yet. All I can tell you right now is that
> wait_split_huge_page can be changed to wait on the pmd_trans_splitting
> (or alternatively the pmd_trans_huge bit) bit to go away in a
> cpu_relax() barrier() loop. But the page->mapping/anon_vma->lock is
> also used to serialize against parallel split_huge_page but I guess
> taking the root anon_vma lock in split_huge_page() should work
> fine.
I would expect so. It would be preferable from your perspective to
always take the root lock on all paths but without the reference
counting, it requires RCU in the main paths.
> Just I'm not going to do that except maybe in a for-mainline
> branch, but I'll keep master branch with the old-anon-vma 2.6.32 code
> and the anon_vma_branch with Rik's fix that allows to take advantage
> of the new anon-vma code (so it's not purely gratuitous complexity
> added for nothing) also in migrate.c from memory compaction (that runs
> 24/7 on all my systems and it's much more frequent than the swap rmap
> walks that in fact never ever happens here), and in the rmap walks in
> split_huge_page too (which are not so frequent especially after
> Johannes implemented native mprotect on hugepages but it's obviously
> still more frequent than swap).
>
> I'm simply not going to support the degradation to the root anon_vma
> complexity in aa.git, except for strict merging requirements but I'll
> keep backing it out in aa.git or I'd rather stick to old-anon-vma
> code which at least is much simpler and saves memory too (as there are
> many fewer anon-vma and no avc, and less useless locks).
>
> What I instead already mentioned once was to add a _shared_ lock so
> you share the spinlock across the whole forest but you keep walking
> the right page->anon_vma->same_anon_vma!
Is the root lock not such a lock as long as it was always used?
> The moment you walk the
> page->anon_vma->root_anon_vma->same_anon_vma you lost my support as it
> makes the whole effort pointless compared to 2.6.32 as far as 99% of my
> workloads are concerned.
>
Can we agree on this patch even? Even on its own, I cannot reproduce
the migration-related problems racing with exec with this applied. Yes,
it adds some magic to exec, but the cost of exec remains the same.
==== CUT HERE ====
mm,migration: Avoid race between shift_arg_pages() and rmap_walk() during migration by not migrating temporary stacks
Page migration requires rmap to be able to find all ptes mapping a page
at all times, otherwise a migration entry can be instantiated but left
behind if the second rmap_walk fails to find the page. If such a page is
later faulted, migration_entry_to_page() will hit its BUG_ON because the
page is not locked, indicating that migration completed but the migration
PTE was never cleaned up. For example
[ 511.201534] kernel BUG at include/linux/swapops.h:105!
[ 511.201534] invalid opcode: 0000 [#1] PREEMPT SMP
[ 511.201534] last sysfs file: /sys/block/sde/size
[ 511.201534] CPU 0
[ 511.201534] Modules linked in: kvm_amd kvm dm_crypt loop i2c_piix4 serio_raw tpm_tis shpchp evdev tpm i2c_core pci_hotplug tpm_bios wmi processor button ext3 jbd mbcache dm_mirror dm_region_hash dm_log dm_snapshot dm_mod raid10 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 multipath linear md_mod sg sr_mod cdrom sd_mod ata_generic ahci libahci libata ide_pci_generic ehci_hcd ide_core r8169 mii ohci_hcd scsi_mod floppy thermal fan thermal_sys
[ 511.888526]
[ 511.888526] Pid: 20431, comm: date Not tainted 2.6.34-rc4-mm1-fix-swapops #6 GA-MA790GP-UD4H/GA-MA790GP-UD4H
[ 511.888526] RIP: 0010:[<ffffffff811094ff>] [<ffffffff811094ff>] migration_entry_wait+0xc1/0x129
[ 512.173545] RSP: 0018:ffff880037b979d8 EFLAGS: 00010246
[ 512.198503] RAX: ffffea0000000000 RBX: ffffea0001a2ba10 RCX: 0000000000029830
[ 512.329617] RDX: 0000000001a2ba10 RSI: ffffffff818264b8 RDI: 000000000ef45c3e
[ 512.380001] RBP: ffff880037b97a08 R08: ffff880078003f00 R09: ffff880037b979e8
[ 512.380001] R10: ffffffff8114ddaa R11: 0000000000000246 R12: 0000000037304000
[ 512.380001] R13: ffff88007a9ed5c8 R14: f800000000077a2e R15: 000000000ef45c3e
[ 512.380001] FS: 00007f3d346866e0(0000) GS:ffff880002200000(0000) knlGS:0000000000000000
[ 512.380001] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 512.380001] CR2: 00007fff6abec9c1 CR3: 0000000037a15000 CR4: 00000000000006f0
[ 512.380001] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 513.004775] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 513.068415] Process date (pid: 20431, threadinfo ffff880037b96000, task ffff880078003f00)
[ 513.068415] Stack:
[ 513.068415] ffff880037b97b98 ffff880037b97a18 ffff880037b97be8 0000000000000c00
[ 513.228068] <0> ffff880037304f60 00007fff6abec9c1 ffff880037b97aa8 ffffffff810e951a
[ 513.228068] <0> ffff880037b97a88 0000000000000246 0000000000000000 ffffffff8130c5c2
[ 513.228068] Call Trace:
[ 513.228068] [<ffffffff810e951a>] handle_mm_fault+0x3f8/0x76a
[ 513.228068] [<ffffffff8130c5c2>] ? do_page_fault+0x26a/0x46e
[ 513.228068] [<ffffffff8130c7a2>] do_page_fault+0x44a/0x46e
[ 513.720755] [<ffffffff8130875d>] ? trace_hardirqs_off_thunk+0x3a/0x3c
[ 513.789278] [<ffffffff8114ddaa>] ? load_elf_binary+0x14a1/0x192b
[ 513.851506] [<ffffffff813099b5>] page_fault+0x25/0x30
[ 513.851506] [<ffffffff8114ddaa>] ? load_elf_binary+0x14a1/0x192b
[ 513.851506] [<ffffffff811c1e27>] ? strnlen_user+0x3f/0x57
[ 513.851506] [<ffffffff8114de33>] load_elf_binary+0x152a/0x192b
[ 513.851506] [<ffffffff8111329b>] search_binary_handler+0x173/0x313
[ 513.851506] [<ffffffff8114c909>] ? load_elf_binary+0x0/0x192b
[ 513.851506] [<ffffffff81114896>] do_execve+0x219/0x30a
[ 513.851506] [<ffffffff8111887f>] ? getname+0x14d/0x1b3
[ 513.851506] [<ffffffff8100a5c6>] sys_execve+0x43/0x5e
[ 514.483501] [<ffffffff8100320a>] stub_execve+0x6a/0xc0
[ 514.548357] Code: 74 05 83 f8 1f 75 68 48 b8 ff ff ff ff ff ff ff 07 48 21 c2 48 b8 00 00 00 00 00 ea ff ff 48 6b d2 38 48 8d 1c 02 f6 03 01 75 04 <0f> 0b eb fe 8b 4b 08 48 8d 73 08 85 c9 74 35 8d 41 01 89 4d e0
[ 514.704292] RIP [<ffffffff811094ff>] migration_entry_wait+0xc1/0x129
[ 514.808221] RSP <ffff880037b979d8>
[ 514.906179] ---[ end trace 4f88495edc224d6b ]---
There is a race between shift_arg_pages and migration that triggers this bug.
A temporary stack is set up during exec and later moved. If migration moves
a page in the temporary stack and the VMA is then removed before migration
completes, the migration PTE may not be found, leading to a BUG when the
stack is faulted.
Ideally, shift_arg_pages must run atomically with respect to rmap_walk
by holding the anon_vma lock, but this is problematic as pages must be
allocated for page tables, which cannot happen with a spinlock held. Instead,
this patch makes migration skip processes that are in exec, on the assumption
that a VMA with stack flags set and map_count == 1 belongs to an exec that
has not finalised the temporary stack yet, so its pages are not migrated.
Memory hot-remove will try again, sys_move_pages() would not be operating
during exec() time, and memory compaction will just continue to another page
without concern.
[kamezawa.hiroyu@jp.fujitsu.com: Idea for having migration skip temporary stacks]
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
mm/rmap.c | 30 +++++++++++++++++++++++++++++-
1 files changed, 29 insertions(+), 1 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index f95b66d..3bb6c9e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1143,6 +1143,20 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
return ret;
}
+static bool is_vma_temporary_stack(struct vm_area_struct *vma)
+{
+ int maybe_stack = vma->vm_flags & (VM_GROWSDOWN | VM_GROWSUP);
+
+ if (!maybe_stack)
+ return false;
+
+ /* If only the stack is mapped, assume exec is in progress */
+ if (vma->vm_mm->map_count == 1)
+ return true;
+
+ return false;
+}
+
/**
* try_to_unmap_anon - unmap or unlock anonymous page using the object-based
* rmap method
@@ -1171,7 +1185,21 @@ static int try_to_unmap_anon(struct page *page, enum ttu_flags flags)
list_for_each_entry(avc, &anon_vma->head, same_anon_vma) {
struct vm_area_struct *vma = avc->vma;
- unsigned long address = vma_address(page, vma);
+ unsigned long address;
+
+ /*
+ * During exec, a temporary VMA is setup and later moved.
+ * The VMA is moved under the anon_vma lock but not the
+ * page tables leading to a race where migration cannot
+ * find the migration ptes. Rather than increasing the
+ * locking requirements of exec(), migration skips
+ * temporary VMAs until after exec() completes.
+ */
+ if (PAGE_MIGRATION && (flags & TTU_MIGRATION) &&
+ is_vma_temporary_stack(vma))
+ continue;
+
+ address = vma_address(page, vma);
if (address == -EFAULT)
continue;
ret = try_to_unmap_one(page, vma, address, flags);
--
1.6.5
^ permalink raw reply related [flat|nested] 34+ messages in thread
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-09 19:21 ` Mel Gorman
@ 2010-05-09 19:56 ` Linus Torvalds
2010-05-09 20:06 ` Linus Torvalds
` (3 more replies)
2010-05-10 0:32 ` KAMEZAWA Hiroyuki
1 sibling, 4 replies; 34+ messages in thread
From: Linus Torvalds @ 2010-05-09 19:56 UTC (permalink / raw)
To: Mel Gorman
Cc: KAMEZAWA Hiroyuki, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Sun, 9 May 2010, Mel Gorman wrote:
>
> It turns out not to be easy to the preallocating of PUDs, PMDs and PTEs
> move_page_tables() needs. To avoid overallocating, it has to follow the same
> logic as move_page_tables duplicating some code in the process. The ugliest
> aspect of all is passing those pre-allocated pages back into move_page_tables
> where they need to be passed down to such functions as __pte_alloc. It turns
> extremely messy.
Umm. What?
That's crazy talk. I'm not talking about preallocating stuff in order to
pass it in to move_page_tables(). I'm talking about just _creating_ the
dang page tables early - preallocating them IN THE PROCESS VM SPACE.
IOW, a patch like this (this is a pseudo-patch, totally untested, won't
compile, yadda yadda - you need to actually make the people who call
"move_page_tables()" call that prepare function first etc etc)
Yeah, if we care about holes in the page tables, we can certainly copy
more of the move_page_tables() logic, but it certainly doesn't matter for
execve(). This just makes sure that the destination page tables exist
first.
Linus
---
mm/mremap.c | 22 +++++++++++++++++++++-
1 files changed, 21 insertions(+), 1 deletions(-)
diff --git a/mm/mremap.c b/mm/mremap.c
index cde56ee..c14505c 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -128,6 +128,26 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
#define LATENCY_LIMIT (64 * PAGE_SIZE)
+/*
+ * Preallocate the page tables, so that we can do the actual move
+ * without any allocations, and thus no error handling etc.
+ */
+int prepare_move_page_tables(struct vm_area_struct *vma,
+ unsigned long old_addr, struct vm_area_struct *new_vma,
+ unsigned long new_addr, unsigned long len)
+{
+ unsigned long end_addr = new_addr + len;
+
+ while (new_addr < end_addr) {
+ pmd_t *new_pmd;
+ new_pmd = alloc_new_pmd(vma->vm_mm, new_addr);
+ if (!new_pmd)
+ return -ENOMEM;
+ new_addr = (new_addr + PMD_SIZE) & PMD_MASK;
+ }
+ return 0;
+}
+
unsigned long move_page_tables(struct vm_area_struct *vma,
unsigned long old_addr, struct vm_area_struct *new_vma,
unsigned long new_addr, unsigned long len)
@@ -147,7 +167,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
old_pmd = get_old_pmd(vma->vm_mm, old_addr);
if (!old_pmd)
continue;
- new_pmd = alloc_new_pmd(vma->vm_mm, new_addr);
+ new_pmd = get_old_pmd(vma->vm_mm, new_addr);
if (!new_pmd)
break;
next = (new_addr + PMD_SIZE) & PMD_MASK;
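The caller side in shift_arg_pages() could then look something like this
(untested sketch of the idea, not a real patch): allocate first, then hold the
anon_vma lock over the part that can no longer sleep or fail, so rmap_walk
cannot run concurrently:

	if (prepare_move_page_tables(vma, old_start, vma, new_start, length))
		return -ENOMEM;

	spin_lock(&vma->anon_vma->lock);
	/* cannot fail now that the destination page tables exist */
	if (length != move_page_tables(vma, old_start, vma, new_start, length))
		BUG();
	spin_unlock(&vma->anon_vma->lock);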
^ permalink raw reply related [flat|nested] 34+ messages in thread
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-09 19:56 ` Linus Torvalds
@ 2010-05-09 20:06 ` Linus Torvalds
2010-05-09 20:20 ` Linus Torvalds
2010-05-10 0:40 ` KAMEZAWA Hiroyuki
` (2 subsequent siblings)
3 siblings, 1 reply; 34+ messages in thread
From: Linus Torvalds @ 2010-05-09 20:06 UTC (permalink / raw)
To: Mel Gorman
Cc: KAMEZAWA Hiroyuki, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Sun, 9 May 2010, Linus Torvalds wrote:
>
> IOW, a patch like this (this is a pseudo-patch, totally untested, won't
> compile, yadda yadda - you need to actually make the people who call
> "move_page_tables()" call that prepare function first etc etc)
Btw, that pseudo-patch is also likely the least efficient way to do this,
since it re-walks the page directories for each pmd it creates.
Also - the thing is, we can easily choose to do this _just_ for the case
of shifting argument pages, which really is a special case (the generic
case of "mremap()" is a lot more complicated, and doesn't have the issue
with rmap because it's doing all the "new vma" setup etc).
Linus
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-09 20:06 ` Linus Torvalds
@ 2010-05-09 20:20 ` Linus Torvalds
0 siblings, 0 replies; 34+ messages in thread
From: Linus Torvalds @ 2010-05-09 20:20 UTC (permalink / raw)
To: Mel Gorman
Cc: KAMEZAWA Hiroyuki, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Sun, 9 May 2010, Linus Torvalds wrote:
>
> Also - the thing is, we can easily choose to do this _just_ for the case
> of shifting argument pages, which really is a special case (the generic
> case of "mremap()" is a lot more complicated, and doesn't have the issue
> with rmap because it's doing all the "new vma" setup etc).
Looking at the stack setup code, we have other things we probably might
want to look at.
For example, we do that "mprotect_fixup()" to fix up possible vm_flags
issues (notably whether execute bit is set or not), and that's absolutely
something that we probably should do at the same time as moving the stack,
so that we don't end up walking - and changing - the page tables _twice_.
So the stack setup really is a total special case, and it looks like we do
some rather stupid things there. Having a specialized routine that does
just:
- if we're moving things, fill in the new page tables (we know they are
dense, so my "stupid" routine actually does the right thing despite
being pretty simplistic) before moving.
- if we're moving _or_ changing protections, do a
for_each_page()
move_andor_fix_protection()
which kind of looks like the current "move_page_tables()", except we
know it doesn't need new allocations.
Hmm?
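Concretely, the caller in setup_arg_pages() would then collapse to a single
call, something like (invented name, untested sketch):

	/* one pass over the dense stack range: relocate every pte to
	 * new_start and apply the final vm_flags protection, with no
	 * allocations and no separate mprotect_fixup() walk */
	ret = move_and_protect_stack_pages(vma, old_start, new_start,
					   length, vm_flags);
	if (ret)
		goto out_unlock;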
Linus
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-09 19:21 ` Mel Gorman
2010-05-09 19:56 ` Linus Torvalds
@ 2010-05-10 0:32 ` KAMEZAWA Hiroyuki
1 sibling, 0 replies; 34+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-05-10 0:32 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Sun, 9 May 2010 20:21:45 +0100
Mel Gorman <mel@csn.ul.ie> wrote:
> On Thu, May 06, 2010 at 07:12:59PM -0700, Linus Torvalds wrote:
> >
> >
> > On Fri, 7 May 2010, KAMEZAWA Hiroyuki wrote:
> > >
> > > IIUC, move_page_tables() may call "page table allocation" and it cannot be
> > > done under spinlock.
> >
> > Bah. It only does a "alloc_new_pmd()", and we could easily move that out
> > of the loop and pre-allocate the pmd's.
> >
> > If that's the only reason, then it's a really weak one, methinks.
> >
>
> It turns out not to be easy to the preallocating of PUDs, PMDs and PTEs
> move_page_tables() needs. To avoid overallocating, it has to follow the same
> logic as move_page_tables duplicating some code in the process. The ugliest
> aspect of all is passing those pre-allocated pages back into move_page_tables
> where they need to be passed down to such functions as __pte_alloc. It turns
> extremely messy.
>
> I stopped working on it about half way through as it was already too ugly
> to live and would have similar cost to Kamezawa's much more straight-forward
> approach of using move_vma().
>
> While using move_vma is straight-forward and solves the problem, it's
> not as cheap as Andrea's solution. Andrea allocates a temporary VMA and
> puts it on a list and very little else. It didn't show up any problems
> in microbenchmarks. Calling move_vma does a lot more work particularly in
> copy_vma and this slows down exec.
>
> With Kamezawa's patch, kernbench was fine on wall time but in System Time,
> it slowed by up 1.48% in comparison to Andrea's slowing up by 0.64%[1].
>
> aim9 was slowed as well. Kamezawa's slowed by 2.77% where Andrea's reported
> faster by 2.58%. While AIM9 is flaky and these figures are barely outside
> the noise, calling move_vma() is obviously more expensive.
>
Thank you for testing.
> While my solution at http://lkml.org/lkml/2010/4/30/198 is cheapest as it
> does not touch exec() at all, is_vma_temporary_stack() could be broken in
> the future if any of the assumptions it makes change.
>
> So what you have is an inverse relationship between magic and
> performance. Mine has the most magic and is fastest. Kamezawa's has the
> least magic but slowest and Andrea has the goldilocks factor. Which do
> you prefer?
>
I like the fastest one ;)
Thanks,
-Kame
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-09 19:56 ` Linus Torvalds
2010-05-09 20:06 ` Linus Torvalds
@ 2010-05-10 0:40 ` KAMEZAWA Hiroyuki
2010-05-10 1:30 ` Linus Torvalds
2010-05-10 0:42 ` KAMEZAWA Hiroyuki
2010-05-10 13:49 ` Mel Gorman
3 siblings, 1 reply; 34+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-05-10 0:40 UTC (permalink / raw)
To: Linus Torvalds
Cc: Mel Gorman, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Sun, 9 May 2010 12:56:49 -0700 (PDT)
Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
>
> On Sun, 9 May 2010, Mel Gorman wrote:
> >
> > It turns out not to be easy to the preallocating of PUDs, PMDs and PTEs
> > move_page_tables() needs. To avoid overallocating, it has to follow the same
> > logic as move_page_tables duplicating some code in the process. The ugliest
> > aspect of all is passing those pre-allocated pages back into move_page_tables
> > where they need to be passed down to such functions as __pte_alloc. It turns
> > extremely messy.
>
> Umm. What?
>
> That's crazy talk. I'm not talking about preallocating stuff in order to
> pass it in to move_page_tables(). I'm talking about just _creating_ the
> dang page tables early - preallocating them IN THE PROCESS VM SPACE.
>
> IOW, a patch like this (this is a pseudo-patch, totally untested, won't
> compile, yadda yadda - you need to actually make the people who call
> "move_page_tables()" call that prepare function first etc etc)
>
> Yeah, if we care about holes in the page tables, we can certainly copy
> more of the move_page_tables() logic, but it certainly doesn't matter for
> execve(). This just makes sure that the destination page tables exist
> first.
>
IMHO, I think move_page_tables() itself should be implemented as your patch.
But, move_page_tables()'s failure is not a big problem. At failure,
exec will abort and no page fault will occur later. What we have to do in
this migration-patch-series is avoiding inconsistent update of sets of
[page, vma->vm_start, vma->vm_pgoff, ptes] or "don't migrate pages in exec's
stack".
Considering cost, as Mel shows, "don't migrate pages in exec's stack" seems
reasonable. But, I still doubt this check.
+static bool is_vma_temporary_stack(struct vm_area_struct *vma)
+{
+ int maybe_stack = vma->vm_flags & (VM_GROWSDOWN | VM_GROWSUP);
+
+ if (!maybe_stack)
+ return false;
+
+ /* If only the stack is mapped, assume exec is in progress */
+ if (vma->vm_mm->map_count == 1) -------------------(*)
+ return true;
+
+ return false;
+}
+
Mel, can (*) be safe even for the a.out format (formats other than ELF)?
Thanks,
-Kame
> Linus
>
> ---
> mm/mremap.c | 22 +++++++++++++++++++++-
> 1 files changed, 21 insertions(+), 1 deletions(-)
>
> diff --git a/mm/mremap.c b/mm/mremap.c
> index cde56ee..c14505c 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -128,6 +128,26 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
>
> #define LATENCY_LIMIT (64 * PAGE_SIZE)
>
> +/*
> + * Preallocate the page tables, so that we can do the actual move
> + * without any allocations, and thus no error handling etc.
> + */
> +int prepare_move_page_tables(struct vm_area_struct *vma,
> + unsigned long old_addr, struct vm_area_struct *new_vma,
> + unsigned long new_addr, unsigned long len)
> +{
> + unsigned long end_addr = new_addr + len;
> +
> + while (new_addr < end_addr) {
> + pmd_t *new_pmd;
> + new_pmd = alloc_new_pmd(vma->vm_mm, new_addr);
> + if (!new_pmd)
> + return -ENOMEM;
> + new_addr = (new_addr + PMD_SIZE) & PMD_MASK;
> + }
> + return 0;
> +}
> +
> unsigned long move_page_tables(struct vm_area_struct *vma,
> unsigned long old_addr, struct vm_area_struct *new_vma,
> unsigned long new_addr, unsigned long len)
> @@ -147,7 +167,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> old_pmd = get_old_pmd(vma->vm_mm, old_addr);
> if (!old_pmd)
> continue;
> - new_pmd = alloc_new_pmd(vma->vm_mm, new_addr);
> + new_pmd = get_old_pmd(vma->vm_mm, new_addr);
> if (!new_pmd)
> break;
> next = (new_addr + PMD_SIZE) & PMD_MASK;
>
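As an aside, a minimal sketch of the caller side that the pseudo-patch above
leaves out: shift_arg_pages() in fs/exec.c would run the prepare step before
anything is moved. This is only an illustrative sketch; prepare_move_page_tables()
is the function proposed in the pseudo-patch, not an existing interface, and the
helper name below is made up.

static int prepare_shift_arg_pages(struct vm_area_struct *vma,
				   unsigned long shift)
{
	unsigned long old_start = vma->vm_start;
	unsigned long new_start = old_start - shift;
	unsigned long length = vma->vm_end - vma->vm_start;

	/*
	 * Allocate the destination page tables while -ENOMEM is still
	 * trivial to handle and nothing has been moved yet. With the
	 * tables in place, the later move_page_tables() call only looks
	 * them up and cannot fail part-way through the move.
	 */
	return prepare_move_page_tables(vma, old_start, vma, new_start, length);
}

shift_arg_pages() would call this before its vma_adjust()/move_page_tables()
sequence and abort the exec cleanly on failure.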
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-09 19:56 ` Linus Torvalds
2010-05-09 20:06 ` Linus Torvalds
2010-05-10 0:40 ` KAMEZAWA Hiroyuki
@ 2010-05-10 0:42 ` KAMEZAWA Hiroyuki
2010-05-10 14:02 ` Mel Gorman
2010-05-10 13:49 ` Mel Gorman
3 siblings, 1 reply; 34+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-05-10 0:42 UTC (permalink / raw)
To: Linus Torvalds
Cc: Mel Gorman, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Sun, 9 May 2010 12:56:49 -0700 (PDT)
Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
>
> On Sun, 9 May 2010, Mel Gorman wrote:
> >
> > It turns out not to be easy to the preallocating of PUDs, PMDs and PTEs
> > move_page_tables() needs. To avoid overallocating, it has to follow the same
> > logic as move_page_tables duplicating some code in the process. The ugliest
> > aspect of all is passing those pre-allocated pages back into move_page_tables
> > where they need to be passed down to such functions as __pte_alloc. It turns
> > extremely messy.
>
> Umm. What?
>
> That's crazy talk. I'm not talking about preallocating stuff in order to
> pass it in to move_page_tables(). I'm talking about just _creating_ the
> dang page tables early - preallocating them IN THE PROCESS VM SPACE.
>
> IOW, a patch like this (this is a pseudo-patch, totally untested, won't
> compile, yadda yadda - you need to actually make the people who call
> "move_page_tables()" call that prepare function first etc etc)
>
> Yeah, if we care about holes in the page tables, we can certainly copy
> more of the move_page_tables() logic, but it certainly doesn't matter for
> execve(). This just makes sure that the destination page tables exist
> first.
>
IMHO, I think move_page_tables() itself should be implemented as your patch.
But, move_page_tables()'s failure is not a big problem. At failure,
exec will abort and no page fault will occur later. What we have to do in
this migration-patch-series is avoiding inconsistent update of sets of
[page, vma->vm_start, vma->vm_pgoff, ptes] or "don't migrate pages in exec's
stack".
Considering cost, as Mel shows, "don't migrate pages in exec's stack" seems
reasonable. But, I still doubt this check.
+static bool is_vma_temporary_stack(struct vm_area_struct *vma)
+{
+ int maybe_stack = vma->vm_flags & (VM_GROWSDOWN | VM_GROWSUP);
+
+ if (!maybe_stack)
+ return false;
+
+ /* If only the stack is mapped, assume exec is in progress */
+ if (vma->vm_mm->map_count == 1) -------------------(*)
+ return true;
+
+ return false;
+}
+
Mel, can (*) be safe even for the a.out format (formats other than ELF)?
Thanks,
-Kame
> Linus
>
> ---
> mm/mremap.c | 22 +++++++++++++++++++++-
> 1 files changed, 21 insertions(+), 1 deletions(-)
>
> diff --git a/mm/mremap.c b/mm/mremap.c
> index cde56ee..c14505c 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -128,6 +128,26 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
>
> #define LATENCY_LIMIT (64 * PAGE_SIZE)
>
> +/*
> + * Preallocate the page tables, so that we can do the actual move
> + * without any allocations, and thus no error handling etc.
> + */
> +int prepare_move_page_tables(struct vm_area_struct *vma,
> + unsigned long old_addr, struct vm_area_struct *new_vma,
> + unsigned long new_addr, unsigned long len)
> +{
> + unsigned long end_addr = new_addr + len;
> +
> + while (new_addr < end_addr) {
> + pmd_t *new_pmd;
> + new_pmd = alloc_new_pmd(vma->vm_mm, new_addr);
> + if (!new_pmd)
> + return -ENOMEM;
> + new_addr = (new_addr + PMD_SIZE) & PMD_MASK;
> + }
> + return 0;
> +}
> +
> unsigned long move_page_tables(struct vm_area_struct *vma,
> unsigned long old_addr, struct vm_area_struct *new_vma,
> unsigned long new_addr, unsigned long len)
> @@ -147,7 +167,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> old_pmd = get_old_pmd(vma->vm_mm, old_addr);
> if (!old_pmd)
> continue;
> - new_pmd = alloc_new_pmd(vma->vm_mm, new_addr);
> + new_pmd = get_old_pmd(vma->vm_mm, new_addr);
> if (!new_pmd)
> break;
> next = (new_addr + PMD_SIZE) & PMD_MASK;
>
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-10 0:40 ` KAMEZAWA Hiroyuki
@ 2010-05-10 1:30 ` Linus Torvalds
2010-05-10 1:32 ` Linus Torvalds
0 siblings, 1 reply; 34+ messages in thread
From: Linus Torvalds @ 2010-05-10 1:30 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: Mel Gorman, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Mon, 10 May 2010, KAMEZAWA Hiroyuki wrote:
>
> But, move_page_tables()'s failure is not a big problem.
Well, yes and no.
It's not a problem because it fails, but because it does the allocation.
Which means that we can't protect the thing with the (natural) anon_vma
locking.
> Considering cost, as Mel shows, "don't migrate pages in exec's stack" seems
> reasonable. But, I still doubt this check.
Well, I actually always liked Mel's patch, the one he calls "magic". I
think it's _less_ magic than the crazy "let's create another vma and
anon_vma chain just because migration has its thumb up its ass".
So I never disliked that patch. I'm perfectly happy with a "don't migrate
these pages at all, because they are in a half-way state in the middle of
execve stack magic".
Linus
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-10 1:30 ` Linus Torvalds
@ 2010-05-10 1:32 ` Linus Torvalds
2010-05-10 1:40 ` KAMEZAWA Hiroyuki
0 siblings, 1 reply; 34+ messages in thread
From: Linus Torvalds @ 2010-05-10 1:32 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: Mel Gorman, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Sun, 9 May 2010, Linus Torvalds wrote:
>
> So I never disliked that patch. I'm perfectly happy with a "don't migrate
> these pages at all, because they are in a half-way state in the middle of
> execve stack magic".
Btw, I also think that Mel's patch could be made a lot _less_ magic by
just marking that initial stack vma with a VM_STACK_INCOMPLETE_SETUP bit,
instead of doing that "maybe_stack" thing. We could easily make that
initial vma setup very explicit indeed, and then just clear that bit when
we've moved the stack to its final position.
Linus
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-10 1:32 ` Linus Torvalds
@ 2010-05-10 1:40 ` KAMEZAWA Hiroyuki
2010-05-10 1:49 ` Linus Torvalds
2010-05-10 13:24 ` Mel Gorman
0 siblings, 2 replies; 34+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-05-10 1:40 UTC (permalink / raw)
To: Linus Torvalds
Cc: Mel Gorman, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Sun, 9 May 2010 18:32:32 -0700 (PDT)
Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
>
> On Sun, 9 May 2010, Linus Torvalds wrote:
> >
> > So I never disliked that patch. I'm perfectly happy with a "don't migrate
> > these pages at all, because they are in a half-way state in the middle of
> > execve stack magic".
>
> Btw, I also think that Mel's patch could be made a lot _less_ magic by
> just marking that initial stack vma with a VM_STACK_INCOMPLETE_SETUP bit,
> instead of doing that "maybe_stack" thing. We could easily make that
> initial vma setup very explicit indeed, and then just clear that bit when
> we've moved the stack to its final position.
>
Hmm. vm_flags is still 32bit..(I think it should be long long)
Using combination of existing flags...
#define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ)
Can be used instead of checking mapcount, I think.
Thanks,
-Kame
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-10 1:40 ` KAMEZAWA Hiroyuki
@ 2010-05-10 1:49 ` Linus Torvalds
2010-05-10 13:24 ` Mel Gorman
1 sibling, 0 replies; 34+ messages in thread
From: Linus Torvalds @ 2010-05-10 1:49 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: Mel Gorman, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Mon, 10 May 2010, KAMEZAWA Hiroyuki wrote:
>
> Hmm. vm_flags is still 32bit..(I think it should be long long)
>
> Using combination of existing flags...
>
> #define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ)
>
> Can be used instead of checking mapcount, I think.
Ahh, yes. We can also do things like not having VM_MAYREAD/VM_MAYWRITE set.
That's impossible on a real mapping - even if it's not readable, it is
always something you could mprotect to _be_ readable.
The point being, we can make the tests more explicit, and less "magic that
happens to work". As long as it's ok to just say "don't migrate pages in
this mapping yet, because we're still setting it up".
Linus
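A rough sketch of what such an explicit test could look like, assuming exec set
up the temporary stack VMA without VM_MAYREAD/VM_MAYWRITE. This is only an
illustration of the idea; the patch posted later in this thread identifies the
temporary stack with the VM_RAND_READ | VM_SEQ_READ combination instead.

static bool is_vma_temporary_stack(struct vm_area_struct *vma)
{
	/* only stack-like mappings are candidates */
	if (!(vma->vm_flags & (VM_GROWSDOWN | VM_GROWSUP)))
		return false;

	/*
	 * A real mapping always has VM_MAYREAD/VM_MAYWRITE set, even if
	 * it is not currently readable or writable, so their absence can
	 * only mean the half-constructed stack built during exec.
	 */
	if (!(vma->vm_flags & (VM_MAYREAD | VM_MAYWRITE)))
		return true;

	return false;
}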
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-10 1:40 ` KAMEZAWA Hiroyuki
2010-05-10 1:49 ` Linus Torvalds
@ 2010-05-10 13:24 ` Mel Gorman
2010-05-10 23:55 ` KAMEZAWA Hiroyuki
1 sibling, 1 reply; 34+ messages in thread
From: Mel Gorman @ 2010-05-10 13:24 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: Linus Torvalds, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Mon, May 10, 2010 at 10:40:39AM +0900, KAMEZAWA Hiroyuki wrote:
> On Sun, 9 May 2010 18:32:32 -0700 (PDT)
> Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> >
> >
> > On Sun, 9 May 2010, Linus Torvalds wrote:
> > >
> > > So I never disliked that patch. I'm perfectly happy with a "don't migrate
> > > these pages at all, because they are in a half-way state in the middle of
> > > execve stack magic".
> >
> > Btw, I also think that Mel's patch could be made a lot _less_ magic by
> > just marking that initial stack vma with a VM_STACK_INCOMPLETE_SETUP bit,
> > instead of doing that "maybe_stack" thing. We could easily make that
> > initial vma setup very explicit indeed, and then just clear that bit when
> > we've moved the stack to its final position.
> >
>
> Hmm. vm_flags is still 32bit..(I think it should be long long)
>
This is why I didn't use a bit in vm_flags and I didn't want to increase
the size of any structure. I had thought of using a combination of flags
but thought people would prefer a test based on existing information such as
map_count. That said, I also made terrible choices for possible combination
of flags :)
> Using combination of existing flags...
>
> #define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ)
>
> Can be used instead of checking mapcount, I think.
>
It can and this is a very good combination of flags to base the test on.
How does the following look?
==== CUT HERE ====
mm,migration: Avoid race between shift_arg_pages() and rmap_walk() during migration by not migrating temporary stacks
Page migration requires rmap to be able to find all ptes mapping a page
at all times; otherwise a migration entry can be instantiated but then
left behind if the second rmap_walk fails to find the page. If such a
page is later faulted, migration_entry_to_page() will trip its BUG_ON
because the page it finds is no longer locked, indicating the page was
migrated but the migration PTE was never cleaned up. For example
[ 511.201534] kernel BUG at include/linux/swapops.h:105!
[ 511.201534] invalid opcode: 0000 [#1] PREEMPT SMP
[ 511.201534] last sysfs file: /sys/block/sde/size
[ 511.201534] CPU 0
[ 511.201534] Modules linked in: kvm_amd kvm dm_crypt loop i2c_piix4 serio_raw tpm_tis shpchp evdev tpm i2c_core pci_hotplug tpm_bios wmi processor button ext3 jbd mbcache dm_mirror dm_region_hash dm_log dm_snapshot dm_mod raid10 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 multipath linear md_mod sg sr_mod cdrom sd_mod ata_generic ahci libahci libata ide_pci_generic ehci_hcd ide_core r8169 mii ohci_hcd scsi_mod floppy thermal fan thermal_sys
[ 511.888526]
[ 511.888526] Pid: 20431, comm: date Not tainted 2.6.34-rc4-mm1-fix-swapops #6 GA-MA790GP-UD4H/GA-MA790GP-UD4H
[ 511.888526] RIP: 0010:[<ffffffff811094ff>] [<ffffffff811094ff>] migration_entry_wait+0xc1/0x129
[ 512.173545] RSP: 0018:ffff880037b979d8 EFLAGS: 00010246
[ 512.198503] RAX: ffffea0000000000 RBX: ffffea0001a2ba10 RCX: 0000000000029830
[ 512.329617] RDX: 0000000001a2ba10 RSI: ffffffff818264b8 RDI: 000000000ef45c3e
[ 512.380001] RBP: ffff880037b97a08 R08: ffff880078003f00 R09: ffff880037b979e8
[ 512.380001] R10: ffffffff8114ddaa R11: 0000000000000246 R12: 0000000037304000
[ 512.380001] R13: ffff88007a9ed5c8 R14: f800000000077a2e R15: 000000000ef45c3e
[ 512.380001] FS: 00007f3d346866e0(0000) GS:ffff880002200000(0000) knlGS:0000000000000000
[ 512.380001] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 512.380001] CR2: 00007fff6abec9c1 CR3: 0000000037a15000 CR4: 00000000000006f0
[ 512.380001] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 513.004775] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 513.068415] Process date (pid: 20431, threadinfo ffff880037b96000, task ffff880078003f00)
[ 513.068415] Stack:
[ 513.068415] ffff880037b97b98 ffff880037b97a18 ffff880037b97be8 0000000000000c00
[ 513.228068] <0> ffff880037304f60 00007fff6abec9c1 ffff880037b97aa8 ffffffff810e951a
[ 513.228068] <0> ffff880037b97a88 0000000000000246 0000000000000000 ffffffff8130c5c2
[ 513.228068] Call Trace:
[ 513.228068] [<ffffffff810e951a>] handle_mm_fault+0x3f8/0x76a
[ 513.228068] [<ffffffff8130c5c2>] ? do_page_fault+0x26a/0x46e
[ 513.228068] [<ffffffff8130c7a2>] do_page_fault+0x44a/0x46e
[ 513.720755] [<ffffffff8130875d>] ? trace_hardirqs_off_thunk+0x3a/0x3c
[ 513.789278] [<ffffffff8114ddaa>] ? load_elf_binary+0x14a1/0x192b
[ 513.851506] [<ffffffff813099b5>] page_fault+0x25/0x30
[ 513.851506] [<ffffffff8114ddaa>] ? load_elf_binary+0x14a1/0x192b
[ 513.851506] [<ffffffff811c1e27>] ? strnlen_user+0x3f/0x57
[ 513.851506] [<ffffffff8114de33>] load_elf_binary+0x152a/0x192b
[ 513.851506] [<ffffffff8111329b>] search_binary_handler+0x173/0x313
[ 513.851506] [<ffffffff8114c909>] ? load_elf_binary+0x0/0x192b
[ 513.851506] [<ffffffff81114896>] do_execve+0x219/0x30a
[ 513.851506] [<ffffffff8111887f>] ? getname+0x14d/0x1b3
[ 513.851506] [<ffffffff8100a5c6>] sys_execve+0x43/0x5e
[ 514.483501] [<ffffffff8100320a>] stub_execve+0x6a/0xc0
[ 514.548357] Code: 74 05 83 f8 1f 75 68 48 b8 ff ff ff ff ff ff ff 07 48 21 c2 48 b8 00 00 00 00 00 ea ff ff 48 6b d2 38 48 8d 1c 02 f6 03 01 75 04 <0f> 0b eb fe 8b 4b 08 48 8d 73 08 85 c9 74 35 8d 41 01 89 4d e0
[ 514.704292] RIP [<ffffffff811094ff>] migration_entry_wait+0xc1/0x129
[ 514.808221] RSP <ffff880037b979d8>
[ 514.906179] ---[ end trace 4f88495edc224d6b ]---
There is a race between shift_arg_pages and migration that triggers this bug.
A temporary stack is set up during exec and later moved. If migration moves
a page in the temporary stack and the VMA is then removed before migration
completes, the migration PTE may not be found, leading to a BUG when the
stack is faulted.
This patch causes pages within the temporary stack during exec to be skipped
by migration. It does this by marking the VMA covering the temporary stack
with an otherwise impossible combination of VMA flags. These flags are
cleared when the temporary stack is moved to its final location.
[kamezawa.hiroyu@jp.fujitsu.com: Idea for identifying and skipping temporary stacks]
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
fs/exec.c | 7 ++++++-
include/linux/mm.h | 3 +++
mm/rmap.c | 30 +++++++++++++++++++++++++++++-
3 files changed, 38 insertions(+), 2 deletions(-)
diff --git a/fs/exec.c b/fs/exec.c
index 725d7ef..13f8e7f 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -242,9 +242,10 @@ static int __bprm_mm_init(struct linux_binprm *bprm)
* use STACK_TOP because that can depend on attributes which aren't
* configured yet.
*/
+ BUG_ON(VM_STACK_FLAGS & VM_STACK_INCOMPLETE_SETUP);
vma->vm_end = STACK_TOP_MAX;
vma->vm_start = vma->vm_end - PAGE_SIZE;
- vma->vm_flags = VM_STACK_FLAGS;
+ vma->vm_flags = VM_STACK_FLAGS | VM_STACK_INCOMPLETE_SETUP;
vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
INIT_LIST_HEAD(&vma->anon_vma_chain);
err = insert_vm_struct(mm, vma);
@@ -616,6 +617,7 @@ int setup_arg_pages(struct linux_binprm *bprm,
else if (executable_stack == EXSTACK_DISABLE_X)
vm_flags &= ~VM_EXEC;
vm_flags |= mm->def_flags;
+ vm_flags |= VM_STACK_INCOMPLETE_SETUP;
ret = mprotect_fixup(vma, &prev, vma->vm_start, vma->vm_end,
vm_flags);
@@ -630,6 +632,9 @@ int setup_arg_pages(struct linux_binprm *bprm,
goto out_unlock;
}
+ /* mprotect_fixup is overkill to remove the temporary stack flags */
+ vma->vm_flags &= ~VM_STACK_INCOMPLETE_SETUP;
+
stack_expand = 131072UL; /* randomly 32*4k (or 2*64k) pages */
stack_size = vma->vm_end - vma->vm_start;
/*
diff --git a/include/linux/mm.h b/include/linux/mm.h
index eb21256..925f5bc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -106,6 +106,9 @@ extern unsigned int kobjsize(const void *objp);
#define VM_PFN_AT_MMAP 0x40000000 /* PFNMAP vma that is fully mapped at mmap time */
#define VM_MERGEABLE 0x80000000 /* KSM may merge identical pages */
+/* Bits set in the VMA until the stack is in its final location */
+#define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ)
+
#ifndef VM_STACK_DEFAULT_FLAGS /* arch can override this */
#define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
#endif
diff --git a/mm/rmap.c b/mm/rmap.c
index 85f203e..e96565f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1141,6 +1141,20 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
return ret;
}
+static bool is_vma_temporary_stack(struct vm_area_struct *vma)
+{
+ int maybe_stack = vma->vm_flags & (VM_GROWSDOWN | VM_GROWSUP);
+
+ if (!maybe_stack)
+ return false;
+
+ if ((vma->vm_flags & VM_STACK_INCOMPLETE_SETUP) ==
+ VM_STACK_INCOMPLETE_SETUP)
+ return true;
+
+ return false;
+}
+
/**
* try_to_unmap_anon - unmap or unlock anonymous page using the object-based
* rmap method
@@ -1169,7 +1183,21 @@ static int try_to_unmap_anon(struct page *page, enum ttu_flags flags)
list_for_each_entry(avc, &anon_vma->head, same_anon_vma) {
struct vm_area_struct *vma = avc->vma;
- unsigned long address = vma_address(page, vma);
+ unsigned long address;
+
+ /*
+ * During exec, a temporary VMA is setup and later moved.
+ * The VMA is moved under the anon_vma lock but not the
+ * page tables leading to a race where migration cannot
+ * find the migration ptes. Rather than increasing the
+ * locking requirements of exec(), migration skips
+ * temporary VMAs until after exec() completes.
+ */
+ if (PAGE_MIGRATION && (flags & TTU_MIGRATION) &&
+ is_vma_temporary_stack(vma))
+ continue;
+
+ address = vma_address(page, vma);
if (address == -EFAULT)
continue;
ret = try_to_unmap_one(page, vma, address, flags);
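For context on where the new check takes effect: it is only consulted when rmap
is asked to unmap a page for migration, i.e. when try_to_unmap() is called with
TTU_MIGRATION set, roughly as the migration core does (sketched from memory of
the 2.6.34-era mm/migrate.c, so treat the exact flag combination as approximate):

	/* in unmap_and_move(): replace the page's ptes with migration entries */
	try_to_unmap(page, TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);

Ordinary reclaim calls try_to_unmap() without TTU_MIGRATION, so pages in the
temporary stack are still unmapped normally there; only migration skips them.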
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-09 19:56 ` Linus Torvalds
` (2 preceding siblings ...)
2010-05-10 0:42 ` KAMEZAWA Hiroyuki
@ 2010-05-10 13:49 ` Mel Gorman
3 siblings, 0 replies; 34+ messages in thread
From: Mel Gorman @ 2010-05-10 13:49 UTC (permalink / raw)
To: Linus Torvalds
Cc: KAMEZAWA Hiroyuki, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Sun, May 09, 2010 at 12:56:49PM -0700, Linus Torvalds wrote:
>
>
> On Sun, 9 May 2010, Mel Gorman wrote:
> >
> > It turns out not to be easy to the preallocating of PUDs, PMDs and PTEs
> > move_page_tables() needs. To avoid overallocating, it has to follow the same
> > logic as move_page_tables duplicating some code in the process. The ugliest
> > aspect of all is passing those pre-allocated pages back into move_page_tables
> > where they need to be passed down to such functions as __pte_alloc. It turns
> > extremely messy.
>
> Umm. What?
>
> That's crazy talk. I'm not talking about preallocating stuff in order to
> pass it in to move_page_tables(). I'm talking about just _creating_ the
> dang page tables early - preallocating them IN THE PROCESS VM SPACE.
>
Ok, I took the totally wrong approach by pre-allocating the pages and passing
them in to move_page_tables. Pages from the list were then taken in preference
to calling the page allocator. At the time, I thought the cleanup for ENOMEM
would be easier as well as avoiding complications with overlapping temporary
stack and new stack location. It was a bad choice, wrong and damn fugly.
> IOW, a patch like this (this is a pseudo-patch, totally untested, won't
> compile, yadda yadda - you need to actually make the people who call
> "move_page_tables()" call that prepare function first etc etc)
>
Sounds reasonable as a general approach. I'll tinker with it for a bit to
see if exec can be made any faster when combined with your other suggestion
to avoid mprotect_fixup. In the meantime though, can the patch that avoids
migrating within the temporary stack be picked up? It closes the worst of
the migrate vs exec race without impacting performance. At least, I'm
no longer able to cause an oops with it applied.
> Yeah, if we care about holes in the page tables, we can certainly copy
> more of the move_page_tables() logic, but it certainly doesn't matter for
> execve(). This just makes sure that the destination page tables exist
> first.
>
> Linus
>
> ---
> mm/mremap.c | 22 +++++++++++++++++++++-
> 1 files changed, 21 insertions(+), 1 deletions(-)
>
> diff --git a/mm/mremap.c b/mm/mremap.c
> index cde56ee..c14505c 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -128,6 +128,26 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
>
> #define LATENCY_LIMIT (64 * PAGE_SIZE)
>
> +/*
> + * Preallocate the page tables, so that we can do the actual move
> + * without any allocations, and thus no error handling etc.
> + */
> +int prepare_move_page_tables(struct vm_area_struct *vma,
> + unsigned long old_addr, struct vm_area_struct *new_vma,
> + unsigned long new_addr, unsigned long len)
> +{
> + unsigned long end_addr = new_addr + len;
> +
> + while (new_addr < end_addr) {
> + pmd_t *new_pmd;
> + new_pmd = alloc_new_pmd(vma->vm_mm, new_addr);
> + if (!new_pmd)
> + return -ENOMEM;
> + new_addr = (new_addr + PMD_SIZE) & PMD_MASK;
> + }
> + return 0;
> +}
> +
> unsigned long move_page_tables(struct vm_area_struct *vma,
> unsigned long old_addr, struct vm_area_struct *new_vma,
> unsigned long new_addr, unsigned long len)
> @@ -147,7 +167,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> old_pmd = get_old_pmd(vma->vm_mm, old_addr);
> if (!old_pmd)
> continue;
> - new_pmd = alloc_new_pmd(vma->vm_mm, new_addr);
> + new_pmd = get_old_pmd(vma->vm_mm, new_addr);
> if (!new_pmd)
> break;
> next = (new_addr + PMD_SIZE) & PMD_MASK;
>
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-10 0:42 ` KAMEZAWA Hiroyuki
@ 2010-05-10 14:02 ` Mel Gorman
0 siblings, 0 replies; 34+ messages in thread
From: Mel Gorman @ 2010-05-10 14:02 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: Linus Torvalds, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Mon, May 10, 2010 at 09:42:38AM +0900, KAMEZAWA Hiroyuki wrote:
> On Sun, 9 May 2010 12:56:49 -0700 (PDT)
> Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> >
> >
> > On Sun, 9 May 2010, Mel Gorman wrote:
> > >
> > > It turns out not to be easy to the preallocating of PUDs, PMDs and PTEs
> > > move_page_tables() needs. To avoid overallocating, it has to follow the same
> > > logic as move_page_tables duplicating some code in the process. The ugliest
> > > aspect of all is passing those pre-allocated pages back into move_page_tables
> > > where they need to be passed down to such functions as __pte_alloc. It turns
> > > extremely messy.
> >
> > Umm. What?
> >
> > That's crazy talk. I'm not talking about preallocating stuff in order to
> > pass it in to move_page_tables(). I'm talking about just _creating_ the
> > dang page tables early - preallocating them IN THE PROCESS VM SPACE.
> >
> > IOW, a patch like this (this is a pseudo-patch, totally untested, won't
> > compile, yadda yadda - you need to actually make the people who call
> > "move_page_tables()" call that prepare function first etc etc)
> >
> > Yeah, if we care about holes in the page tables, we can certainly copy
> > more of the move_page_tables() logic, but it certainly doesn't matter for
> > execve(). This just makes sure that the destination page tables exist
> > first.
> >
> IMHO, I think move_page_tables() itself should be implemented as your patch.
>
> But, move_page_tables()'s failure is not a big problem. At failure,
> exec will abort and no page fault will occur later. What we have to do in
> this migration-patch-series is avoiding inconsistent update of sets of
> [page, vma->vm_start, vma->vm_pgoff, ptes] or "don't migrate pages in exec's
> stack".
>
> Considering cost, as Mel shows, "don't migrate pages in exec's stack" seems
> reasonable. But, I still doubt this check.
>
> +static bool is_vma_temporary_stack(struct vm_area_struct *vma)
> +{
> + int maybe_stack = vma->vm_flags & (VM_GROWSDOWN | VM_GROWSUP);
> +
> + if (!maybe_stack)
> + return false;
> +
> + /* If only the stack is mapped, assume exec is in progress */
> + if (vma->vm_mm->map_count == 1) -------------------(*)
> + return true;
> +
> + return false;
> +}
> +
>
> Mel, can (*) be safe even for the a.out format (formats other than ELF)?
>
I felt it was safe because this happens before search_binary_handler is
called to find a handler to load the binary. Still, the suggestion to
use an impossible combination of VMA flags is more robust against any
future change.
> <SNIP>
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
2010-05-10 13:24 ` Mel Gorman
@ 2010-05-10 23:55 ` KAMEZAWA Hiroyuki
0 siblings, 0 replies; 34+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-05-10 23:55 UTC (permalink / raw)
To: Mel Gorman
Cc: Linus Torvalds, Andrew Morton, Linux-MM, LKML, Minchan Kim,
Christoph Lameter, Andrea Arcangeli, Rik van Riel, Peter Zijlstra
On Mon, 10 May 2010 14:24:31 +0100
Mel Gorman <mel@csn.ul.ie> wrote:
> On Mon, May 10, 2010 at 10:40:39AM +0900, KAMEZAWA Hiroyuki wrote:
> > On Sun, 9 May 2010 18:32:32 -0700 (PDT)
> > Linus Torvalds <torvalds@linux-foundation.org> wrote:
> >
> > >
> > >
> > > On Sun, 9 May 2010, Linus Torvalds wrote:
> > > >
> > > > So I never disliked that patch. I'm perfectly happy with a "don't migrate
> > > > these pages at all, because they are in a half-way state in the middle of
> > > > execve stack magic".
> > >
> > > Btw, I also think that Mel's patch could be made a lot _less_ magic by
> > > just marking that initial stack vma with a VM_STACK_INCOMPLETE_SETUP bit,
> > > instead of doing that "maybe_stack" thing. We could easily make that
> > > initial vma setup very explicit indeed, and then just clear that bit when
> > > we've moved the stack to its final position.
> > >
> >
> > Hmm. vm_flags is still 32bit..(I think it should be long long)
> >
>
> This is why I didn't use a bit in vm_flags and I didn't want to increase
> the size of any structure. I had thought of using a combination of flags
> but thought people would prefer a test based on existing information such as
> map_count. That said, I also made terrible choices for possible combination
> of flags :)
>
> > Using combination of existing flags...
> >
> > #define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ)
> >
> > Can be used instead of checking mapcount, I think.
> >
>
> It can and this is a very good combination of flags to base the test on.
> How does the following look?
>
Seems nice to me.
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> ==== CUT HERE ====
> mm,migration: Avoid race between shift_arg_pages() and rmap_walk() during migration by not migrating temporary stacks
>
> Page migration requires rmap to be able to find all ptes mapping a page
> at all times; otherwise a migration entry can be instantiated but then
> left behind if the second rmap_walk fails to find the page. If such a
> page is later faulted, migration_entry_to_page() will trip its BUG_ON
> because the page it finds is no longer locked, indicating the page was
> migrated but the migration PTE was never cleaned up. For example
>
> [ 511.201534] kernel BUG at include/linux/swapops.h:105!
> [ 511.201534] invalid opcode: 0000 [#1] PREEMPT SMP
> [ 511.201534] last sysfs file: /sys/block/sde/size
> [ 511.201534] CPU 0
> [ 511.201534] Modules linked in: kvm_amd kvm dm_crypt loop i2c_piix4 serio_raw tpm_tis shpchp evdev tpm i2c_core pci_hotplug tpm_bios wmi processor button ext3 jbd mbcache dm_mirror dm_region_hash dm_log dm_snapshot dm_mod raid10 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 multipath linear md_mod sg sr_mod cdrom sd_mod ata_generic ahci libahci libata ide_pci_generic ehci_hcd ide_core r8169 mii ohci_hcd scsi_mod floppy thermal fan thermal_sys
> [ 511.888526]
> [ 511.888526] Pid: 20431, comm: date Not tainted 2.6.34-rc4-mm1-fix-swapops #6 GA-MA790GP-UD4H/GA-MA790GP-UD4H
> [ 511.888526] RIP: 0010:[<ffffffff811094ff>] [<ffffffff811094ff>] migration_entry_wait+0xc1/0x129
> [ 512.173545] RSP: 0018:ffff880037b979d8 EFLAGS: 00010246
> [ 512.198503] RAX: ffffea0000000000 RBX: ffffea0001a2ba10 RCX: 0000000000029830
> [ 512.329617] RDX: 0000000001a2ba10 RSI: ffffffff818264b8 RDI: 000000000ef45c3e
> [ 512.380001] RBP: ffff880037b97a08 R08: ffff880078003f00 R09: ffff880037b979e8
> [ 512.380001] R10: ffffffff8114ddaa R11: 0000000000000246 R12: 0000000037304000
> [ 512.380001] R13: ffff88007a9ed5c8 R14: f800000000077a2e R15: 000000000ef45c3e
> [ 512.380001] FS: 00007f3d346866e0(0000) GS:ffff880002200000(0000) knlGS:0000000000000000
> [ 512.380001] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> [ 512.380001] CR2: 00007fff6abec9c1 CR3: 0000000037a15000 CR4: 00000000000006f0
> [ 512.380001] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [ 513.004775] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> [ 513.068415] Process date (pid: 20431, threadinfo ffff880037b96000, task ffff880078003f00)
> [ 513.068415] Stack:
> [ 513.068415] ffff880037b97b98 ffff880037b97a18 ffff880037b97be8 0000000000000c00
> [ 513.228068] <0> ffff880037304f60 00007fff6abec9c1 ffff880037b97aa8 ffffffff810e951a
> [ 513.228068] <0> ffff880037b97a88 0000000000000246 0000000000000000 ffffffff8130c5c2
> [ 513.228068] Call Trace:
> [ 513.228068] [<ffffffff810e951a>] handle_mm_fault+0x3f8/0x76a
> [ 513.228068] [<ffffffff8130c5c2>] ? do_page_fault+0x26a/0x46e
> [ 513.228068] [<ffffffff8130c7a2>] do_page_fault+0x44a/0x46e
> [ 513.720755] [<ffffffff8130875d>] ? trace_hardirqs_off_thunk+0x3a/0x3c
> [ 513.789278] [<ffffffff8114ddaa>] ? load_elf_binary+0x14a1/0x192b
> [ 513.851506] [<ffffffff813099b5>] page_fault+0x25/0x30
> [ 513.851506] [<ffffffff8114ddaa>] ? load_elf_binary+0x14a1/0x192b
> [ 513.851506] [<ffffffff811c1e27>] ? strnlen_user+0x3f/0x57
> [ 513.851506] [<ffffffff8114de33>] load_elf_binary+0x152a/0x192b
> [ 513.851506] [<ffffffff8111329b>] search_binary_handler+0x173/0x313
> [ 513.851506] [<ffffffff8114c909>] ? load_elf_binary+0x0/0x192b
> [ 513.851506] [<ffffffff81114896>] do_execve+0x219/0x30a
> [ 513.851506] [<ffffffff8111887f>] ? getname+0x14d/0x1b3
> [ 513.851506] [<ffffffff8100a5c6>] sys_execve+0x43/0x5e
> [ 514.483501] [<ffffffff8100320a>] stub_execve+0x6a/0xc0
> [ 514.548357] Code: 74 05 83 f8 1f 75 68 48 b8 ff ff ff ff ff ff ff 07 48 21 c2 48 b8 00 00 00 00 00 ea ff ff 48 6b d2 38 48 8d 1c 02 f6 03 01 75 04 <0f> 0b eb fe 8b 4b 08 48 8d 73 08 85 c9 74 35 8d 41 01 89 4d e0
> [ 514.704292] RIP [<ffffffff811094ff>] migration_entry_wait+0xc1/0x129
> [ 514.808221] RSP <ffff880037b979d8>
> [ 514.906179] ---[ end trace 4f88495edc224d6b ]---
>
> There is a race between shift_arg_pages and migration that triggers this bug.
> A temporary stack is set up during exec and later moved. If migration moves
> a page in the temporary stack and the VMA is then removed before migration
> completes, the migration PTE may not be found, leading to a BUG when the
> stack is faulted.
>
> This patch causes pages within the temporary stack during exec to be skipped
> by migration. It does this by marking the VMA covering the temporary stack
> with an otherwise impossible combination of VMA flags. These flags are
> cleared when the temporary stack is moved to its final location.
>
> [kamezawa.hiroyu@jp.fujitsu.com: Idea for identifying and skipping temporary stacks]
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> ---
> fs/exec.c | 7 ++++++-
> include/linux/mm.h | 3 +++
> mm/rmap.c | 30 +++++++++++++++++++++++++++++-
> 3 files changed, 38 insertions(+), 2 deletions(-)
>
> diff --git a/fs/exec.c b/fs/exec.c
> index 725d7ef..13f8e7f 100644
> --- a/fs/exec.c
> +++ b/fs/exec.c
> @@ -242,9 +242,10 @@ static int __bprm_mm_init(struct linux_binprm *bprm)
> * use STACK_TOP because that can depend on attributes which aren't
> * configured yet.
> */
> + BUG_ON(VM_STACK_FLAGS & VM_STACK_INCOMPLETE_SETUP);
> vma->vm_end = STACK_TOP_MAX;
> vma->vm_start = vma->vm_end - PAGE_SIZE;
> - vma->vm_flags = VM_STACK_FLAGS;
> + vma->vm_flags = VM_STACK_FLAGS | VM_STACK_INCOMPLETE_SETUP;
> vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
> INIT_LIST_HEAD(&vma->anon_vma_chain);
> err = insert_vm_struct(mm, vma);
> @@ -616,6 +617,7 @@ int setup_arg_pages(struct linux_binprm *bprm,
> else if (executable_stack == EXSTACK_DISABLE_X)
> vm_flags &= ~VM_EXEC;
> vm_flags |= mm->def_flags;
> + vm_flags |= VM_STACK_INCOMPLETE_SETUP;
>
> ret = mprotect_fixup(vma, &prev, vma->vm_start, vma->vm_end,
> vm_flags);
> @@ -630,6 +632,9 @@ int setup_arg_pages(struct linux_binprm *bprm,
> goto out_unlock;
> }
>
> + /* mprotect_fixup is overkill to remove the temporary stack flags */
> + vma->vm_flags &= ~VM_STACK_INCOMPLETE_SETUP;
> +
> stack_expand = 131072UL; /* randomly 32*4k (or 2*64k) pages */
> stack_size = vma->vm_end - vma->vm_start;
> /*
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index eb21256..925f5bc 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -106,6 +106,9 @@ extern unsigned int kobjsize(const void *objp);
> #define VM_PFN_AT_MMAP 0x40000000 /* PFNMAP vma that is fully mapped at mmap time */
> #define VM_MERGEABLE 0x80000000 /* KSM may merge identical pages */
>
> +/* Bits set in the VMA until the stack is in its final location */
> +#define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ)
> +
> #ifndef VM_STACK_DEFAULT_FLAGS /* arch can override this */
> #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
> #endif
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 85f203e..e96565f 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1141,6 +1141,20 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
> return ret;
> }
>
> +static bool is_vma_temporary_stack(struct vm_area_struct *vma)
> +{
> + int maybe_stack = vma->vm_flags & (VM_GROWSDOWN | VM_GROWSUP);
> +
> + if (!maybe_stack)
> + return false;
> +
> + if ((vma->vm_flags & VM_STACK_INCOMPLETE_SETUP) ==
> + VM_STACK_INCOMPLETE_SETUP)
> + return true;
> +
> + return false;
> +}
> +
> /**
> * try_to_unmap_anon - unmap or unlock anonymous page using the object-based
> * rmap method
> @@ -1169,7 +1183,21 @@ static int try_to_unmap_anon(struct page *page, enum ttu_flags flags)
>
> list_for_each_entry(avc, &anon_vma->head, same_anon_vma) {
> struct vm_area_struct *vma = avc->vma;
> - unsigned long address = vma_address(page, vma);
> + unsigned long address;
> +
> + /*
> + * During exec, a temporary VMA is setup and later moved.
> + * The VMA is moved under the anon_vma lock but not the
> + * page tables leading to a race where migration cannot
> + * find the migration ptes. Rather than increasing the
> + * locking requirements of exec(), migration skips
> + * temporary VMAs until after exec() completes.
> + */
> + if (PAGE_MIGRATION && (flags & TTU_MIGRATION) &&
> + is_vma_temporary_stack(vma))
> + continue;
> +
> + address = vma_address(page, vma);
> if (address == -EFAULT)
> continue;
> ret = try_to_unmap_one(page, vma, address, flags);
>