* [PATCH] mm: softdirty: unmapped addresses between VMAs are clean
From: Peter Feiner <pfeiner@google.com>
Date: 2014-09-10 23:24 UTC
To: linux-mm
Cc: linux-kernel, Peter Feiner, Kirill A. Shutemov, Cyrill Gorcunov, Pavel Emelyanov, Jamie Liu, Hugh Dickins, Naoya Horiguchi, Andrew Morton

If a /proc/pid/pagemap read spans a [VMA, an unmapped region, then a
VM_SOFTDIRTY VMA], the virtual pages in the unmapped region are reported
as softdirty. Here's a program to demonstrate the bug:

	#include <assert.h>
	#include <fcntl.h>
	#include <stdint.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main() {
		const uint64_t PAGEMAP_SOFTDIRTY = 1ul << 55;
		uint64_t pme[3];
		int fd = open("/proc/self/pagemap", O_RDONLY);
		char *m = mmap(NULL, 3 * getpagesize(), PROT_READ,
			       MAP_ANONYMOUS | MAP_SHARED, -1, 0);
		munmap(m + getpagesize(), getpagesize());
		pread(fd, pme, 24, (unsigned long) m / getpagesize() * 8);
		assert(pme[0] & PAGEMAP_SOFTDIRTY);    /* passes */
		assert(!(pme[1] & PAGEMAP_SOFTDIRTY)); /* fails */
		assert(pme[2] & PAGEMAP_SOFTDIRTY);    /* passes */
		return 0;
	}

(Note that all pages in new VMAs are softdirty until cleared.)

Tested:
	Used the program given above. I'm going to include this code in
	a selftest in the future.

Signed-off-by: Peter Feiner <pfeiner@google.com>
---
 fs/proc/task_mmu.c | 61 +++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 40 insertions(+), 21 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index dfc791c..eb92a8c 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1020,7 +1020,6 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	spinlock_t *ptl;
 	pte_t *pte;
 	int err = 0;
-	pagemap_entry_t pme = make_pme(PM_NOT_PRESENT(pm->v2));
 
 	/* find the first VMA at or above 'addr' */
 	vma = find_vma(walk->mm, addr);
@@ -1034,6 +1033,7 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 
 		for (; addr != end; addr += PAGE_SIZE) {
 			unsigned long offset;
+			pagemap_entry_t pme;
 
 			offset = (addr & ~PAGEMAP_WALK_MASK) >>
 					PAGE_SHIFT;
@@ -1048,32 +1048,51 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 
 	if (pmd_trans_unstable(pmd))
 		return 0;
-	for (; addr != end; addr += PAGE_SIZE) {
-		int flags2;
-
-		/* check to see if we've left 'vma' behind
-		 * and need a new, higher one */
-		if (vma && (addr >= vma->vm_end)) {
-			vma = find_vma(walk->mm, addr);
-			if (vma && (vma->vm_flags & VM_SOFTDIRTY))
-				flags2 = __PM_SOFT_DIRTY;
-			else
-				flags2 = 0;
-			pme = make_pme(PM_NOT_PRESENT(pm->v2) | PM_STATUS2(pm->v2, flags2));
+
+	while (1) {
+		unsigned long vm_start = end;
+		unsigned long vm_end = end;
+		unsigned long vm_flags = 0;
+
+		if (vma) {
+			/*
+			 * We can't possibly be in a hugetlb VMA. In general,
+			 * for a mm_walk with a pmd_entry and a hugetlb_entry,
+			 * the pmd_entry can only be called on addresses in a
+			 * hugetlb if the walk starts in a non-hugetlb VMA and
+			 * spans a hugepage VMA. Since pagemap_read walks are
+			 * PMD-sized and PMD-aligned, this will never be true.
+			 */
+			BUG_ON(is_vm_hugetlb_page(vma));
+			vm_start = vma->vm_start;
+			vm_end = min(end, vma->vm_end);
+			vm_flags = vma->vm_flags;
+		}
+
+		/* Addresses before the VMA. */
+		for (; addr < vm_start; addr += PAGE_SIZE) {
+			pagemap_entry_t pme = make_pme(PM_NOT_PRESENT(pm->v2));
+
+			err = add_to_pagemap(addr, &pme, pm);
+			if (err)
+				return err;
 		}
-		/* check that 'vma' actually covers this address,
-		 * and that it isn't a huge page vma */
-		if (vma && (vma->vm_start <= addr) &&
-		    !is_vm_hugetlb_page(vma)) {
+		/* Addresses in the VMA. */
+		for (; addr < vm_end; addr += PAGE_SIZE) {
+			pagemap_entry_t pme;
 			pte = pte_offset_map(pmd, addr);
 			pte_to_pagemap_entry(&pme, pm, vma, addr, *pte);
-			/* unmap before userspace copy */
 			pte_unmap(pte);
+			err = add_to_pagemap(addr, &pme, pm);
+			if (err)
+				return err;
 		}
-		err = add_to_pagemap(addr, &pme, pm);
-		if (err)
-			return err;
+
+		if (addr == end)
+			break;
+
+		vma = find_vma(walk->mm, addr);
 	}
 
 	cond_resched();
-- 
2.1.0.rc2.206.gedb03e5
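[Editorial aside, not part of the thread: the pread() offset arithmetic in
the demo generalizes to any address. Each pagemap record is 64 bits, indexed
by virtual page frame number; a minimal helper, with a name of our own
choosing (it is not part of the patch or of any kernel API), could look like
this:]

    #include <stdint.h>
    #include <unistd.h>

    /* File offset of the pagemap entry describing 'addr': one
     * 8-byte record per virtual page. */
    static off_t pagemap_offset(const void *addr)
    {
            return (off_t)((uintptr_t)addr / getpagesize()
                           * sizeof(uint64_t));
    }

[With this, the demo's seek position is pagemap_offset(m), and the read
length of 24 bytes is 3 * sizeof(uint64_t): one record for each page of the
three-page mapping.]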
* Re: [PATCH] mm: softdirty: unmapped addresses between VMAs are clean
From: Andrew Morton
Date: 2014-09-10 23:36 UTC
To: Peter Feiner
Cc: linux-mm, linux-kernel, Kirill A. Shutemov, Cyrill Gorcunov, Pavel Emelyanov, Jamie Liu, Hugh Dickins, Naoya Horiguchi

On Wed, 10 Sep 2014 16:24:46 -0700 Peter Feiner <pfeiner@google.com> wrote:

> If a /proc/pid/pagemap read spans a [VMA, an unmapped region, then a
> VM_SOFTDIRTY VMA], the virtual pages in the unmapped region are reported
> as softdirty. Here's a program to demonstrate the bug:
>
> ...
>
> (Note that all pages in new VMAs are softdirty until cleared.)
>
> Tested:
> 	Used the program given above. I'm going to include this code in
> 	a selftest in the future.
>
> ...
>
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
>
> ...
>
> @@ -1048,32 +1048,51 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> 
>  	if (pmd_trans_unstable(pmd))
>  		return 0;
> -	for (; addr != end; addr += PAGE_SIZE) {
> -		int flags2;
> -
> -		/* check to see if we've left 'vma' behind
> -		 * and need a new, higher one */
> -		if (vma && (addr >= vma->vm_end)) {
> -			vma = find_vma(walk->mm, addr);
> -			if (vma && (vma->vm_flags & VM_SOFTDIRTY))
> -				flags2 = __PM_SOFT_DIRTY;
> -			else
> -				flags2 = 0;
> -			pme = make_pme(PM_NOT_PRESENT(pm->v2) | PM_STATUS2(pm->v2, flags2));
> +
> +	while (1) {
> +		unsigned long vm_start = end;

Did you really mean to do that?  If so, perhaps a little comment to
explain how it works?

> +		unsigned long vm_end = end;
> +		unsigned long vm_flags = 0;
> +
> +		if (vma) {
> +			/*
> +			 * We can't possibly be in a hugetlb VMA. In general,
> +			 * for a mm_walk with a pmd_entry and a hugetlb_entry,
> +			 * the pmd_entry can only be called on addresses in a
> +			 * hugetlb if the walk starts in a non-hugetlb VMA and
> +			 * spans a hugepage VMA. Since pagemap_read walks are
> +			 * PMD-sized and PMD-aligned, this will never be true.
> +			 */
> +			BUG_ON(is_vm_hugetlb_page(vma));
> +			vm_start = vma->vm_start;
> +			vm_end = min(end, vma->vm_end);
> +			vm_flags = vma->vm_flags;
> +		}
> +
> +		/* Addresses before the VMA. */
> +		for (; addr < vm_start; addr += PAGE_SIZE) {
> +			pagemap_entry_t pme = make_pme(PM_NOT_PRESENT(pm->v2));
> +
> +			err = add_to_pagemap(addr, &pme, pm);
> +			if (err)
> +				return err;
>
> ...
* Re: [PATCH] mm: softdirty: unmapped addresses between VMAs are clean
From: Peter Feiner <pfeiner@google.com>
Date: 2014-09-11 5:41 UTC
To: Andrew Morton
Cc: linux-mm, linux-kernel, Kirill A. Shutemov, Cyrill Gorcunov, Pavel Emelyanov, Jamie Liu, Hugh Dickins, Naoya Horiguchi

On Wed, Sep 10, 2014 at 04:36:28PM -0700, Andrew Morton wrote:
> On Wed, 10 Sep 2014 16:24:46 -0700 Peter Feiner <pfeiner@google.com> wrote:
> > @@ -1048,32 +1048,51 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> > +	while (1) {
> > +		unsigned long vm_start = end;
>
> Did you really mean to do that?  If so, perhaps a little comment to
> explain how it works?

It's the same idea that I used in the pagemap_pte_hole patch that I submitted
today: if vma is NULL, then we fill in the pagemap from (addr) to (end) with
non-present pagemap entries.

Should I submit a v2 with a comment?
* Re: [PATCH] mm: softdirty: unmapped addresses between VMAs are clean
From: Andrew Morton
Date: 2014-09-11 19:54 UTC
To: Peter Feiner
Cc: linux-mm, linux-kernel, Kirill A. Shutemov, Cyrill Gorcunov, Pavel Emelyanov, Jamie Liu, Hugh Dickins, Naoya Horiguchi

On Wed, 10 Sep 2014 22:41:04 -0700 Peter Feiner <pfeiner@google.com> wrote:

> On Wed, Sep 10, 2014 at 04:36:28PM -0700, Andrew Morton wrote:
> > On Wed, 10 Sep 2014 16:24:46 -0700 Peter Feiner <pfeiner@google.com> wrote:
> > > @@ -1048,32 +1048,51 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> > > +	while (1) {
> > > +		unsigned long vm_start = end;
> >
> > Did you really mean to do that?  If so, perhaps a little comment to
> > explain how it works?
>
> It's the same idea that I used in the pagemap_pte_hole patch that I submitted
> today: if vma is NULL, then we fill in the pagemap from (addr) to (end) with
> non-present pagemap entries.
>
> Should I submit a v2 with a comment?

I spent quite some time staring at that code wondering wtf, so anything
you can do to clarify it would be good.

I think a better name would be plain old "start", to communicate that
it's just a local convenience variable.  "vm_start" means "start of a
vma" and that isn't accurate in this context; in fact it is misleading.
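[Editorial aside, not part of the thread: the idiom under discussion is
easiest to see outside the kernel. The following standalone sketch uses
illustrative names and a fixed PAGE_SIZE of our own choosing, not the
kernel's: when no VMA remains, both bounds collapse to 'end', so the hole
loop consumes the rest of the range and the VMA loop runs zero times.]

    #include <stdio.h>

    #define PAGE_SIZE 0x1000UL

    struct vma { unsigned long start, end; };

    static unsigned long min_ul(unsigned long a, unsigned long b)
    {
            return a < b ? a : b;
    }

    /* One chunk of the walk; vma is the first VMA at or above addr,
     * or NULL if there is none. */
    static void walk_chunk(unsigned long addr, unsigned long end,
                           const struct vma *vma)
    {
            /* With no VMA, both bounds are 'end': everything left is
             * a hole, and the mapped loop degenerates to nothing. */
            unsigned long vm_start = vma ? min_ul(vma->start, end) : end;
            unsigned long vm_end = vma ? min_ul(vma->end, end) : end;

            for (; addr < vm_start; addr += PAGE_SIZE)
                    printf("%#lx: hole, not present\n", addr);
            for (; addr < vm_end; addr += PAGE_SIZE)
                    printf("%#lx: mapped\n", addr);
    }

    int main(void)
    {
            struct vma v = { 0x2000, 0x4000 };

            walk_chunk(0x0000, 0x6000, &v);   /* hole, then the VMA */
            walk_chunk(0x4000, 0x6000, NULL); /* no VMA: all hole */
            return 0;
    }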
* [PATCH v2] mm: softdirty: unmapped addresses between VMAs are clean
From: Peter Feiner <pfeiner@google.com>
Date: 2014-09-15 18:40 UTC
To: linux-mm
Cc: linux-kernel, Peter Feiner, Kirill A. Shutemov, Cyrill Gorcunov, Pavel Emelyanov, Jamie Liu, Hugh Dickins, Naoya Horiguchi, Andrew Morton

If a /proc/pid/pagemap read spans a [VMA, an unmapped region, then a
VM_SOFTDIRTY VMA], the virtual pages in the unmapped region are reported
as softdirty. Here's a program to demonstrate the bug:

	#include <assert.h>
	#include <fcntl.h>
	#include <stdint.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main() {
		const uint64_t PAGEMAP_SOFTDIRTY = 1ul << 55;
		uint64_t pme[3];
		int fd = open("/proc/self/pagemap", O_RDONLY);
		char *m = mmap(NULL, 3 * getpagesize(), PROT_READ,
			       MAP_ANONYMOUS | MAP_SHARED, -1, 0);
		munmap(m + getpagesize(), getpagesize());
		pread(fd, pme, 24, (unsigned long) m / getpagesize() * 8);
		assert(pme[0] & PAGEMAP_SOFTDIRTY);    /* passes */
		assert(!(pme[1] & PAGEMAP_SOFTDIRTY)); /* fails */
		assert(pme[2] & PAGEMAP_SOFTDIRTY);    /* passes */
		return 0;
	}

(Note that all pages in new VMAs are softdirty until cleared.)

Tested:
	Used the program given above. I'm going to include this code in
	a selftest in the future.

Signed-off-by: Peter Feiner <pfeiner@google.com>
---

v1 -> v2:
	Restructured the patch to make the logic clearer.

---
 fs/proc/task_mmu.c | 61 +++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 40 insertions(+), 21 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index dfc791c..2abf37b 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1020,7 +1020,6 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	spinlock_t *ptl;
 	pte_t *pte;
 	int err = 0;
-	pagemap_entry_t pme = make_pme(PM_NOT_PRESENT(pm->v2));
 
 	/* find the first VMA at or above 'addr' */
 	vma = find_vma(walk->mm, addr);
@@ -1034,6 +1033,7 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 
 		for (; addr != end; addr += PAGE_SIZE) {
 			unsigned long offset;
+			pagemap_entry_t pme;
 
 			offset = (addr & ~PAGEMAP_WALK_MASK) >>
 					PAGE_SHIFT;
@@ -1048,32 +1048,51 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 
 	if (pmd_trans_unstable(pmd))
 		return 0;
-	for (; addr != end; addr += PAGE_SIZE) {
-		int flags2;
-
-		/* check to see if we've left 'vma' behind
-		 * and need a new, higher one */
-		if (vma && (addr >= vma->vm_end)) {
-			vma = find_vma(walk->mm, addr);
-			if (vma && (vma->vm_flags & VM_SOFTDIRTY))
-				flags2 = __PM_SOFT_DIRTY;
-			else
-				flags2 = 0;
-			pme = make_pme(PM_NOT_PRESENT(pm->v2) | PM_STATUS2(pm->v2, flags2));
+
+	while (1) {
+		/* End of address space hole, which we mark as non-present. */
+		unsigned long hole_end;
+
+		if (vma)
+			hole_end = min(end, vma->vm_start);
+		else
+			hole_end = end;
+
+		for (; addr < hole_end; addr += PAGE_SIZE) {
+			pagemap_entry_t pme = make_pme(PM_NOT_PRESENT(pm->v2));
+
+			err = add_to_pagemap(addr, &pme, pm);
+			if (err)
+				return err;
 		}
-		/* check that 'vma' actually covers this address,
-		 * and that it isn't a huge page vma */
-		if (vma && (vma->vm_start <= addr) &&
-		    !is_vm_hugetlb_page(vma)) {
+
+		if (!vma)
+			break;
+		/*
+		 * We can't possibly be in a hugetlb VMA. In general,
+		 * for a mm_walk with a pmd_entry and a hugetlb_entry,
+		 * the pmd_entry can only be called on addresses in a
+		 * hugetlb if the walk starts in a non-hugetlb VMA and
+		 * spans a hugepage VMA. Since pagemap_read walks are
+		 * PMD-sized and PMD-aligned, this will never be true.
+		 */
+		BUG_ON(is_vm_hugetlb_page(vma));
+
+		/* Addresses in the VMA. */
+		for (; addr < min(end, vma->vm_end); addr += PAGE_SIZE) {
+			pagemap_entry_t pme;
 			pte = pte_offset_map(pmd, addr);
 			pte_to_pagemap_entry(&pme, pm, vma, addr, *pte);
-			/* unmap before userspace copy */
 			pte_unmap(pte);
+			err = add_to_pagemap(addr, &pme, pm);
+			if (err)
+				return err;
 		}
-		err = add_to_pagemap(addr, &pme, pm);
-		if (err)
-			return err;
+
+		if (addr == end)
+			break;
+
+		vma = find_vma(walk->mm, addr);
 	}
 
 	cond_resched();
-- 
2.1.0.rc2.206.gedb03e5
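[Editorial aside, not part of the thread: the comment above the BUG_ON rests
on pagemap_read() clamping each walk to one PMD. A sketch of that arithmetic
with typical x86-64 constants, which we assume here for illustration (the
kernel's PAGEMAP_WALK_SIZE is PMD-sized, but the exact values are
architecture-dependent):]

    #include <stdio.h>

    #define PMD_SHIFT 21                 /* 2 MiB PMDs on x86-64 */
    #define PMD_SIZE  (1UL << PMD_SHIFT)
    #define PMD_MASK  (~(PMD_SIZE - 1))

    int main(void)
    {
            unsigned long addr = 0x7f1234567000UL;
            /* Each walk stays inside the PMD containing 'addr', so a
             * single pagemap_pte_range() call never crosses from a
             * normal VMA's page table into a hugetlb one. */
            unsigned long walk_start = addr & PMD_MASK;
            unsigned long walk_end = walk_start + PMD_SIZE;

            printf("walk covers [%#lx, %#lx)\n", walk_start, walk_end);
            return 0;
    }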
* Re: [PATCH v2] mm: softdirty: unmapped addresses between VMAs are clean
From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Date: 2014-09-26 20:33 UTC
To: Peter Feiner
Cc: linux-mm, linux-kernel, Kirill A. Shutemov, Cyrill Gorcunov, Pavel Emelyanov, Jamie Liu, Hugh Dickins, Andrew Morton

On Mon, Sep 15, 2014 at 11:40:38AM -0700, Peter Feiner wrote:
> If a /proc/pid/pagemap read spans a [VMA, an unmapped region, then a
> VM_SOFTDIRTY VMA], the virtual pages in the unmapped region are reported
> as softdirty. Here's a program to demonstrate the bug:
>
> ...
>
> Signed-off-by: Peter Feiner <pfeiner@google.com>

I triggered the BUG_ON(is_vm_hugetlb_page(vma)) introduced by this patch
when I simply read /proc/pid/pagemap of a process using hugetlb.
The BUG_ON itself looks right, but find_vma() can find VMAs beyond the
PMD boundary, so checking for the overrun is necessary.

Could you test and merge the following change?
---
From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Date: Fri, 26 Sep 2014 15:57:39 -0400
Subject: [PATCH] pagemap: prevent pagemap_pte_range() from overrunning

When the vm_end address of the last VMA just before a vma(VM_HUGETLB) is
not aligned to a PMD boundary, the while loop in pagemap_pte_range()
gets the vma(VM_HUGETLB) and triggers BUG_ON(is_vm_hugetlb_page(vma)).
This patch fixes it by checking for the overrun.

Fixes: 62c98294410d ("mm: softdirty: unmapped addresses between VMAs are clean")
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
 fs/proc/task_mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 5674675adeae..f2b15da32a7f 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1083,7 +1083,7 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 				return err;
 		}
 
-		if (!vma)
+		if (!vma || vma->vm_start >= end)
 			break;
 		/*
 		 * We can't possibly be in a hugetlb VMA. In general,
-- 
1.9.3

Thanks,
Naoya Horiguchi

> ...
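[Editorial aside, not part of the thread: a standalone illustration of the
case Naoya describes, with addresses and structures of our own invention
rather than kernel code. find_vma() returns the first VMA whose end lies
above the address, and that VMA can start entirely beyond the current walk's
'end', so the hugetlb BUG_ON fired on a VMA the walk was never going to
process.]

    #include <assert.h>
    #include <stdio.h>

    struct vma { unsigned long start, end; int hugetlb; };

    /* Same contract as the kernel's find_vma(): first VMA whose end
     * lies above addr -- it may start beyond the caller's range. */
    static const struct vma *find_vma(const struct vma *v, int n,
                                      unsigned long addr)
    {
            for (int i = 0; i < n; i++)
                    if (v[i].end > addr)
                            return &v[i];
            return NULL;
    }

    int main(void)
    {
            /* A small VMA ending mid-PMD, then a hugetlb VMA starting
             * at the next PMD boundary. */
            const struct vma vmas[] = {
                    { 0x1fe000, 0x1ff000, 0 },
                    { 0x200000, 0x400000, 1 },
            };
            unsigned long addr = 0x1ff000; /* just past the small VMA */
            unsigned long end = 0x200000;  /* this walk's PMD boundary */
            const struct vma *vma = find_vma(vmas, 2, addr);

            assert(vma && vma->hugetlb);   /* v2 would hit BUG_ON here */
            if (vma->start >= end)         /* Naoya's overrun check */
                    printf("VMA at %#lx starts at/after end %#lx: "
                           "stop walking, no BUG\n", vma->start, end);
            return 0;
    }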
* Re: [PATCH v2] mm: softdirty: unmapped addresses between VMAs are clean
From: Peter Feiner <pfeiner@google.com>
Date: 2014-10-02 0:25 UTC
To: Naoya Horiguchi
Cc: linux-mm, linux-kernel, Kirill A. Shutemov, Cyrill Gorcunov, Pavel Emelyanov, Jamie Liu, Hugh Dickins, Andrew Morton

On Fri, Sep 26, 2014 at 04:33:26PM -0400, Naoya Horiguchi wrote:
> Could you test and merge the following change?

Many apologies for the late reply! Your email was in my spam folder :-(

I see that Andrew has already merged the patch, so we're in good shape!
Thanks for fixing this bug, Naoya!