From: Will Deacon <will.deacon@arm.com>
To: David Miller <davem@davemloft.net>
Cc: "akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-arch@vger.kernel.org" <linux-arch@vger.kernel.org>,
"mhocko@suse.cz" <mhocko@suse.cz>,
"kirill@shutemov.name" <kirill@shutemov.name>,
"aarcange@redhat.com" <aarcange@redhat.com>,
"cmetcalf@tilera.com" <cmetcalf@tilera.com>,
Steve Capper <Steve.Capper@arm.com>
Subject: Re: [PATCH v2] mm: thp: Set the accessed flag for old pages on access fault.
Date: Wed, 17 Oct 2012 16:54:02 +0100 [thread overview]
Message-ID: <20121017155401.GJ5973@mudshark.cambridge.arm.com> (raw)
In-Reply-To: <20121017.112620.1865348978594874782.davem@davemloft.net>
On Wed, Oct 17, 2012 at 04:26:20PM +0100, David Miller wrote:
> From: Will Deacon <will.deacon@arm.com>
> Date: Wed, 17 Oct 2012 14:01:25 +0100
>
> > + update_mmu_cache(vma, address, pmd);
>
> This won't build, use update_mmu_cache_pmd().
Good catch. They're both empty macros on ARM, so the type checker didn't spot
it. Updated patch below.
Cheers,
Will
--->8
From a548127ac12c14c178d4d817fa454baec9043d89 Mon Sep 17 00:00:00 2001
From: Will Deacon <will.deacon@arm.com>
Date: Tue, 2 Oct 2012 11:18:52 +0100
Subject: [PATCH] mm: thp: Set the accessed flag for old pages on access fault.
On x86, memory accesses to pages without the ACCESSED flag set result in the
ACCESSED flag being set automatically. On the ARM architecture, a page access
fault is raised instead (and it will continue to be raised until the ACCESSED
flag is set for the appropriate PTE/PMD).
For normal memory pages, handle_pte_fault will call pte_mkyoung (effectively
setting the ACCESSED flag). For transparent huge pages, pmd_mkyoung will only
be called for a write fault.
This patch ensures that faults on transparent huge pages which do not result
in a CoW also update the access flags for the faulting pmd.
Cc: Chris Metcalf <cmetcalf@tilera.com>
Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 include/linux/huge_mm.h |    2 ++
 mm/huge_memory.c        |    8 ++++++++
 mm/memory.c             |    9 ++++++++-
 3 files changed, 18 insertions(+), 1 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index b31cb7d..62a0d5a 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -8,6 +8,8 @@ extern int do_huge_pmd_anonymous_page(struct mm_struct *mm,
 extern int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
			 pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
			 struct vm_area_struct *vma);
+extern void huge_pmd_set_accessed(struct vm_area_struct *vma,
+				  unsigned long address, pmd_t *pmd, int dirty);
 extern int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
			       unsigned long address, pmd_t *pmd,
			       pmd_t orig_pmd);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a863af2..d9aa402 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -878,6 +878,14 @@ out_free_pages:
	goto out;
 }

+void huge_pmd_set_accessed(struct vm_area_struct *vma, unsigned long address,
+			   pmd_t *pmd, int dirty)
+{
+	pmd_t entry = pmd_mkyoung(*pmd);
+	if (pmdp_set_access_flags(vma, address & HPAGE_PMD_MASK, pmd, entry, dirty))
+		update_mmu_cache_pmd(vma, address, pmd);
+}
+
 int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
			unsigned long address, pmd_t *pmd, pmd_t orig_pmd)
 {
diff --git a/mm/memory.c b/mm/memory.c
index fb135ba..c55c17c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3539,7 +3539,8 @@ retry:

		barrier();
		if (pmd_trans_huge(orig_pmd)) {
-			if (flags & FAULT_FLAG_WRITE &&
+			unsigned int dirty = flags & FAULT_FLAG_WRITE;
+			if (dirty &&
			    !pmd_write(orig_pmd) &&
			    !pmd_trans_splitting(orig_pmd)) {
				ret = do_huge_pmd_wp_page(mm, vma, address, pmd,
@@ -3552,7 +3553,13 @@ retry:
			if (unlikely(ret & VM_FAULT_OOM))
				goto retry;
			return ret;
+		} else if (pmd_trans_huge_lock(pmd, vma) == 1) {
+			if (likely(pmd_same(*pmd, orig_pmd)))
+				huge_pmd_set_accessed(vma, address, pmd,
+						      dirty);
+			spin_unlock(&mm->page_table_lock);
		}
+
		return 0;
	}
 }
--
1.7.4.1