From: Dave Hansen <dave@sr71.net>
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, Dave Hansen <dave@sr71.net>, dave.hansen@linux.intel.com
Subject: [PATCH 19/37] x86, mm: simplify get_user_pages() PTE bit handling
Date: Mon, 16 Nov 2015 19:35:37 -0800 [thread overview]
Message-ID: <20151117033537.C2DBB366@viggo.jf.intel.com> (raw)
In-Reply-To: <20151117033511.BFFA1440@viggo.jf.intel.com>
From: Dave Hansen <dave.hansen@linux.intel.com>
The current get_user_pages() code is a wee bit more complicated
than it needs to be for pte bit checking. Currently, it establishes
a mask of required pte _PAGE_* bits and ensures that the pte it
goes after has all of those bits set.

This patch consolidates the three identical copies of that
mask-building code into a single helper.
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
---
b/arch/x86/mm/gup.c | 46 +++++++++++++++++++++++++++++-----------------
1 file changed, 29 insertions(+), 17 deletions(-)
diff -puN arch/x86/mm/gup.c~pkeys-16-gup-swizzle arch/x86/mm/gup.c
--- a/arch/x86/mm/gup.c~pkeys-16-gup-swizzle 2015-11-16 12:35:43.690550937 -0800
+++ b/arch/x86/mm/gup.c 2015-11-16 12:35:43.693551073 -0800
@@ -63,6 +63,31 @@ retry:
#endif
}
+static inline int pte_allows_gup(pte_t pte, int write)
+{
+ /*
+ * The pte here can come from a pte, pmd or pud. Only check
+ * flags that are valid *and* have the same value on all three
+ * types. (PAT is currently the only flag where that is not
+ * true, so it is not checked here.)
+ */
+ if ((pte_flags(pte) & (_PAGE_PRESENT|_PAGE_USER)) !=
+ (_PAGE_PRESENT|_PAGE_USER))
+ return 0;
+ if (write && !pte_write(pte))
+ return 0;
+ return 1;
+}
+
+static inline int pmd_allows_gup(pmd_t pmd, int write)
+{
+ return pte_allows_gup(*(pte_t *)&pmd, write);
+}
+
+static inline int pud_allows_gup(pud_t pud, int write)
+{
+ return pte_allows_gup(*(pte_t *)&pud, write);
+}
+
/*
* The performance critical leaf functions are made noinline otherwise gcc
* inlines everything into a single function which results in too much
@@ -71,13 +96,8 @@ retry:
static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
unsigned long end, int write, struct page **pages, int *nr)
{
- unsigned long mask;
pte_t *ptep;
- mask = _PAGE_PRESENT|_PAGE_USER;
- if (write)
- mask |= _PAGE_RW;
-
ptep = pte_offset_map(&pmd, addr);
do {
pte_t pte = gup_get_pte(ptep);
@@ -88,8 +108,8 @@ static noinline int gup_pte_range(pmd_t
pte_unmap(ptep);
return 0;
}
-
- if ((pte_flags(pte) & (mask | _PAGE_SPECIAL)) != mask) {
+ if (!pte_allows_gup(pte, write) ||
+ pte_special(pte)) {
pte_unmap(ptep);
return 0;
}
@@ -117,14 +137,10 @@ static inline void get_head_page_multipl
static noinline int gup_huge_pmd(pmd_t pmd, unsigned long addr,
unsigned long end, int write, struct page **pages, int *nr)
{
- unsigned long mask;
struct page *head, *page;
int refs;
- mask = _PAGE_PRESENT|_PAGE_USER;
- if (write)
- mask |= _PAGE_RW;
- if ((pmd_flags(pmd) & mask) != mask)
+ if (!pmd_allows_gup(pmd, write))
return 0;
/* hugepages are never "special" */
VM_BUG_ON(pmd_flags(pmd) & _PAGE_SPECIAL);
@@ -193,14 +209,10 @@ static int gup_pmd_range(pud_t pud, unsi
static noinline int gup_huge_pud(pud_t pud, unsigned long addr,
unsigned long end, int write, struct page **pages, int *nr)
{
- unsigned long mask;
struct page *head, *page;
int refs;
- mask = _PAGE_PRESENT|_PAGE_USER;
- if (write)
- mask |= _PAGE_RW;
- if ((pud_flags(pud) & mask) != mask)
+ if (!pud_allows_gup(pud, write))
return 0;
/* hugepages are never "special" */
VM_BUG_ON(pud_flags(pud) & _PAGE_SPECIAL);
_