From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1948100AbcBRUXA (ORCPT );
	Thu, 18 Feb 2016 15:23:00 -0500
Received: from terminus.zytor.com ([198.137.202.10]:47030 "EHLO
	terminus.zytor.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1947698AbcBRUW5 (ORCPT );
	Thu, 18 Feb 2016 15:22:57 -0500
Date: Thu, 18 Feb 2016 12:21:50 -0800
From: tip-bot for Dave Hansen
Message-ID:
Cc: tglx@linutronix.de, linux-kernel@vger.kernel.org, mingo@kernel.org,
	riel@redhat.com, brgerst@gmail.com, peterz@infradead.org, bp@alien8.de,
	dave.hansen@linux.intel.com, luto@amacapital.net, dvlasenk@redhat.com,
	hpa@zytor.com, torvalds@linux-foundation.org, dave@sr71.net,
	akpm@linux-foundation.org
Reply-To: peterz@infradead.org, bp@alien8.de, luto@amacapital.net,
	dave.hansen@linux.intel.com, tglx@linutronix.de,
	linux-kernel@vger.kernel.org, mingo@kernel.org, riel@redhat.com,
	brgerst@gmail.com, torvalds@linux-foundation.org, dave@sr71.net,
	akpm@linux-foundation.org, dvlasenk@redhat.com, hpa@zytor.com
In-Reply-To: <20160212210218.3A2D4045@viggo.jf.intel.com>
References: <20160212210218.3A2D4045@viggo.jf.intel.com>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:mm/pkeys] x86/mm/gup: Simplify get_user_pages() PTE bit handling
Git-Commit-ID: 1874f6895c92d991ccf85edcc55a0d9dd552d71c
X-Mailer: tip-git-log-daemon
Robot-ID:
Robot-Unsubscribe: Contact to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  1874f6895c92d991ccf85edcc55a0d9dd552d71c
Gitweb:     http://git.kernel.org/tip/1874f6895c92d991ccf85edcc55a0d9dd552d71c
Author:     Dave Hansen
AuthorDate: Fri, 12 Feb 2016 13:02:18 -0800
Committer:  Ingo Molnar
CommitDate: Thu, 18 Feb 2016 09:32:44 +0100

x86/mm/gup: Simplify get_user_pages() PTE bit handling

The current get_user_pages() code is a wee bit more complicated
than it needs to be for pte bit checking.  Currently, it
establishes a mask of required pte _PAGE_* bits and ensures that
the pte it goes after has all those bits.

This consolidates the three identical copies of this code.

Signed-off-by: Dave Hansen
Reviewed-by: Thomas Gleixner
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Dave Hansen
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Rik van Riel
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20160212210218.3A2D4045@viggo.jf.intel.com
Signed-off-by: Ingo Molnar
---
 arch/x86/mm/gup.c | 38 ++++++++++++++++++++++----------------
 1 file changed, 22 insertions(+), 16 deletions(-)

diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index ce5e454..2f0a329 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -75,6 +75,24 @@ static void undo_dev_pagemap(int *nr, int nr_start, struct page **pages)
 }
 
 /*
+ * 'pteval' can come from a pte, pmd or pud.  We only check
+ * _PAGE_PRESENT, _PAGE_USER, and _PAGE_RW in here which are the
+ * same value on all 3 types.
+ */
+static inline int pte_allows_gup(unsigned long pteval, int write)
+{
+	unsigned long need_pte_bits = _PAGE_PRESENT|_PAGE_USER;
+
+	if (write)
+		need_pte_bits |= _PAGE_RW;
+
+	if ((pteval & need_pte_bits) != need_pte_bits)
+		return 0;
+
+	return 1;
+}
+
+/*
  * The performance critical leaf functions are made noinline otherwise gcc
  * inlines everything into a single function which results in too much
  * register pressure.
@@ -83,14 +101,9 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
 		unsigned long end, int write, struct page **pages, int *nr)
 {
 	struct dev_pagemap *pgmap = NULL;
-	unsigned long mask;
 	int nr_start = *nr;
 	pte_t *ptep;
 
-	mask = _PAGE_PRESENT|_PAGE_USER;
-	if (write)
-		mask |= _PAGE_RW;
-
 	ptep = pte_offset_map(&pmd, addr);
 	do {
 		pte_t pte = gup_get_pte(ptep);
@@ -110,7 +123,8 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
 				pte_unmap(ptep);
 				return 0;
 			}
-		} else if ((pte_flags(pte) & (mask | _PAGE_SPECIAL)) != mask) {
+		} else if (!pte_allows_gup(pte_val(pte), write) ||
+			   pte_special(pte)) {
 			pte_unmap(ptep);
 			return 0;
 		}
@@ -164,14 +178,10 @@ static int __gup_device_huge_pmd(pmd_t pmd, unsigned long addr,
 static noinline int gup_huge_pmd(pmd_t pmd, unsigned long addr,
 		unsigned long end, int write, struct page **pages, int *nr)
 {
-	unsigned long mask;
 	struct page *head, *page;
 	int refs;
 
-	mask = _PAGE_PRESENT|_PAGE_USER;
-	if (write)
-		mask |= _PAGE_RW;
-	if ((pmd_flags(pmd) & mask) != mask)
+	if (!pte_allows_gup(pmd_val(pmd), write))
 		return 0;
 
 	VM_BUG_ON(!pfn_valid(pmd_pfn(pmd)));
@@ -231,14 +241,10 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
 static noinline int gup_huge_pud(pud_t pud, unsigned long addr,
 		unsigned long end, int write, struct page **pages, int *nr)
 {
-	unsigned long mask;
 	struct page *head, *page;
 	int refs;
 
-	mask = _PAGE_PRESENT|_PAGE_USER;
-	if (write)
-		mask |= _PAGE_RW;
-	if ((pud_flags(pud) & mask) != mask)
+	if (!pte_allows_gup(pud_val(pud), write))
 		return 0;
 	/* hugepages are never "special" */
 	VM_BUG_ON(pud_flags(pud) & _PAGE_SPECIAL);
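
For readers following along outside the kernel tree: the whole patch boils
down to the required-bits mask test that pte_allows_gup() performs.  Below is
a minimal standalone sketch of that pattern.  The _PAGE_* values mirror x86's
bit positions (bit 0 present, bit 1 writable, bit 2 user-accessible), but they
are stand-in definitions so the example compiles as an ordinary user-space
program; the real definitions live in arch/x86/include/asm/pgtable_types.h.

#include <stdio.h>

/* Stand-in flag bits, matching x86's bit positions for illustration only. */
#define _PAGE_PRESENT	(1UL << 0)
#define _PAGE_RW	(1UL << 1)
#define _PAGE_USER	(1UL << 2)

static inline int pte_allows_gup(unsigned long pteval, int write)
{
	unsigned long need_pte_bits = _PAGE_PRESENT|_PAGE_USER;

	if (write)
		need_pte_bits |= _PAGE_RW;

	/* Every required bit must be set; one missing bit rejects the page. */
	if ((pteval & need_pte_bits) != need_pte_bits)
		return 0;

	return 1;
}

int main(void)
{
	unsigned long ro_user = _PAGE_PRESENT | _PAGE_USER;
	unsigned long rw_user = ro_user | _PAGE_RW;

	printf("read  of ro_user: %d\n", pte_allows_gup(ro_user, 0)); /* 1 */
	printf("write of ro_user: %d\n", pte_allows_gup(ro_user, 1)); /* 0 */
	printf("write of rw_user: %d\n", pte_allows_gup(rw_user, 1)); /* 1 */
	return 0;
}

Note the design choice the patch's comment calls out: the helper takes a raw
pteval rather than a typed pte_t/pmd_t/pud_t precisely because _PAGE_PRESENT,
_PAGE_USER and _PAGE_RW sit at the same bit positions at all three page-table
levels, which is what lets one function replace the three identical mask
computations.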