From: "tip-bot for Kirill A. Shutemov"
Date: Tue, 14 Mar 2017 02:38:18 -0700
To: linux-tip-commits@vger.kernel.org
Cc: mhocko@suse.com, jpoimboe@redhat.com, mingo@kernel.org, luto@kernel.org,
    kirill.shutemov@linux.intel.com, dvlasenk@redhat.com, hpa@zytor.com,
    tglx@linutronix.de, dave.hansen@intel.com, akpm@linux-foundation.org,
    peterz@infradead.org, linux-kernel@vger.kernel.org, bp@alien8.de,
    torvalds@linux-foundation.org, brgerst@gmail.com, arnd@arndb.de
In-Reply-To: <20170313143309.16020-4-kirill.shutemov@linux.intel.com>
References: <20170313143309.16020-4-kirill.shutemov@linux.intel.com>
Subject: [tip:x86/mm] x86/mm/gup: Add 5-level paging support
Git-Commit-ID: 0318e5abe1c0933b8bf6763a1a0d3caec4f0826d
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  0318e5abe1c0933b8bf6763a1a0d3caec4f0826d
Gitweb:     http://git.kernel.org/tip/0318e5abe1c0933b8bf6763a1a0d3caec4f0826d
Author:     Kirill A. Shutemov
AuthorDate: Mon, 13 Mar 2017 17:33:06 +0300
Committer:  Ingo Molnar
CommitDate: Tue, 14 Mar 2017 08:45:08 +0100

x86/mm/gup: Add 5-level paging support

Extend get_user_pages_fast() to handle an additional page table level.

Signed-off-by: Kirill A. Shutemov
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Arnd Bergmann
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Dave Hansen
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Josh Poimboeuf
Cc: Linus Torvalds
Cc: Michal Hocko
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20170313143309.16020-4-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar
---
 arch/x86/mm/gup.c | 33 +++++++++++++++++++++++++++------
 1 file changed, 27 insertions(+), 6 deletions(-)

diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index 1f3b6ef..456dfdf 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -76,9 +76,9 @@ static void undo_dev_pagemap(int *nr, int nr_start, struct page **pages)
 }
 
 /*
- * 'pteval' can come from a pte, pmd or pud. We only check
+ * 'pteval' can come from a pte, pmd, pud or p4d. We only check
  * _PAGE_PRESENT, _PAGE_USER, and _PAGE_RW in here which are the
- * same value on all 3 types.
+ * same value on all 4 types.
  */
 static inline int pte_allows_gup(unsigned long pteval, int write)
 {
@@ -295,13 +295,13 @@ static noinline int gup_huge_pud(pud_t pud, unsigned long addr,
 	return 1;
 }
 
-static int gup_pud_range(pgd_t pgd, unsigned long addr, unsigned long end,
+static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
 		int write, struct page **pages, int *nr)
 {
 	unsigned long next;
 	pud_t *pudp;
 
-	pudp = pud_offset(&pgd, addr);
+	pudp = pud_offset(&p4d, addr);
 	do {
 		pud_t pud = *pudp;
 
@@ -320,6 +320,27 @@ static int gup_pud_range(pgd_t pgd, unsigned long addr, unsigned long end,
 	return 1;
 }
 
+static int gup_p4d_range(pgd_t pgd, unsigned long addr, unsigned long end,
+		int write, struct page **pages, int *nr)
+{
+	unsigned long next;
+	p4d_t *p4dp;
+
+	p4dp = p4d_offset(&pgd, addr);
+	do {
+		p4d_t p4d = *p4dp;
+
+		next = p4d_addr_end(addr, end);
+		if (p4d_none(p4d))
+			return 0;
+		BUILD_BUG_ON(p4d_large(p4d));
+		if (!gup_pud_range(p4d, addr, next, write, pages, nr))
+			return 0;
+	} while (p4dp++, addr = next, addr != end);
+
+	return 1;
+}
+
 /*
  * Like get_user_pages_fast() except its IRQ-safe in that it won't fall
  * back to the regular GUP.
@@ -368,7 +389,7 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 		next = pgd_addr_end(addr, end);
 		if (pgd_none(pgd))
 			break;
-		if (!gup_pud_range(pgd, addr, next, write, pages, &nr))
+		if (!gup_p4d_range(pgd, addr, next, write, pages, &nr))
 			break;
 	} while (pgdp++, addr = next, addr != end);
 	local_irq_restore(flags);
@@ -440,7 +461,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
 		next = pgd_addr_end(addr, end);
 		if (pgd_none(pgd))
 			goto slow;
-		if (!gup_pud_range(pgd, addr, next, write, pages, &nr))
+		if (!gup_p4d_range(pgd, addr, next, write, pages, &nr))
 			goto slow;
 	} while (pgdp++, addr = next, addr != end);
 	local_irq_enable();