From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCH 3/7] mm/gup: Move page table entry dereference into helper
Date: Thu, 16 Mar 2017 18:26:51 +0300
Message-ID: <20170316152655.37789-4-kirill.shutemov@linux.intel.com>
References: <20170316152655.37789-1-kirill.shutemov@linux.intel.com>
In-Reply-To: <20170316152655.37789-1-kirill.shutemov@linux.intel.com>
Sender: owner-linux-mm@kvack.org
To: Linus Torvalds, Andrew Morton, x86@kernel.org, Thomas Gleixner,
	Ingo Molnar, "H. Peter Anvin"
Cc: Dave Hansen, "Aneesh Kumar K . V", Steve Capper, Dann Frazier,
	Catalin Marinas, linux-arch@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
List-Id: linux-arch.vger.kernel.org

This is a preparation patch for the transition of x86 to the generic
GUP_fast() implementation.

On x86 with PAE, a page table entry is larger than sizeof(long), so we
need to provide a helper that can read the entry atomically.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/gup.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index a62a778ce4ec..ed2259dc4606 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1189,6 +1189,17 @@ struct page *get_dump_page(unsigned long addr)
  */
 #ifdef CONFIG_HAVE_GENERIC_RCU_GUP
 
+#ifndef gup_get_pte
+/*
+ * We assume that the pte can be read atomically. If this is not the case for
+ * your architecture, please provide the helper.
+ */
+static inline pte_t gup_get_pte(pte_t *ptep)
+{
+	return READ_ONCE(*ptep);
+}
+#endif
+
 #ifdef __HAVE_ARCH_PTE_SPECIAL
 static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 			 int write, struct page **pages, int *nr)
@@ -1198,14 +1209,7 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 
 	ptem = ptep = pte_offset_map(&pmd, addr);
 	do {
-		/*
-		 * In the line below we are assuming that the pte can be read
-		 * atomically. If this is not the case for your architecture,
-		 * please wrap this in a helper function!
-		 *
-		 * for an example see gup_get_pte in arch/x86/mm/gup.c
-		 */
-		pte_t pte = READ_ONCE(*ptep);
+		pte_t pte = gup_get_pte(ptep);
 		struct page *head, *page;
 
 		/*
-- 
2.11.0
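
For reference, the x86 PAE case that motivates this hook cannot use a
plain READ_ONCE(): with PAE, a pte is 64 bits wide on a 32-bit kernel,
so the two halves have to be loaded separately and rechecked for
tearing. Below is a sketch of what an architecture-provided
gup_get_pte() can look like, loosely modeled on the PAE variant of
gup_get_pte() in arch/x86/mm/gup.c; the pte_low/pte_high field names
follow the x86 PAE pte_t layout and are shown for illustration only:

static inline pte_t gup_get_pte(pte_t *ptep)
{
	pte_t pte;

	/*
	 * Load the low and high words separately, with smp_rmb() ordering
	 * the reads, and retry if the low word changed underneath us.
	 * This way we never return a torn low/high combination while
	 * GUP_fast() walks the page tables without taking any locks.
	 */
	do {
		pte.pte_low = ptep->pte_low;
		smp_rmb();
		pte.pte_high = ptep->pte_high;
		smp_rmb();
	} while (unlikely(pte.pte_low != ptep->pte_low));

	return pte;
}

The retry works because any pte update changes the low word (which
holds the present bit), so an unchanged low word implies the high word
read between the two barriers belongs to the same entry.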