From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
To: benh@kernel.crashing.org, paulus@samba.org, mpe@ellerman.id.au
Cc: linuxppc-dev@lists.ozlabs.org, "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Subject: [PATCH v4 5/7] powerpc/mm/hugetlb: Switch hugetlb update to use huge_pte_update
Date: Tue, 22 Nov 2016 13:31:47 +0530
In-Reply-To: <20161122080149.31306-1-aneesh.kumar@linux.vnet.ibm.com>
References: <20161122080149.31306-1-aneesh.kumar@linux.vnet.ibm.com>
Message-Id: <20161122080149.31306-5-aneesh.kumar@linux.vnet.ibm.com>
List-Id: Linux on PowerPC Developers Mail List

We want to switch pte_update to use a VA-based TLB flush. In order to do
that we need to track the page size, which is currently not available to
these functions for hugetlb. Hence, switch hugetlb to separate update
functions for now. A later patch will update the hugetlb functions to take
a vm_area_struct, from which the page size can be derived.
After that, we will switch this back to using pte_update.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/hugetlb.h | 43 +++++++++++++++++++++++++++-
 arch/powerpc/include/asm/book3s/64/pgtable.h |  9 ------
 2 files changed, 42 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/hugetlb.h b/arch/powerpc/include/asm/book3s/64/hugetlb.h
index 8fc04d2ac86f..586236625117 100644
--- a/arch/powerpc/include/asm/book3s/64/hugetlb.h
+++ b/arch/powerpc/include/asm/book3s/64/hugetlb.h
@@ -31,9 +31,50 @@ static inline int hstate_get_psize(struct hstate *hstate)
 	}
 }
 
+static inline unsigned long huge_pte_update(struct mm_struct *mm, unsigned long addr,
+					    pte_t *ptep, unsigned long clr,
+					    unsigned long set)
+{
+	if (radix_enabled()) {
+		unsigned long old_pte;
+
+		if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
+
+			unsigned long new_pte;
+
+			old_pte = __radix_pte_update(ptep, ~0, 0);
+			asm volatile("ptesync" : : : "memory");
+			/*
+			 * new value of pte
+			 */
+			new_pte = (old_pte | set) & ~clr;
+			/*
+			 * For now let's do heavy pid flush
+			 * radix__flush_tlb_page_psize(mm, addr, mmu_virtual_psize);
+			 */
+			radix__flush_tlb_mm(mm);
+
+			__radix_pte_update(ptep, 0, new_pte);
+		} else
+			old_pte = __radix_pte_update(ptep, clr, set);
+		asm volatile("ptesync" : : : "memory");
+		return old_pte;
+	}
+	return hash__pte_update(mm, addr, ptep, clr, set, true);
+}
+
+static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
+					   unsigned long addr, pte_t *ptep)
+{
+	if ((pte_raw(*ptep) & cpu_to_be64(_PAGE_WRITE)) == 0)
+		return;
+
+	huge_pte_update(mm, addr, ptep, _PAGE_WRITE, 0);
+}
+
 static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 					    unsigned long addr, pte_t *ptep)
 {
-	return __pte(pte_update(mm, addr, ptep, ~0UL, 0, 1));
+	return __pte(huge_pte_update(mm, addr, ptep, ~0UL, 0));
 }
 #endif
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 46d739457d68..ef2eef1ba99a 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -346,15 +346,6 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
 	pte_update(mm, addr, ptep, _PAGE_WRITE, 0, 0);
 }
 
-static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
-					   unsigned long addr, pte_t *ptep)
-{
-	if ((pte_raw(*ptep) & cpu_to_be64(_PAGE_WRITE)) == 0)
-		return;
-
-	pte_update(mm, addr, ptep, _PAGE_WRITE, 0, 1);
-}
-
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long addr, pte_t *ptep)
-- 
2.10.2
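
As a rough sketch of the follow-up described in the commit message (not
part of this patch; the vma-based signature is an assumption about the
later series, and only the flush granularity changes), the DD1 workaround
could derive the page size from the VMA's hstate and use the VA-based
flush in place of the full PID flush. Everything else mirrors the
huge_pte_update() added here:

static inline unsigned long huge_pte_update(struct vm_area_struct *vma,
					    unsigned long addr, pte_t *ptep,
					    unsigned long clr, unsigned long set)
{
	struct mm_struct *mm = vma->vm_mm;

	if (radix_enabled()) {
		unsigned long old_pte;

		if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
			unsigned long new_pte;
			/* page size derived from the hugetlb VMA's hstate */
			int psize = hstate_get_psize(hstate_vma(vma));

			old_pte = __radix_pte_update(ptep, ~0, 0);
			asm volatile("ptesync" : : : "memory");
			new_pte = (old_pte | set) & ~clr;
			/* VA-based flush instead of radix__flush_tlb_mm() */
			radix__flush_tlb_page_psize(mm, addr, psize);
			__radix_pte_update(ptep, 0, new_pte);
		} else
			old_pte = __radix_pte_update(ptep, clr, set);
		asm volatile("ptesync" : : : "memory");
		return old_pte;
	}
	return hash__pte_update(mm, addr, ptep, clr, set, true);
}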