From mboxrd@z Thu Jan 1 00:00:00 1970
From: Leonardo Bras
To: linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
	kvm-ppc@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org
Cc: Leonardo Bras, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, "Aneesh Kumar K.V", Christophe Leroy, Nicholas Piggin,
	Andrew Morton, Mahesh Salgaonkar, Reza Arbab, Santosh Sivaraj,
	Balbir Singh, Thomas Gleixner, Greg Kroah-Hartman, Mike Rapoport,
	Allison Randal, Jason Gunthorpe, Dan Williams, Vlastimil Babka,
	Christoph Lameter, Logan Gunthorpe, Andrey Ryabinin, Alexey Dobriyan,
	Souptick Joarder, Mathieu Desnoyers, Ralph Campbell,
	Jesper Dangaard Brouer, Jann Horn, Davidlohr Bueso,
	"Peter Zijlstra (Intel)", Ingo Molnar, Christian Brauner, Michal Hocko,
	Elena Reshetova, Roman Gushchin, Andrea Arcangeli, Al Viro,
	"Dmitry V. Levin", Jérôme Glisse, Song Liu, Bartlomiej Zolnierkiewicz,
	Ira Weiny, "Kirill A. Shutemov", John Hubbard, Keith Busch
Subject: [PATCH v5 02/11] powerpc/mm: Add counting method to monitor lockless pgtable walks
Date: Wed, 2 Oct 2019 22:33:16 -0300
Message-Id: <20191003013325.2614-3-leonardo@linux.ibm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20191003013325.2614-1-leonardo@linux.ibm.com>
References: <20191003013325.2614-1-leonardo@linux.ibm.com>
MIME-Version: 1.0
It's necessary to monitor lockless pagetable walks, in order to avoid
doing THP splitting/collapsing during them.

On powerpc, we need to do some lockless pagetable walks from functions
that already have disabled interrupts, especially from real mode with
MSR[EE=0]. In these contexts, disabling/enabling interrupts can be very
troublesome.

So, this arch-specific implementation features functions with an extra
argument that allows interrupt enable/disable to be skipped:
__begin_lockless_pgtbl_walk() and __end_lockless_pgtbl_walk().

Functions similar to the generic ones are also exported, by calling the
above functions with the *able_irq parameter set to true.

While the config option (CONFIG_LOCKLESS_PAGE_TABLE_WALK_TRACKING) is
disabled, the counting is skipped and these functions only do what was
already needed for lockless pagetable walks (disabling interrupts). A
memory barrier was also added, to make sure there is no speculative read
outside the interrupt-disabled area.

Signed-off-by: Leonardo Bras
---
 arch/powerpc/include/asm/book3s/64/pgtable.h |   9 ++
 arch/powerpc/mm/book3s64/pgtable.c           | 117 +++++++++++++++++++
 2 files changed, 126 insertions(+)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index b01624e5c467..8330b35cd28d 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1372,5 +1372,14 @@ static inline bool pgd_is_leaf(pgd_t pgd)
 	return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PTE));
 }
 
+#define __HAVE_ARCH_LOCKLESS_PGTBL_WALK_CONTROL
+unsigned long begin_lockless_pgtbl_walk(struct mm_struct *mm);
+unsigned long __begin_lockless_pgtbl_walk(struct mm_struct *mm,
+					  bool disable_irq);
+void end_lockless_pgtbl_walk(struct mm_struct *mm, unsigned long irq_mask);
+void __end_lockless_pgtbl_walk(struct mm_struct *mm, unsigned long irq_mask,
+			       bool enable_irq);
+int running_lockless_pgtbl_walk(struct mm_struct *mm);
+
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_BOOK3S_64_PGTABLE_H_ */

diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 75483b40fcb1..ae557fdce9a3 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -98,6 +98,123 @@ void serialize_against_pte_lookup(struct mm_struct *mm)
 	smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
 }
 
+/*
+ * Counting method to monitor lockless pagetable walks:
+ * Uses begin_lockless_pgtbl_walk and end_lockless_pgtbl_walk to track the
+ * number of lockless pgtable walks happening, and
+ * running_lockless_pgtbl_walk to return this value.
+ */
+
+/*
+ * __begin_lockless_pgtbl_walk: Must be inserted before a function call that
+ *   does lockless pagetable walks, such as __find_linux_pte().
+ * This version allows setting disable_irq=false, so irqs are not touched,
+ *   which is quite useful when ints are already disabled (like real mode).
+ */
+inline unsigned long __begin_lockless_pgtbl_walk(struct mm_struct *mm,
+						 bool disable_irq)
+{
+	unsigned long irq_mask = 0;
+
+	if (IS_ENABLED(CONFIG_LOCKLESS_PAGE_TABLE_WALK_TRACKING))
+		atomic_inc(&mm->lockless_pgtbl_walkers);
+
+	/*
+	 * Interrupts must be disabled during the lockless page table walk.
+	 * That's because the deleting or splitting involves flushing TLBs,
+	 * which in turn issues interrupts, that will block when disabled.
+	 *
+	 * When this function is called from realmode with MSR[EE=0],
+	 * it's not needed to touch irq, since it's already disabled.
+	 */
+	if (disable_irq)
+		local_irq_save(irq_mask);
+
+	/*
+	 * This memory barrier pairs with any code that is either trying to
+	 * delete page tables, or split huge pages. Without this barrier,
+	 * the page tables could be read speculatively outside of interrupt
+	 * disabling or reference counting.
+	 */
+	smp_mb();
+
+	return irq_mask;
+}
+EXPORT_SYMBOL(__begin_lockless_pgtbl_walk);
+
+/*
+ * begin_lockless_pgtbl_walk: Must be inserted before a function call that
+ *   does lockless pagetable walks, such as __find_linux_pte().
+ * This version is used by generic code, and always disables irqs.
+ */
+unsigned long begin_lockless_pgtbl_walk(struct mm_struct *mm)
+{
+	return __begin_lockless_pgtbl_walk(mm, true);
+}
+EXPORT_SYMBOL(begin_lockless_pgtbl_walk);
+
+/*
+ * __end_lockless_pgtbl_walk: Must be inserted after the last use of a pointer
+ *   returned by a lockless pagetable walk, such as __find_linux_pte().
+ * This version allows setting enable_irq=false, so irqs are not touched,
+ *   which is quite useful when ints are already disabled (like real mode).
+ */
+inline void __end_lockless_pgtbl_walk(struct mm_struct *mm,
+				      unsigned long irq_mask, bool enable_irq)
+{
+	/*
+	 * This memory barrier pairs with any code that is either trying to
+	 * delete page tables, or split huge pages. Without this barrier,
+	 * the page tables could be read speculatively outside of interrupt
+	 * disabling or reference counting.
+	 */
+	smp_mb();
+
+	/*
+	 * Interrupts must be disabled during the lockless page table walk.
+	 * That's because the deleting or splitting involves flushing TLBs,
+	 * which in turn issues interrupts, that will block when disabled.
+	 *
+	 * When this function is called from realmode with MSR[EE=0],
+	 * it's not needed to touch irq, since it's already disabled.
+	 */
+	if (enable_irq)
+		local_irq_restore(irq_mask);
+
+	if (IS_ENABLED(CONFIG_LOCKLESS_PAGE_TABLE_WALK_TRACKING))
+		atomic_dec(&mm->lockless_pgtbl_walkers);
+}
+EXPORT_SYMBOL(__end_lockless_pgtbl_walk);
+
+/*
+ * end_lockless_pgtbl_walk: Must be inserted after the last use of a pointer
+ *   returned by a lockless pagetable walk, such as __find_linux_pte().
+ * This version is used by generic code, and always re-enables irqs.
+ */
+void end_lockless_pgtbl_walk(struct mm_struct *mm, unsigned long irq_mask)
+{
+	__end_lockless_pgtbl_walk(mm, irq_mask, true);
+}
+EXPORT_SYMBOL(end_lockless_pgtbl_walk);
+
+/*
+ * running_lockless_pgtbl_walk: Returns the number of lockless pagetable walks
+ *   currently running. If it returns 0, there is no running pagetable walk,
+ *   and THP split/collapse can be safely done. This can be used to avoid more
+ *   expensive approaches like serialize_against_pte_lookup().
+ */
+int running_lockless_pgtbl_walk(struct mm_struct *mm)
+{
+	if (IS_ENABLED(CONFIG_LOCKLESS_PAGE_TABLE_WALK_TRACKING))
+		return atomic_read(&mm->lockless_pgtbl_walkers);
+
+	/*
+	 * If disabled, must return > 0, so it falls back to the sync method
+	 * (serialize_against_pte_lookup).
+	 */
+	return 1;
+}
+EXPORT_SYMBOL(running_lockless_pgtbl_walk);
+
 /*
  * We use this to invalidate a pmdp entry before switching from a
  * hugepte to regular pmd entry.
-- 
2.20.1