Date: Thu, 24 Mar 2022 18:13:56 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: zhangliang5@huawei.com, willy@infradead.org, vbabka@suse.cz, shy828301@gmail.com, shakeelb@google.com, rppt@linux.ibm.com, roman.gushchin@linux.dev, rientjes@google.com, riel@surriel.com, peterx@redhat.com, oleg@redhat.com, nadav.amit@gmail.com, mike.kravetz@oracle.com, mhocko@kernel.org, kirill.shutemov@linux.intel.com, jhubbard@nvidia.com, jgg@nvidia.com, jannh@google.com, jack@suse.cz, hughd@google.com, hch@lst.de, ddutile@redhat.com, aarcange@redhat.com, david@redhat.com, akpm@linux-foundation.org, patches@lists.linux.dev, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org
In-Reply-To: <20220324180758.96b1ac7e17675d6bc474485e@linux-foundation.org>
Subject: [patch 107/114] mm/huge_memory: remove stale locking logic from __split_huge_pmd()
Message-Id: <20220325011356.EB2BEC340ED@smtp.kernel.org>

From: David Hildenbrand <david@redhat.com>
Subject: mm/huge_memory: remove stale locking logic from __split_huge_pmd()

Let's remove the stale logic that was required for reuse_swap_page().

[akpm@linux-foundation.org: simplification, per Yang Shi]
Link: https://lkml.kernel.org/r/20220131162940.210846-10-david@redhat.com
Signed-off-by: David Hildenbrand
Acked-by: Vlastimil Babka
Cc: Andrea Arcangeli
Cc: Christoph Hellwig
Cc: David Rientjes
Cc: Don Dutile
Cc: Hugh Dickins
Cc: Jan Kara
Cc: Jann Horn
Cc: Jason Gunthorpe
Cc: John Hubbard
Cc: Kirill A. Shutemov
Cc: Liang Zhang
Cc: Matthew Wilcox (Oracle)
Cc: Michal Hocko
Cc: Mike Kravetz
Cc: Mike Rapoport
Cc: Nadav Amit
Cc: Oleg Nesterov
Cc: Peter Xu
Cc: Rik van Riel
Cc: Roman Gushchin
Cc: Shakeel Butt
Cc: Yang Shi
Signed-off-by: Andrew Morton
---

 mm/huge_memory.c |   40 ++++------------------------------------
 1 file changed, 4 insertions(+), 36 deletions(-)

--- a/mm/huge_memory.c~mm-huge_memory-remove-stale-locking-logic-from-__split_huge_pmd
+++ a/mm/huge_memory.c
@@ -2133,8 +2133,6 @@ void __split_huge_pmd(struct vm_area_str
 {
 	spinlock_t *ptl;
 	struct mmu_notifier_range range;
-	bool do_unlock_folio = false;
-	pmd_t _pmd;
 
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
 				address & HPAGE_PMD_MASK,
@@ -2153,42 +2151,12 @@ void __split_huge_pmd(struct vm_area_str
 		goto out;
 	}
 
-repeat:
-	if (pmd_trans_huge(*pmd)) {
-		if (!folio) {
-			folio = page_folio(pmd_page(*pmd));
-			/*
-			 * An anonymous page must be locked, to ensure that a
-			 * concurrent reuse_swap_page() sees stable mapcount;
-			 * but reuse_swap_page() is not used on shmem or file,
-			 * and page lock must not be taken when zap_pmd_range()
-			 * calls __split_huge_pmd() while i_mmap_lock is held.
-			 */
-			if (folio_test_anon(folio)) {
-				if (unlikely(!folio_trylock(folio))) {
-					folio_get(folio);
-					_pmd = *pmd;
-					spin_unlock(ptl);
-					folio_lock(folio);
-					spin_lock(ptl);
-					if (unlikely(!pmd_same(*pmd, _pmd))) {
-						folio_unlock(folio);
-						folio_put(folio);
-						folio = NULL;
-						goto repeat;
-					}
-					folio_put(folio);
-				}
-				do_unlock_folio = true;
-			}
-		}
-	} else if (!(pmd_devmap(*pmd) || is_pmd_migration_entry(*pmd)))
-		goto out;
-	__split_huge_pmd_locked(vma, pmd, range.start, freeze);
+	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
+	    is_pmd_migration_entry(*pmd))
+		__split_huge_pmd_locked(vma, pmd, range.start, freeze);
+
 out:
 	spin_unlock(ptl);
-	if (do_unlock_folio)
-		folio_unlock(folio);
 	/*
 	 * No need to double call mmu_notifier->invalidate_range() callback.
 	 * They are 3 cases to consider inside __split_huge_pmd_locked():
_
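
For readers following along: the deleted hunk is an instance of the
common "trylock, else drop the spinlock, take the sleeping lock,
revalidate" pattern.  folio_lock() may sleep, so it cannot be taken
under the pmd spinlock; once the spinlock has been dropped and
retaken, pmd_same() must confirm that the pmd was not modified
concurrently.  A minimal sketch of that pattern, condensed from the
removed hunk above (the unlikely() annotations and the
do_unlock_folio bookkeeping are dropped for brevity):

repeat:
	if (pmd_trans_huge(*pmd)) {
		struct folio *folio = page_folio(pmd_page(*pmd));

		if (folio_test_anon(folio) && !folio_trylock(folio)) {
			pmd_t _pmd = *pmd;	/* snapshot for revalidation */

			folio_get(folio);	/* pin the folio across the unlock */
			spin_unlock(ptl);
			folio_lock(folio);	/* may sleep */
			spin_lock(ptl);
			if (!pmd_same(*pmd, _pmd)) {
				/* pmd changed while unprotected: start over */
				folio_unlock(folio);
				folio_put(folio);
				goto repeat;
			}
			folio_put(folio);	/* lock held; extra ref no longer needed */
		}
	}

As the removed comment explains, the folio lock was only held so that
a concurrent reuse_swap_page() would observe a stable mapcount.  With
reuse_swap_page() gone, nothing depends on that stability, and
__split_huge_pmd() can do its work under the pmd spinlock alone.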