From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, Andrew Morton
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 2/4] mm: Remove mlock_vma_page()
Date: Mon, 16 Jan 2023 19:28:25 +0000
Message-Id: <20230116192827.2146732-3-willy@infradead.org>
In-Reply-To: <20230116192827.2146732-1-willy@infradead.org>
References: <20230116192827.2146732-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
All callers now have a folio and can call mlock_vma_folio().  Update the
documentation to refer to mlock_vma_folio().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 Documentation/mm/unevictable-lru.rst |  6 +++---
 mm/internal.h                        | 10 +---------
 mm/mlock.c                           |  4 ++--
 mm/rmap.c                            |  4 ++--
 4 files changed, 8 insertions(+), 16 deletions(-)

diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index 1972d37d97cf..45aadfefb810 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -311,7 +311,7 @@ do end up getting faulted into this VM_LOCKED VMA, they will be handled in the
 fault path - which is also how mlock2()'s MLOCK_ONFAULT areas are handled.

 For each PTE (or PMD) being faulted into a VMA, the page add rmap function
-calls mlock_vma_page(), which calls mlock_folio() when the VMA is VM_LOCKED
+calls mlock_vma_folio(), which calls mlock_folio() when the VMA is VM_LOCKED
 (unless it is a PTE mapping of a part of a transparent huge page).  Or when
 it is a newly allocated anonymous page, folio_add_lru_vma() calls
 mlock_new_folio() instead: similar to mlock_folio(), but can make better
@@ -413,7 +413,7 @@ However, since mlock_vma_pages_range() starts by setting VM_LOCKED on a VMA,
 before mlocking any pages already present, if one of those pages were migrated
 before mlock_pte_range() reached it, it would get counted twice in mlock_count.
 To prevent that, mlock_vma_pages_range() temporarily marks the VMA as VM_IO,
-so that mlock_vma_page() will skip it.
+so that mlock_vma_folio() will skip it.

 To complete page migration, we place the old and new pages back onto the LRU
 afterwards.  The "unneeded" page - old page on success, new page on failure -
@@ -552,6 +552,6 @@ and node unevictable list.

 rmap's folio_referenced_one(), called via vmscan's shrink_active_list() or
 shrink_page_list(), and rmap's try_to_unmap_one() called via shrink_page_list(),
-check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_page()
+check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_folio()
 to correct them.  Such pages are culled to the unevictable list when released
 by the shrinker.

diff --git a/mm/internal.h b/mm/internal.h
index 74bc1fe45711..0b74105ea363 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -518,7 +518,7 @@ extern long faultin_vma_page_range(struct vm_area_struct *vma,
 extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
 			      unsigned long len);
 /*
- * mlock_vma_page() and munlock_vma_page():
+ * mlock_vma_folio() and munlock_vma_folio():
  * should be called with vma's mmap_lock held for read or write,
  * under page table lock for the pte/pmd being added or removed.
  *
@@ -547,12 +547,6 @@ static inline void mlock_vma_folio(struct folio *folio,
 		mlock_folio(folio);
 }

-static inline void mlock_vma_page(struct page *page,
-		struct vm_area_struct *vma, bool compound)
-{
-	mlock_vma_folio(page_folio(page), vma, compound);
-}
-
 void munlock_folio(struct folio *folio);

 static inline void munlock_vma_folio(struct folio *folio,
@@ -656,8 +650,6 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
 }
 #else /* !CONFIG_MMU */
 static inline void unmap_mapping_folio(struct folio *folio) { }
-static inline void mlock_vma_page(struct page *page,
-			struct vm_area_struct *vma, bool compound) { }
 static inline void munlock_vma_page(struct page *page,
 			struct vm_area_struct *vma, bool compound) { }
 static inline void mlock_new_folio(struct folio *folio) { }

diff --git a/mm/mlock.c b/mm/mlock.c
index 9e9c8be58277..b680f11879c3 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -370,9 +370,9 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,
 	/*
 	 * There is a slight chance that concurrent page migration,
 	 * or page reclaim finding a page of this now-VM_LOCKED vma,
-	 * will call mlock_vma_page() and raise page's mlock_count:
+	 * will call mlock_vma_folio() and raise page's mlock_count:
 	 * double counting, leaving the page unevictable indefinitely.
-	 * Communicate this danger to mlock_vma_page() with VM_IO,
+	 * Communicate this danger to mlock_vma_folio() with VM_IO,
 	 * which is a VM_SPECIAL flag not allowed on VM_LOCKED vmas.
 	 * mmap_lock is held in write mode here, so this weird
 	 * combination should not be visible to other mmap_lock users;

diff --git a/mm/rmap.c b/mm/rmap.c
index ab2246e6f20a..1934f9dc9758 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1261,7 +1261,7 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 		__page_check_anon_rmap(page, vma, address);
 	}

-	mlock_vma_page(page, vma, compound);
+	mlock_vma_folio(folio, vma, compound);
 }

 /**
@@ -1352,7 +1352,7 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
 	if (nr)
 		__lruvec_stat_mod_folio(folio, NR_FILE_MAPPED, nr);

-	mlock_vma_page(page, vma, compound);
+	mlock_vma_folio(folio, vma, compound);
 }

 /**
--
2.35.1