From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 23 Feb 2023 17:28:10 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Yin Fengwei
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, Sidhartha Kumar,
	Mike Kravetz, Jane Chu, Naoya Horiguchi
Subject: Re: [PATCH 1/5] rmap: move hugetlb try_to_unmap to dedicated function
References: <20230223083200.3149015-1-fengwei.yin@intel.com>
	<20230223083200.3149015-2-fengwei.yin@intel.com>
In-Reply-To: <20230223083200.3149015-2-fengwei.yin@intel.com>

On Thu, Feb 23, 2023 at 04:31:56PM +0800, Yin Fengwei wrote:
> This prepares for the batched rmap update for large folios.
> There is no need to handle hugetlb in a loop; just handle
> hugetlb and bail out early.
>
> Almost no functional change, apart from one change to the
> mm counter update.

This looks like a very worthwhile cleanup in its own right.  Adding
various hugetlb & memory poisoning experts for commentary on the
mm counter change (search for three asterisks).

> Signed-off-by: Yin Fengwei
> ---
>  mm/rmap.c | 205 +++++++++++++++++++++++++++++++++---------------------
>  1 file changed, 126 insertions(+), 79 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 15ae24585fc4..e7aa63b800f7 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1443,6 +1443,108 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
>  	munlock_vma_folio(folio, vma, compound);
>  }
>
> +static bool try_to_unmap_one_hugetlb(struct folio *folio,
> +		struct vm_area_struct *vma, struct mmu_notifier_range range,
> +		struct page_vma_mapped_walk pvmw, unsigned long address,
> +		enum ttu_flags flags)
> +{
> +	struct mm_struct *mm = vma->vm_mm;
> +	pte_t pteval;
> +	bool ret = true, anon = folio_test_anon(folio);
> +
> +	/*
> +	 * The try_to_unmap() is only passed a hugetlb page
> +	 * in the case where the hugetlb page is poisoned.
> +	 */
> +	VM_BUG_ON_FOLIO(!folio_test_hwpoison(folio), folio);
> +	/*
> +	 * huge_pmd_unshare may unmap an entire PMD page.
> +	 * There is no way of knowing exactly which PMDs may
> +	 * be cached for this mm, so we must flush them all.
> +	 * start/end were already adjusted above to cover this
> +	 * range.
> +	 */
> +	flush_cache_range(vma, range.start, range.end);
> +
> +	/*
> +	 * To call huge_pmd_unshare, i_mmap_rwsem must be
> +	 * held in write mode. Caller needs to explicitly
> +	 * do this outside rmap routines.
> +	 *
> +	 * We also must hold hugetlb vma_lock in write mode.
> +	 * Lock order dictates acquiring vma_lock BEFORE
> +	 * i_mmap_rwsem. We can only try lock here and fail
> +	 * if unsuccessful.
> +	 */
> +	if (!anon) {
> +		VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
> +		if (!hugetlb_vma_trylock_write(vma)) {
> +			ret = false;
> +			goto out;
> +		}
> +		if (huge_pmd_unshare(mm, vma, address, pvmw.pte)) {
> +			hugetlb_vma_unlock_write(vma);
> +			flush_tlb_range(vma,
> +				range.start, range.end);
> +			mmu_notifier_invalidate_range(mm,
> +				range.start, range.end);
> +			/*
> +			 * The ref count of the PMD page was
> +			 * dropped which is part of the way map
> +			 * counting is done for shared PMDs.
> +			 * Return 'true' here. When there is
> +			 * no other sharing, huge_pmd_unshare
> +			 * returns false and we will unmap the
> +			 * actual page and drop map count
> +			 * to zero.
> +			 */
> +			goto out;
> +		}
> +		hugetlb_vma_unlock_write(vma);
> +	}
> +	pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
> +
> +	/*
> +	 * Now the pte is cleared. If this pte was uffd-wp armed,
> +	 * we may want to replace a none pte with a marker pte if
> +	 * it's file-backed, so we don't lose the tracking info.
> +	 */
> +	pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval);
> +
> +	/* Set the dirty flag on the folio now the pte is gone. */
> +	if (pte_dirty(pteval))
> +		folio_mark_dirty(folio);
> +
> +	/* Update high watermark before we lower rss */
> +	update_hiwater_rss(mm);
> +
> +	if (folio_test_hwpoison(folio) && !(flags & TTU_IGNORE_HWPOISON)) {
> +		pteval = swp_entry_to_pte(make_hwpoison_entry(&folio->page));
> +		set_huge_pte_at(mm, address, pvmw.pte, pteval);
> +	}
> +
> +	/*** try_to_unmap_one() called dec_mm_counter for
> +	 * (folio_test_hwpoison(folio) && !(flags & TTU_IGNORE_HWPOISON)) not
> +	 * true case, looks incorrect. Change it to hugetlb_count_sub() here.
> +	 */
> +	hugetlb_count_sub(folio_nr_pages(folio), mm);
> +
> +	/*
> +	 * No need to call mmu_notifier_invalidate_range() it has be
> +	 * done above for all cases requiring it to happen under page
> +	 * table lock before mmu_notifier_invalidate_range_end()
> +	 *
> +	 * See Documentation/mm/mmu_notifier.rst
> +	 */
> +	page_remove_rmap(&folio->page, vma, folio_test_hugetlb(folio));
> +	if (vma->vm_flags & VM_LOCKED)
> +		mlock_drain_local();
> +	folio_put(folio);
> +
> +out:
> +	return ret;
> +}
> +
>  /*
>   * @arg: enum ttu_flags will be passed to this argument
>   */
> @@ -1506,86 +1608,37 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  			break;
>  		}
>
> +		address = pvmw.address;
> +		if (folio_test_hugetlb(folio)) {
> +			ret = try_to_unmap_one_hugetlb(folio, vma, range,
> +							pvmw, address, flags);
> +
> +			/* no need to loop for hugetlb */
> +			page_vma_mapped_walk_done(&pvmw);
> +			break;
> +		}
> +
>  		subpage = folio_page(folio,
>  					pte_pfn(*pvmw.pte) - folio_pfn(folio));
> -		address = pvmw.address;
>  		anon_exclusive = folio_test_anon(folio) &&
>  				 PageAnonExclusive(subpage);
>
> -		if (folio_test_hugetlb(folio)) {
> -			bool anon = folio_test_anon(folio);
> -
> +		flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
> +		/* Nuke the page table entry. */
> +		if (should_defer_flush(mm, flags)) {
>  			/*
> -			 * The try_to_unmap() is only passed a hugetlb page
> -			 * in the case where the hugetlb page is poisoned.
> +			 * We clear the PTE but do not flush so potentially
> +			 * a remote CPU could still be writing to the folio.
> +			 * If the entry was previously clean then the
> +			 * architecture must guarantee that a clear->dirty
> +			 * transition on a cached TLB entry is written through
> +			 * and traps if the PTE is unmapped.
>  			 */
> -			VM_BUG_ON_PAGE(!PageHWPoison(subpage), subpage);
> -			/*
> -			 * huge_pmd_unshare may unmap an entire PMD page.
> -			 * There is no way of knowing exactly which PMDs may
> -			 * be cached for this mm, so we must flush them all.
> -			 * start/end were already adjusted above to cover this
> -			 * range.
> -			 */
> -			flush_cache_range(vma, range.start, range.end);
> +			pteval = ptep_get_and_clear(mm, address, pvmw.pte);
>
> -			/*
> -			 * To call huge_pmd_unshare, i_mmap_rwsem must be
> -			 * held in write mode. Caller needs to explicitly
> -			 * do this outside rmap routines.
> -			 *
> -			 * We also must hold hugetlb vma_lock in write mode.
> -			 * Lock order dictates acquiring vma_lock BEFORE
> -			 * i_mmap_rwsem. We can only try lock here and fail
> -			 * if unsuccessful.
> -			 */
> -			if (!anon) {
> -				VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
> -				if (!hugetlb_vma_trylock_write(vma)) {
> -					page_vma_mapped_walk_done(&pvmw);
> -					ret = false;
> -					break;
> -				}
> -				if (huge_pmd_unshare(mm, vma, address, pvmw.pte)) {
> -					hugetlb_vma_unlock_write(vma);
> -					flush_tlb_range(vma,
> -						range.start, range.end);
> -					mmu_notifier_invalidate_range(mm,
> -						range.start, range.end);
> -					/*
> -					 * The ref count of the PMD page was
> -					 * dropped which is part of the way map
> -					 * counting is done for shared PMDs.
> -					 * Return 'true' here. When there is
> -					 * no other sharing, huge_pmd_unshare
> -					 * returns false and we will unmap the
> -					 * actual page and drop map count
> -					 * to zero.
> -					 */
> -					page_vma_mapped_walk_done(&pvmw);
> -					break;
> -				}
> -				hugetlb_vma_unlock_write(vma);
> -			}
> -			pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
> +			set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
>  		} else {
> -			flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
> -			/* Nuke the page table entry. */
> -			if (should_defer_flush(mm, flags)) {
> -				/*
> -				 * We clear the PTE but do not flush so potentially
> -				 * a remote CPU could still be writing to the folio.
> -				 * If the entry was previously clean then the
> -				 * architecture must guarantee that a clear->dirty
> -				 * transition on a cached TLB entry is written through
> -				 * and traps if the PTE is unmapped.
> -				 */
> -				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
> -
> -				set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
> -			} else {
> -				pteval = ptep_clear_flush(vma, address, pvmw.pte);
> -			}
> +			pteval = ptep_clear_flush(vma, address, pvmw.pte);
>  		}
>
>  		/*
> @@ -1604,14 +1657,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>
>  		if (PageHWPoison(subpage) && !(flags & TTU_IGNORE_HWPOISON)) {
>  			pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
> -			if (folio_test_hugetlb(folio)) {
> -				hugetlb_count_sub(folio_nr_pages(folio), mm);
> -				set_huge_pte_at(mm, address, pvmw.pte, pteval);
> -			} else {
> -				dec_mm_counter(mm, mm_counter(&folio->page));
> -				set_pte_at(mm, address, pvmw.pte, pteval);
> -			}
> -
> +			dec_mm_counter(mm, mm_counter(&folio->page));
> +			set_pte_at(mm, address, pvmw.pte, pteval);
>  		} else if (pte_unused(pteval) && !userfaultfd_armed(vma)) {
>  			/*
>  			 * The guest indicated that the page content is of no
> --
> 2.30.2
>