From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org, david@kernel.org, hughd@google.com,
	chrisl@kernel.org
Cc: ljs@kernel.org, Liam.Howlett@oracle.com, vbabka@kernel.org,
	rppt@kernel.org, surenb@google.com, mhocko@suse.com,
	kasong@tencent.com, qi.zheng@linux.dev, shakeel.butt@linux.dev,
	baohua@kernel.org, axelrasmussen@google.com, yuanchu@google.com,
	weixugc@google.com, riel@surriel.com, harry@kernel.org,
	jannh@google.com, pfalcato@suse.de, baolin.wang@linux.alibaba.com,
	shikemeng@huaweicloud.com, nphamcs@gmail.com, bhe@redhat.com,
	youngjun.park@lge.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, ryan.roberts@arm.com,
	anshuman.khandual@arm.com, Dev Jain <dev.jain@arm.com>
Subject: [PATCH v2 2/9] mm/rmap: refactor hugetlb pte clearing in try_to_unmap_one
Date: Fri, 10 Apr 2026 16:01:57 +0530
Message-Id: <20260410103204.120409-3-dev.jain@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20260410103204.120409-1-dev.jain@arm.com>
References: <20260410103204.120409-1-dev.jain@arm.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Simplify the code by refactoring the folio_test_hugetlb() branch into a
new function. No functional change is intended.
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 mm/rmap.c | 116 +++++++++++++++++++++++++++++++-----------------------
 1 file changed, 67 insertions(+), 49 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 62a8c912fd788..a9c43e2f6e695 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1978,6 +1978,67 @@ static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
 			      FPB_RESPECT_WRITE | FPB_RESPECT_SOFT_DIRTY);
 }
 
+static inline bool unmap_hugetlb_folio(struct vm_area_struct *vma,
+		struct folio *folio, struct page_vma_mapped_walk *pvmw,
+		struct page *page, enum ttu_flags flags, pte_t *pteval,
+		struct mmu_notifier_range *range, bool *walk_done)
+{
+	/*
+	 * The try_to_unmap() is only passed a hugetlb page
+	 * in the case where the hugetlb page is poisoned.
+	 */
+	VM_WARN_ON_PAGE(!PageHWPoison(page), page);
+	/*
+	 * huge_pmd_unshare may unmap an entire PMD page.
+	 * There is no way of knowing exactly which PMDs may
+	 * be cached for this mm, so we must flush them all.
+	 * start/end were already adjusted above to cover this
+	 * range.
+	 */
+	flush_cache_range(vma, range->start, range->end);
+
+	/*
+	 * To call huge_pmd_unshare, i_mmap_rwsem must be
+	 * held in write mode. Caller needs to explicitly
+	 * do this outside rmap routines.
+	 *
+	 * We also must hold hugetlb vma_lock in write mode.
+	 * Lock order dictates acquiring vma_lock BEFORE
+	 * i_mmap_rwsem. We can only try lock here and fail
+	 * if unsuccessful.
+	 */
+	if (!folio_test_anon(folio)) {
+		struct mmu_gather tlb;
+
+		VM_WARN_ON(!(flags & TTU_RMAP_LOCKED));
+		if (!hugetlb_vma_trylock_write(vma)) {
+			*walk_done = true;
+			return false;
+		}
+
+		tlb_gather_mmu_vma(&tlb, vma);
+		if (huge_pmd_unshare(&tlb, vma, pvmw->address, pvmw->pte)) {
+			hugetlb_vma_unlock_write(vma);
+			huge_pmd_unshare_flush(&tlb, vma);
+			tlb_finish_mmu(&tlb);
+			/*
+			 * The PMD table was unmapped,
+			 * consequently unmapping the folio.
+			 */
+			*walk_done = true;
+			return true;
+		}
+		hugetlb_vma_unlock_write(vma);
+		tlb_finish_mmu(&tlb);
+	}
+	*pteval = huge_ptep_clear_flush(vma, pvmw->address, pvmw->pte);
+	if (pte_dirty(*pteval))
+		folio_mark_dirty(folio);
+
+	*walk_done = false;
+	return true;
+}
+
 /*
  * @arg: enum ttu_flags will be passed to this argument
  */
@@ -2115,56 +2176,13 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				 PageAnonExclusive(subpage);
 
 		if (folio_test_hugetlb(folio)) {
-			bool anon = folio_test_anon(folio);
-
-			/*
-			 * The try_to_unmap() is only passed a hugetlb page
-			 * in the case where the hugetlb page is poisoned.
-			 */
-			VM_BUG_ON_PAGE(!PageHWPoison(subpage), subpage);
-			/*
-			 * huge_pmd_unshare may unmap an entire PMD page.
-			 * There is no way of knowing exactly which PMDs may
-			 * be cached for this mm, so we must flush them all.
-			 * start/end were already adjusted above to cover this
-			 * range.
-			 */
-			flush_cache_range(vma, range.start, range.end);
+			bool walk_done;
 
-			/*
-			 * To call huge_pmd_unshare, i_mmap_rwsem must be
-			 * held in write mode. Caller needs to explicitly
-			 * do this outside rmap routines.
-			 *
-			 * We also must hold hugetlb vma_lock in write mode.
-			 * Lock order dictates acquiring vma_lock BEFORE
-			 * i_mmap_rwsem. We can only try lock here and fail
-			 * if unsuccessful.
-			 */
-			if (!anon) {
-				struct mmu_gather tlb;
-
-				VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
-				if (!hugetlb_vma_trylock_write(vma))
-					goto walk_abort;
-
-				tlb_gather_mmu_vma(&tlb, vma);
-				if (huge_pmd_unshare(&tlb, vma, address, pvmw.pte)) {
-					hugetlb_vma_unlock_write(vma);
-					huge_pmd_unshare_flush(&tlb, vma);
-					tlb_finish_mmu(&tlb);
-					/*
-					 * The PMD table was unmapped,
-					 * consequently unmapping the folio.
-					 */
-					goto walk_done;
-				}
-				hugetlb_vma_unlock_write(vma);
-				tlb_finish_mmu(&tlb);
-			}
-			pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
-			if (pte_dirty(pteval))
-				folio_mark_dirty(folio);
+			ret = unmap_hugetlb_folio(vma, folio, &pvmw, subpage,
+						  flags, &pteval, &range,
+						  &walk_done);
+			if (walk_done)
+				goto walk_done;
 		} else if (likely(pte_present(pteval))) {
 			nr_pages = folio_unmap_pte_batch(folio, &pvmw, flags,
 							 pteval);
 			end_addr = address + nr_pages * PAGE_SIZE;
-- 
2.34.1