From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org, david@kernel.org, ljs@kernel.org,
	hughd@google.com, chrisl@kernel.org, kasong@tencent.com
Cc: Dev Jain <dev.jain@arm.com>, riel@surriel.com, liam@infradead.org,
	vbabka@kernel.org, harry@kernel.org, jannh@google.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, qi.zheng@linux.dev,
	shakeel.butt@linux.dev, baohua@kernel.org, axelrasmussen@google.com,
	yuanchu@google.com, weixugc@google.com, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, baolin.wang@linux.alibaba.com,
	shikemeng@huaweicloud.com, nphamcs@gmail.com, bhe@redhat.com,
	youngjun.park@lge.com, pfalcato@suse.de, ryan.roberts@arm.com,
	anshuman.khandual@arm.com
Subject: [PATCH v3 2/9] mm/rmap: refactor hugetlb pte clearing in try_to_unmap_one
Date: Wed, 6 May 2026 15:14:57 +0530
Message-Id: <20260506094504.2588857-3-dev.jain@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20260506094504.2588857-1-dev.jain@arm.com>
References: <20260506094504.2588857-1-dev.jain@arm.com>

Simplify the code by refactoring the folio_test_hugetlb() branch of
try_to_unmap_one() into a new function, unmap_hugetlb_folio(). While at
it, convert the VM_BUG_ON helpers to their VM_WARN_ON counterparts.
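The new helper reports back to the walk loop through two channels,
mirroring the old walk_abort/walk_done labels: its return value becomes
ret (false means the unmap failed), and the exit_walk out-parameter
tells the caller to stop walking this VMA. Below is a minimal userspace
sketch of that contract, not kernel code; the names walk_step(),
run_walk() and the STEP_* outcomes are invented purely for illustration:

	#include <stdbool.h>
	#include <stdio.h>

	enum step_outcome { STEP_OK, STEP_ABORT, STEP_DONE };

	/*
	 * Hypothetical stand-in for unmap_hugetlb_folio(): the return
	 * value reports success/failure of this step (false replaces
	 * the old "goto walk_abort"), while *exit_walk asks the caller
	 * to stop the walk early (true return + *exit_walk set replaces
	 * the old "goto walk_done").
	 */
	static bool walk_step(enum step_outcome outcome, bool *exit_walk)
	{
		switch (outcome) {
		case STEP_ABORT:	/* e.g. hugetlb_vma_trylock_write() failed */
			*exit_walk = true;
			return false;	/* old behaviour: goto walk_abort */
		case STEP_DONE:		/* e.g. huge_pmd_unshare() dropped the PMD table */
			*exit_walk = true;
			return true;	/* old behaviour: goto walk_done */
		default:		/* fall through to the common pte-clear path */
			*exit_walk = false;
			return true;
		}
	}

	static bool run_walk(const enum step_outcome *steps, int nr)
	{
		bool ret = true;

		for (int i = 0; i < nr; i++) {
			bool exit_walk;

			ret = walk_step(steps[i], &exit_walk);
			if (exit_walk)
				break;	/* page_vma_mapped_walk_done() in the patch */
		}
		return ret;
	}

	int main(void)
	{
		enum step_outcome abort_case[] = { STEP_OK, STEP_ABORT, STEP_OK };
		enum step_outcome done_case[]  = { STEP_OK, STEP_DONE, STEP_OK };

		printf("trylock failure: %s\n",
		       run_walk(abort_case, 3) ? "completed" : "aborted");
		printf("pmd unshared:    %s\n",
		       run_walk(done_case, 3) ? "completed" : "aborted");
		return 0;
	}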
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 mm/rmap.c | 117 ++++++++++++++++++++++++++++++++----------------------
 1 file changed, 69 insertions(+), 48 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index a5f067a09de0f..a98acdea0530a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1978,6 +1978,68 @@ static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
 			FPB_RESPECT_WRITE | FPB_RESPECT_SOFT_DIRTY);
 }
 
+/* Returns false if unmap needs to be aborted */
+static inline bool unmap_hugetlb_folio(struct vm_area_struct *vma,
+		struct folio *folio, struct page_vma_mapped_walk *pvmw,
+		struct page *page, enum ttu_flags flags, pte_t *pteval,
+		struct mmu_notifier_range *range, bool *exit_walk)
+{
+	/*
+	 * The try_to_unmap() is only passed a hugetlb page
+	 * in the case where the hugetlb page is poisoned.
+	 */
+	VM_WARN_ON_PAGE(!PageHWPoison(page), page);
+	/*
+	 * huge_pmd_unshare may unmap an entire PMD page.
+	 * There is no way of knowing exactly which PMDs may
+	 * be cached for this mm, so we must flush them all.
+	 * start/end were already adjusted above to cover this
+	 * range.
+	 */
+	flush_cache_range(vma, range->start, range->end);
+
+	/*
+	 * To call huge_pmd_unshare, i_mmap_rwsem must be
+	 * held in write mode. Caller needs to explicitly
+	 * do this outside rmap routines.
+	 *
+	 * We also must hold hugetlb vma_lock in write mode.
+	 * Lock order dictates acquiring vma_lock BEFORE
+	 * i_mmap_rwsem. We can only try lock here and fail
+	 * if unsuccessful.
+	 */
+	if (!folio_test_anon(folio)) {
+		struct mmu_gather tlb;
+
+		VM_WARN_ON(!(flags & TTU_RMAP_LOCKED));
+		if (!hugetlb_vma_trylock_write(vma)) {
+			*exit_walk = true;
+			return false;
+		}
+
+		tlb_gather_mmu_vma(&tlb, vma);
+		if (huge_pmd_unshare(&tlb, vma, pvmw->address, pvmw->pte)) {
+			hugetlb_vma_unlock_write(vma);
+			huge_pmd_unshare_flush(&tlb, vma);
+			tlb_finish_mmu(&tlb);
+			/*
+			 * The PMD table was unmapped,
+			 * consequently unmapping the folio.
+			 */
+			*exit_walk = true;
+			return true;
+		}
+		hugetlb_vma_unlock_write(vma);
+		tlb_finish_mmu(&tlb);
+	}
+	*pteval = huge_ptep_clear_flush(vma, pvmw->address, pvmw->pte);
+	if (pte_dirty(*pteval))
+		folio_mark_dirty(folio);
+
+	*exit_walk = false;
+	return true;
+}
+
 /*
  * @arg: enum ttu_flags will be passed to this argument
  */
@@ -2115,56 +2177,15 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				 PageAnonExclusive(subpage);
 
 		if (folio_test_hugetlb(folio)) {
-			bool anon = folio_test_anon(folio);
-
-			/*
-			 * The try_to_unmap() is only passed a hugetlb page
-			 * in the case where the hugetlb page is poisoned.
-			 */
-			VM_BUG_ON_PAGE(!PageHWPoison(subpage), subpage);
-			/*
-			 * huge_pmd_unshare may unmap an entire PMD page.
-			 * There is no way of knowing exactly which PMDs may
-			 * be cached for this mm, so we must flush them all.
-			 * start/end were already adjusted above to cover this
-			 * range.
-			 */
-			flush_cache_range(vma, range.start, range.end);
+			bool exit_walk;
 
-			/*
-			 * To call huge_pmd_unshare, i_mmap_rwsem must be
-			 * held in write mode. Caller needs to explicitly
-			 * do this outside rmap routines.
-			 *
-			 * We also must hold hugetlb vma_lock in write mode.
-			 * Lock order dictates acquiring vma_lock BEFORE
-			 * i_mmap_rwsem. We can only try lock here and fail
-			 * if unsuccessful.
-			 */
-			if (!anon) {
-				struct mmu_gather tlb;
-
-				VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
-				if (!hugetlb_vma_trylock_write(vma))
-					goto walk_abort;
-
-				tlb_gather_mmu_vma(&tlb, vma);
-				if (huge_pmd_unshare(&tlb, vma, address, pvmw.pte)) {
-					hugetlb_vma_unlock_write(vma);
-					huge_pmd_unshare_flush(&tlb, vma);
-					tlb_finish_mmu(&tlb);
-					/*
-					 * The PMD table was unmapped,
-					 * consequently unmapping the folio.
-					 */
-					goto walk_done;
-				}
-				hugetlb_vma_unlock_write(vma);
-				tlb_finish_mmu(&tlb);
+			ret = unmap_hugetlb_folio(vma, folio, &pvmw, subpage,
+						  flags, &pteval, &range,
+						  &exit_walk);
+			if (exit_walk) {
+				page_vma_mapped_walk_done(&pvmw);
+				break;
 			}
-			pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
-			if (pte_dirty(pteval))
-				folio_mark_dirty(folio);
 		} else if (likely(pte_present(pteval))) {
 			nr_pages = folio_unmap_pte_batch(folio, &pvmw, flags,
 							 pteval);
 			end_addr = address + nr_pages * PAGE_SIZE;
-- 
2.34.1