From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 26 Jun 2025 21:52:58 +0800
From: Lance Yang
Subject: Re: [PATCH v4 3/4] mm: Support batched unmap for lazyfree large folios during reclamation
To: David Hildenbrand
Cc: 21cnbao@gmail.com, akpm@linux-foundation.org, baolin.wang@linux.alibaba.com,
 chrisl@kernel.org, kasong@tencent.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org,
 lorenzo.stoakes@oracle.com, ryan.roberts@arm.com, v-songbaohua@oppo.com,
 x86@kernel.org, ying.huang@intel.com, zhengtangquan@oppo.com, Lance Yang
References: <20250626092905.31305-1-ioworker0@gmail.com>
 <20250626124445.77865-1-ioworker0@gmail.com>
 <1a55f9f3-f5b1-4761-97ba-423756c707fe@redhat.com>
In-Reply-To: <1a55f9f3-f5b1-4761-97ba-423756c707fe@redhat.com>

On 2025/6/26 21:16, David Hildenbrand wrote:
> On 26.06.25 14:44, Lance Yang wrote:
>>
>> On 2025/6/26 17:29, Lance Yang wrote:
>>> Before I send out the real patch, I'd like to get some quick feedback to
>>> ensure I've understood the discussion correctly ;)
>>>
>>> Does this look like the right direction?
>>>
>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>> index fb63d9256f09..5ebffe2137e4 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -1845,23 +1845,37 @@ void folio_remove_rmap_pud(struct folio
>>> *folio, struct page *page,
>>>    #endif
>>>    }
>>> -/* We support batch unmapping of PTEs for lazyfree large folios */
>>> -static inline bool can_batch_unmap_folio_ptes(unsigned long addr,
>>> -            struct folio *folio, pte_t *ptep)
>>> +static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
>>> +            struct page_vma_mapped_walk *pvmw,
>>> +            enum ttu_flags flags, pte_t pte)
>>>    {
>>>        const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>>> -    int max_nr = folio_nr_pages(folio);
>>> -    pte_t pte = ptep_get(ptep);
>>> +    unsigned long end_addr, addr = pvmw->address;
>>> +    struct vm_area_struct *vma = pvmw->vma;
>>> +    unsigned int max_nr;
>>> +
>>> +    if (flags & TTU_HWPOISON)
>>> +        return 1;
>>> +    if (!folio_test_large(folio))
>>> +        return 1;
>>> +    /* We may only batch within a single VMA and a single page
>>> table. */
>>> +    end_addr = pmd_addr_end(addr, vma->vm_end);
>>> +    max_nr = (end_addr - addr) >> PAGE_SHIFT;
>>> +
>>> +    /* We only support lazyfree batching for now ... */
>>>        if (!folio_test_anon(folio) || folio_test_swapbacked(folio))
>>> -        return false;
>>> +        return 1;
>>>        if (pte_unused(pte))
>>> -        return false;
>>> -    if (pte_pfn(pte) != folio_pfn(folio))
>>> -        return false;
>>> +        return 1;
>>> +
>>> +    /* ... where we must be able to batch the whole folio. */
>>> +    if (pte_pfn(pte) != folio_pfn(folio) || max_nr !=
>>> folio_nr_pages(folio))
>>> +        return 1;
>>> +    max_nr = folio_pte_batch(folio, addr, pvmw->pte, pte, max_nr,
>>> fpb_flags,
>>> +                 NULL, NULL, NULL);
>>> -    return folio_pte_batch(folio, addr, ptep, pte, max_nr,
>>> fpb_flags, NULL,
>>> -                   NULL, NULL) == max_nr;
>>> +    return (max_nr != folio_nr_pages(folio)) ? 1 : max_nr;
>>>    }
>>>    /*
>>> @@ -2024,9 +2038,7 @@ static bool try_to_unmap_one(struct folio
>>> *folio, struct vm_area_struct *vma,
>>>                if (pte_dirty(pteval))
>>>                    folio_mark_dirty(folio);
>>>            } else if (likely(pte_present(pteval))) {
>>> -            if (folio_test_large(folio) && !(flags & TTU_HWPOISON) &&
>>> -                can_batch_unmap_folio_ptes(address, folio, pvmw.pte))
>>> -                nr_pages = folio_nr_pages(folio);
>>> +            nr_pages = folio_unmap_pte_batch(folio, &pvmw, flags,
>>> pteval);
>>>                end_addr = address + nr_pages * PAGE_SIZE;
>>>                flush_cache_range(vma, address, end_addr);
>>> @@ -2206,13 +2218,16 @@ static bool try_to_unmap_one(struct folio
>>> *folio, struct vm_area_struct *vma,
>>>                hugetlb_remove_rmap(folio);
>>>            } else {
>>>                folio_remove_rmap_ptes(folio, subpage, nr_pages, vma);
>>> -            folio_ref_sub(folio, nr_pages - 1);
>>>            }
>>>            if (vma->vm_flags & VM_LOCKED)
>>>                mlock_drain_local();
>>> -        folio_put(folio);
>>> -        /* We have already batched the entire folio */
>>> -        if (nr_pages > 1)
>>> +        folio_put_refs(folio, nr_pages);
>>> +
>>> +        /*
>>> +         * If we are sure that we batched the entire folio and cleared
>>> +         * all PTEs, we can just optimize and stop right here.
>>> +         */
>>> +        if (nr_pages == folio_nr_pages(folio))
>>>                goto walk_done;
>>>            continue;
>>>    walk_abort:
>>> --
>>
>> Oops ... Through testing on my machine, I found that the logic doesn't
>> behave as expected because I mixed up the meaning of max_nr (the
>> available scan room in the page table) with folio_nr_pages(folio) :(
>>
>> With the following change:
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 5ebffe2137e4..b1407348e14e 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1850,9 +1850,9 @@ static inline unsigned int
>> folio_unmap_pte_batch(struct folio *folio,
>>               enum ttu_flags flags, pte_t pte)
>>   {
>>       const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>> +    unsigned int max_nr, nr_pages = folio_nr_pages(folio);
>>       unsigned long end_addr, addr = pvmw->address;
>>       struct vm_area_struct *vma = pvmw->vma;
>> -    unsigned int max_nr;
>>       if (flags & TTU_HWPOISON)
>>           return 1;
>> @@ -1870,12 +1870,13 @@ static inline unsigned int
>> folio_unmap_pte_batch(struct folio *folio,
>>           return 1;
>>       /* ... where we must be able to batch the whole folio. */
>
> Why is that still required? :)

Sorry ... I was still stuck in the "all-or-nothing" mindset ...

So, IIUC, you mean we should completely remove the "max_nr < nr_pages"
check and just let folio_pte_batch() handle whatever partial batch it
safely can.

>
>> -    if (pte_pfn(pte) != folio_pfn(folio) || max_nr !=
>> folio_nr_pages(folio))
>> +    if (pte_pfn(pte) != folio_pfn(folio) || max_nr < nr_pages)
>>           return 1;
>> -    max_nr = folio_pte_batch(folio, addr, pvmw->pte, pte, max_nr,
>> fpb_flags,
>> -                 NULL, NULL, NULL);
>> -    return (max_nr != folio_nr_pages(folio)) ? 1 : max_nr;
>> +    max_nr = folio_pte_batch(folio, addr, pvmw->pte, pte, nr_pages,
>> +                 fpb_flags, NULL, NULL, NULL);
>> +
>> +    return (max_nr != nr_pages) ? 1 : max_nr;
>
> Why is that still required? :)

Then we simply return the number of PTEs that consecutively map to the
large folio. Right?
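
If so, the helper would collapse to something like the sketch below. This is
only a rough, untested sketch on my end, and it also drops the pte_pfn() ==
folio_pfn() check on the assumption that a partial batch no longer has to
start at the folio's first page, so please correct me if that part is wrong:

static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
			struct page_vma_mapped_walk *pvmw,
			enum ttu_flags flags, pte_t pte)
{
	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
	unsigned long end_addr, addr = pvmw->address;
	struct vm_area_struct *vma = pvmw->vma;
	unsigned int max_nr;

	if (flags & TTU_HWPOISON)
		return 1;
	if (!folio_test_large(folio))
		return 1;

	/* We may only batch within a single VMA and a single page table. */
	end_addr = pmd_addr_end(addr, vma->vm_end);
	max_nr = (end_addr - addr) >> PAGE_SHIFT;

	/* We only support lazyfree batching for now ... */
	if (!folio_test_anon(folio) || folio_test_swapbacked(folio))
		return 1;
	if (pte_unused(pte))
		return 1;

	/* ... and just take whatever folio_pte_batch() can batch safely. */
	return folio_pte_batch(folio, addr, pvmw->pte, pte, max_nr, fpb_flags,
			       NULL, NULL, NULL);
}

The caller side from the first diff should then still hold up: a partial
batch just means nr_pages is smaller than folio_nr_pages(folio), so we skip
the "goto walk_done" shortcut and keep walking instead.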