From mboxrd@z Thu Jan  1 00:00:00 1970
From: Ryan Roberts <ryan.roberts@arm.com>
Date: Wed, 13 Mar 2024 09:03:03 +0000
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v4 6/6] mm: madvise: Avoid split during MADV_PAGEOUT and MADV_COLD
Content-Language: en-GB
To: Barry Song <21cnbao@gmail.com>, Lance Yang
Cc: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying, Gao Xiang,
 Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang, Chris Li,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240311150058.1122862-1-ryan.roberts@arm.com>
 <20240311150058.1122862-7-ryan.roberts@arm.com>
In-Reply-To:
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 13/03/2024 07:19, Barry Song wrote:
> On Tue, Mar 12, 2024 at 4:01 AM Ryan Roberts wrote:
>>
>> Rework madvise_cold_or_pageout_pte_range() to avoid splitting any large
>> folio that is fully and contiguously mapped in the pageout/cold vm
>> range. This change means that large folios will be maintained all the
>> way to swap storage. This both improves performance during swap-out, by
>> eliding the cost of splitting the folio, and sets us up nicely for
>> maintaining the large folio when it is swapped back in (to be covered in
>> a separate series).
>>
>> Folios that are not fully mapped in the target range are still split,
>> but note that the behavior is changed so that if the split fails for any
>> reason (folio locked, shared, etc.) we now leave the folio as is, move to
>> the next pte in the range, and continue work on the subsequent folios.
>> Previously any failure of this sort would cause the entire operation to
>> give up, and no folios mapped at higher addresses were paged out or made
>> cold. Given that large folios are becoming more common, this old behavior
>> would likely have led to wasted opportunities.
>>
>> While we are at it, change the code that clears young from the ptes to
>> use ptep_test_and_clear_young(), which is more efficient than
>> get_and_clear/modify/set, especially for contpte mappings on arm64,
>> where the old approach would require unfolding/refolding and the new
>> approach can be done in place.
>>
>> Signed-off-by: Ryan Roberts
>
> This looks so much better than our initial RFC.
> Thank you for your excellent work!

Thanks - it's a team effort - I had your PoC and David's previous batching
work to use as a template.
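[Editor's note: for context, the code path modified by this patch is reached from user space through madvise(2) with MADV_COLD or MADV_PAGEOUT. Below is a minimal, hypothetical user-space sketch of exercising that path over one PMD-sized anonymous region; it assumes a THP-capable kernel and a libc that defines MADV_COLD/MADV_PAGEOUT (Linux 5.4+), and is not part of the patch.]

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define REGION (2UL * 1024 * 1024)	/* one PMD-sized region with 4K pages */

int main(void)
{
	/* Anonymous mapping; MADV_HUGEPAGE hints that THP/large folios are wanted. */
	void *p = mmap(NULL, REGION, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	if (madvise(p, REGION, MADV_HUGEPAGE))
		perror("madvise(MADV_HUGEPAGE)");	/* non-fatal: THP may be disabled */

	/* Touch every page so the range is fully populated, ideally as one large folio. */
	memset(p, 0x5a, REGION);

	/* Mark the range cold, then ask for it to be paged out. With this patch,
	 * a fully-mapped large folio is no longer split on this path. */
	if (madvise(p, REGION, MADV_COLD))
		perror("madvise(MADV_COLD)");
	if (madvise(p, REGION, MADV_PAGEOUT))
		perror("madvise(MADV_PAGEOUT)");

	munmap(p, REGION);
	return 0;
}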
>
>> ---
>>  mm/madvise.c | 89 ++++++++++++++++++++++++++++++----------------------
>>  1 file changed, 51 insertions(+), 38 deletions(-)
>>
>> diff --git a/mm/madvise.c b/mm/madvise.c
>> index 547dcd1f7a39..56c7ba7bd558 100644
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -336,6 +336,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>  	LIST_HEAD(folio_list);
>>  	bool pageout_anon_only_filter;
>>  	unsigned int batch_count = 0;
>> +	int nr;
>>
>>  	if (fatal_signal_pending(current))
>>  		return -EINTR;
>> @@ -423,7 +424,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>  		return 0;
>>  	flush_tlb_batched_pending(mm);
>>  	arch_enter_lazy_mmu_mode();
>> -	for (; addr < end; pte++, addr += PAGE_SIZE) {
>> +	for (; addr < end; pte += nr, addr += nr * PAGE_SIZE) {
>> +		nr = 1;
>>  		ptent = ptep_get(pte);
>>
>>  		if (++batch_count == SWAP_CLUSTER_MAX) {
>> @@ -447,55 +449,66 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>  			continue;
>>
>>  		/*
>> -		 * Creating a THP page is expensive so split it only if we
>> -		 * are sure it's worth. Split it if we are only owner.
>> +		 * If we encounter a large folio, only split it if it is not
>> +		 * fully mapped within the range we are operating on. Otherwise
>> +		 * leave it as is so that it can be swapped out whole. If we
>> +		 * fail to split a folio, leave it in place and advance to the
>> +		 * next pte in the range.
>>  		 */
>>  		if (folio_test_large(folio)) {
>> -			int err;
>> -
>> -			if (folio_estimated_sharers(folio) > 1)
>> -				break;
>> -			if (pageout_anon_only_filter && !folio_test_anon(folio))
>> -				break;
>> -			if (!folio_trylock(folio))
>> -				break;
>> -			folio_get(folio);
>> -			arch_leave_lazy_mmu_mode();
>> -			pte_unmap_unlock(start_pte, ptl);
>> -			start_pte = NULL;
>> -			err = split_folio(folio);
>> -			folio_unlock(folio);
>> -			folio_put(folio);
>> -			if (err)
>> -				break;
>> -			start_pte = pte =
>> -				pte_offset_map_lock(mm, pmd, addr, &ptl);
>> -			if (!start_pte)
>> -				break;
>> -			arch_enter_lazy_mmu_mode();
>> -			pte--;
>> -			addr -= PAGE_SIZE;
>> -			continue;
>> +			const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
>> +						FPB_IGNORE_SOFT_DIRTY;
>> +			int max_nr = (end - addr) / PAGE_SIZE;
>> +
>> +			nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
>> +					     fpb_flags, NULL);
>
> I wonder if we have a quick way to avoid folio_pte_batch() if users
> are doing madvise() on a portion of a large folio.

Good idea. Something like this?

	if (pte_pfn(ptent) == folio_pfn(folio))
		nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
				     fpb_flags, NULL);

If we are not mapping the first page of the folio, then it can't be a full
mapping, so there is no need to call folio_pte_batch(). Just split it.
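[Editor's note: a purely illustrative user-space analogy of the shortcut discussed above; every name below is made up and nothing here is kernel code. folio_pte_batch() in effect counts how many consecutive ptes keep mapping consecutive pfns inside the same folio, and the pte_pfn()/folio_pfn() check skips that counting whenever the first pte of the batch does not map the folio's first page, because the mapping then cannot be a full one and the folio will be split anyway.]

#include <stdio.h>

/*
 * Hypothetical stand-ins (none of these names exist in the kernel): pfns[]
 * holds the page frame number mapped by each consecutive pte in the range;
 * the "folio" covers folio_nr pages starting at folio_start_pfn.
 */
static int count_contig_run(const unsigned long *pfns, int max_nr,
			    unsigned long folio_start_pfn, int folio_nr)
{
	int nr = 1;

	/* Count ptes whose pfns increase by one and stay inside the folio,
	 * loosely what folio_pte_batch() does for real page tables. */
	while (nr < max_nr &&
	       pfns[nr] == pfns[0] + nr &&
	       pfns[nr] < folio_start_pfn + folio_nr)
		nr++;
	return nr;
}

int main(void)
{
	unsigned long folio_start_pfn = 1000;	/* folio covers pfns 1000..1015 */
	int folio_nr = 16;
	unsigned long pfns[] = { 1004, 1005, 1006, 1007 };	/* maps a tail portion only */
	int max_nr = sizeof(pfns) / sizeof(pfns[0]);

	/*
	 * The shortcut: if the first pte does not map the folio's first page,
	 * the mapping cannot be a full one, so skip the counting entirely and
	 * go straight to the split path.
	 */
	if (pfns[0] != folio_start_pfn) {
		printf("not the folio's first page: split without batching\n");
		return 0;
	}

	printf("batched %d ptes\n",
	       count_contig_run(pfns, max_nr, folio_start_pfn, folio_nr));
	return 0;
}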
>
>> +
>> +			if (nr < folio_nr_pages(folio)) {
>> +				int err;
>> +
>> +				if (folio_estimated_sharers(folio) > 1)
>> +					continue;
>> +				if (pageout_anon_only_filter && !folio_test_anon(folio))
>> +					continue;
>> +				if (!folio_trylock(folio))
>> +					continue;
>> +				folio_get(folio);
>> +				arch_leave_lazy_mmu_mode();
>> +				pte_unmap_unlock(start_pte, ptl);
>> +				start_pte = NULL;
>> +				err = split_folio(folio);
>> +				folio_unlock(folio);
>> +				folio_put(folio);
>> +				if (err)
>> +					continue;
>> +				start_pte = pte =
>> +					pte_offset_map_lock(mm, pmd, addr, &ptl);
>> +				if (!start_pte)
>> +					break;
>> +				arch_enter_lazy_mmu_mode();
>> +				nr = 0;
>> +				continue;
>> +			}
>>  		}
>>
>>  		/*
>>  		 * Do not interfere with other mappings of this folio and
>> -		 * non-LRU folio.
>> +		 * non-LRU folio. If we have a large folio at this point, we
>> +		 * know it is fully mapped so if its mapcount is the same as its
>> +		 * number of pages, it must be exclusive.
>>  		 */
>> -		if (!folio_test_lru(folio) || folio_mapcount(folio) != 1)
>> +		if (!folio_test_lru(folio) ||
>> +		    folio_mapcount(folio) != folio_nr_pages(folio))
>>  			continue;
>
> This looks so perfect and is exactly what I wanted to achieve.
>
>>
>>  		if (pageout_anon_only_filter && !folio_test_anon(folio))
>>  			continue;
>>
>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>> -
>> -		if (!pageout && pte_young(ptent)) {
>> -			ptent = ptep_get_and_clear_full(mm, addr, pte,
>> -							tlb->fullmm);
>> -			ptent = pte_mkold(ptent);
>> -			set_pte_at(mm, addr, pte, ptent);
>> -			tlb_remove_tlb_entry(tlb, pte, addr);
>> +		if (!pageout) {
>> +			for (; nr != 0; nr--, pte++, addr += PAGE_SIZE) {
>> +				if (ptep_test_and_clear_young(vma, addr, pte))
>> +					tlb_remove_tlb_entry(tlb, pte, addr);
>> +			}
>
> This looks so smart. If it is not pageout, we have already increased pte
> and addr here, so nr is 0 and we don't need to increase again in
> for (; addr < end; pte += nr, addr += nr * PAGE_SIZE).
>
> Otherwise, nr won't be 0, so we will increase addr and pte by nr.

Indeed. I'm hoping that Lance is able to follow a similar pattern for
madvise_free_pte_range().

>
>
>>  		}
>>
>>  		/*
>> --
>> 2.25.1
>>
>
> Overall, LGTM,
>
> Reviewed-by: Barry Song

Thanks!
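[Editor's note: a small stand-alone sketch, hypothetical and not kernel code, of the loop bookkeeping Barry describes above. When the !pageout branch walks the batch itself it decrements nr to zero while advancing, so the outer loop's pte += nr step adds nothing and no entry is visited twice or skipped.]

#include <stdio.h>

int main(void)
{
	int n = 8;	/* number of "ptes" in the range */
	int batch = 4;	/* pretend the first folio spans 4 ptes */
	int nr;

	/* Mirrors: for (; addr < end; pte += nr, addr += nr * PAGE_SIZE) */
	for (int i = 0; i < n; i += nr) {
		nr = 1;

		if (i == 0) {
			nr = batch;
			/* Like the !pageout case: the inner loop consumes the
			 * batch itself, leaving nr == 0 for the outer step. */
			for (; nr != 0; nr--, i++)
				printf("clear young on pte %d\n", i);
			continue;	/* outer step adds nr == 0 */
		}

		printf("visit pte %d\n", i);
	}
	return 0;
}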