From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 11 Apr 2024 12:27:47 +0100
From: Ryan Roberts
Subject: Re: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
To: David Hildenbrand, Lance Yang, akpm@linux-foundation.org
Cc: 21cnbao@gmail.com, mhocko@suse.com, fengwei.yin@intel.com, zokeefe@google.com, shy828301@gmail.com, xiehuan09@gmail.com, wangkefeng.wang@huawei.com, songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240408042437.10951-1-ioworker0@gmail.com> <20240408042437.10951-2-ioworker0@gmail.com> <38c4add8-53a2-49ca-9f1b-f62c2ee3e764@arm.com> <013334d5-62d2-4256-8045-168893a0a0cf@redhat.com>
In-Reply-To: <013334d5-62d2-4256-8045-168893a0a0cf@redhat.com>
Content-Type: text/plain; charset=UTF-8
On 11/04/2024 12:20, David Hildenbrand wrote:
> On 11.04.24 13:11, Ryan Roberts wrote:
>> On 08/04/2024 05:24, Lance Yang wrote:
>>> This patch optimizes lazyfreeing with PTE-mapped mTHP[1]
>>> (Inspired by David Hildenbrand[2]).
>>> We aim to avoid unnecessary folio
>>> splitting if the large folio is fully mapped within the target range.
>>>
>>> If a large folio is locked or shared, or if we fail to split it, we just
>>> leave it in place and advance to the next PTE in the range. But note that
>>> the behavior is changed; previously, any failure of this sort would cause
>>> the entire operation to give up. As large folios become more common,
>>> sticking to the old way could result in wasted opportunities.
>>>
>>> On an Intel I5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
>>> the same size results in the following runtimes for madvise(MADV_FREE) in
>>> seconds (shorter is better):
>>>
>>> Folio Size |   Old    |   New    | Change
>>> ------------------------------------------
>>>       4KiB | 0.590251 | 0.590259 |    0%
>>>      16KiB | 2.990447 | 0.185655 |  -94%
>>>      32KiB | 2.547831 | 0.104870 |  -95%
>>>      64KiB | 2.457796 | 0.052812 |  -97%
>>>     128KiB | 2.281034 | 0.032777 |  -99%
>>>     256KiB | 2.230387 | 0.017496 |  -99%
>>>     512KiB | 2.189106 | 0.010781 |  -99%
>>>    1024KiB | 2.183949 | 0.007753 |  -99%
>>>    2048KiB | 0.002799 | 0.002804 |    0%
>>>
>>> [1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
>>> [2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com
>>>
>>> Signed-off-by: Lance Yang
>>> ---
>>>  include/linux/pgtable.h |  34 +++++++++
>>>  mm/internal.h           |  12 +++-
>>>  mm/madvise.c            | 149 ++++++++++++++++++++++------------
>>>  mm/memory.c             |   4 +-
>>>  4 files changed, 129 insertions(+), 70 deletions(-)
>>>
>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>> index 0f4b2faa1d71..4dd442787420 100644
>>> --- a/include/linux/pgtable.h
>>> +++ b/include/linux/pgtable.h
>>> @@ -489,6 +489,40 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>>  }
>>>  #endif
>>> +#ifndef mkold_clean_ptes
>>> +/**
>>> + * mkold_clean_ptes - Mark PTEs that map consecutive pages of the same folio
>>> + *        as old and clean.
>>> + * @mm: Address space the pages are mapped into.
>>> + * @addr: Address the first page is mapped at.
>>> + * @ptep: Page table pointer for the first entry.
>>> + * @nr: Number of entries to mark old and clean.
>>> + *
>>> + * May be overridden by the architecture; otherwise, implemented by
>>> + * get_and_clear/modify/set for each pte in the range.
>>> + *
>>> + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
>>> + * some PTEs might be write-protected.
>>> + *
>>> + * Context: The caller holds the page table lock.  The PTEs map consecutive
>>> + * pages that belong to the same folio.  The PTEs are all in the same PMD.
>>> + */
>>> +static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
>>> +                    pte_t *ptep, unsigned int nr)
>>
>> Just thinking out loud, I wonder if it would be cleaner to convert mkold_ptes()
>> (which I added as part of swap-out) to something like:
>>
>> clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
>>                pte_t *ptep, unsigned int nr,
>>                bool clear_young, bool clear_dirty);
>>
>> Then we can use the same function for both use cases and also have the ability
>> to only clear dirty in future if we ever need it. The other advantage is that we
>> only need to plumb a single function down the arm64 arch code. As it currently
>> stands, those 2 functions would be duplicating most of their code.
>
> Yes. Maybe better use proper __bitwise flags, the compiler should be smart
> enough to optimize either way.

Agreed. I was also thinking perhaps it makes sense to start using output bitwise
flags for folio_pte_batch() since this patch set takes us up to 3 optional bool
pointers for different things. Might be cleaner to have input flags to tell it
what we care about and output flags to highlight those things. I guess the
compiler should be able to optimize in the same way.