From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand <david@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Huang Ying <ying.huang@intel.com>, Gao Xiang <xiang@kernel.org>,
Yu Zhao <yuzhao@google.com>, Yang Shi <shy828301@gmail.com>,
Michal Hocko <mhocko@suse.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Barry Song <21cnbao@gmail.com>, Chris Li <chrisl@kernel.org>,
Lance Yang <ioworker0@gmail.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v6 2/6] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()
Date: Mon, 8 Apr 2024 14:27:36 +0100
Message-ID: <2cfa542a-ae38-4867-a64b-621e7778fdf7@arm.com>
In-Reply-To: <be096120-4dd1-4a10-b283-779d23c2811b@arm.com>
On 08/04/2024 13:47, Ryan Roberts wrote:
> On 08/04/2024 13:07, Ryan Roberts wrote:
>> [...]
>>>
>>> [...]
>>>
>>>> +
>>>> +/**
>>>> + * swap_pte_batch - detect a PTE batch for a set of contiguous swap entries
>>>> + * @start_ptep: Page table pointer for the first entry.
>>>> + * @max_nr: The maximum number of table entries to consider.
>>>> + * @entry: Swap entry recovered from the first table entry.
>>>> + *
>>>> + * Detect a batch of contiguous swap entries: consecutive (non-present) PTEs
>>>> + * containing swap entries all with consecutive offsets and targeting the same
>>>> + * swap type.
>>>> + *
>>>
>>> Likely you should document that any swp pte bits are ignored?
>>
>> Now that I understand what swp pte bits are, I think the simplest thing is
>> just to make this function always consider the swp pte bits by using
>> pte_same(), as you suggest below. I don't think there is ever a case for
>> ignoring them. And then I don't need to do anything special for uffd-wp
>> either (below you suggested not batching when the VMA has uffd-wp enabled).
>>
>> Any concerns?
>>
>>>
>>>> + * max_nr must be at least one and must be limited by the caller so scanning
>>>> + * cannot exceed a single page table.
>>>> + *
>>>> + * Return: the number of table entries in the batch.
>>>> + */
>>>> +static inline int swap_pte_batch(pte_t *start_ptep, int max_nr,
>>>> +		swp_entry_t entry)
>>>> +{
>>>> +	const pte_t *end_ptep = start_ptep + max_nr;
>>>> +	unsigned long expected_offset = swp_offset(entry) + 1;
>>>> +	unsigned int expected_type = swp_type(entry);
>>>> +	pte_t *ptep = start_ptep + 1;
>>>> +
>>>> +	VM_WARN_ON(max_nr < 1);
>>>> +	VM_WARN_ON(non_swap_entry(entry));
>>>> +
>>>> +	while (ptep < end_ptep) {
>>>> +		pte_t pte = ptep_get(ptep);
>>>> +
>>>> +		if (pte_none(pte) || pte_present(pte))
>>>> +			break;
>>>> +
>>>> +		entry = pte_to_swp_entry(pte);
>>>> +
>>>> +		if (non_swap_entry(entry) ||
>>>> +		    swp_type(entry) != expected_type ||
>>>> +		    swp_offset(entry) != expected_offset)
>>>> +			break;
>>>> +
>>>> +		expected_offset++;
>>>> +		ptep++;
>>>> +	}
>>>> +
>>>> +	return ptep - start_ptep;
>>>> +}
>>>
>>> Looks very clean :)
>>>
>>> I was wondering whether we could similarly construct the expected swp PTE and
>>> only check pte_same.
>>>
>>> expected_pte = __swp_entry_to_pte(__swp_entry(expected_type, expected_offset));
>>
>> So planning to do this.
>
> Of course this clears all the swp pte bits in expected_pte, so we need to do
> something a bit more complex.
>
> If we can safely assume all offset bits are contiguous in every per-arch representation then we can do:
Looks like at least csky and hexagon store the offset in discontiguous regions.
So it will have to be the second approach if we want to avoid anything
arch-specific. I'll assume that for now; we can always specialize
pte_next_swp_offset() per-arch in the future if needed.
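To make the problem concrete, here is a throwaway userspace sketch. The bit
layout is invented for the example (it is not csky's or hexagon's actual
format): the offset is split across two fragments, so adding the raw value of
__swp_entry(0, 1) carries out of the low fragment into unrelated bits, whereas
decode/increment/re-encode stays correct:

#include <stdio.h>

typedef unsigned long pteval_t;

/* Hypothetical split-offset swap pte layout, for illustration only:
 * type in bits [6:2], offset[4:0] in bits [11:7], offset[N:5] from bit 14. */
static pteval_t mk_swp_pte(unsigned int type, unsigned long off)
{
	return ((pteval_t)type << 2) | ((off & 0x1f) << 7) | ((off >> 5) << 14);
}

static unsigned long swp_off(pteval_t pte)
{
	return ((pte >> 7) & 0x1f) | ((pte >> 14) << 5);
}

int main(void)
{
	pteval_t pte = mk_swp_pte(3, 31);	/* low offset fragment is full */
	pteval_t inc = mk_swp_pte(0, 1);	/* raw delta for "offset + 1"  */

	/* The carry out of bits [11:7] lands in bit 12, which belongs to
	 * neither offset fragment, so the decoded offset comes out wrong. */
	printf("raw add  : offset %lu (want 32)\n", swp_off(pte + inc));

	/* Decode, increment, re-encode (the second variant below) always works. */
	printf("re-encode: offset %lu (want 32)\n",
	       swp_off(mk_swp_pte(3, swp_off(pte) + 1)));

	return 0;
}

On an arch where the offset field is one contiguous bit range, the raw
addition is exactly equivalent to the re-encode, which is why the first
variant would have been fine there.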
>
> static inline pte_t pte_next_swp_offset(pte_t pte)
> {
> 	pte_t offset_inc = __swp_entry_to_pte(__swp_entry(0, 1));
>
> 	return __pte(pte_val(pte) + pte_val(offset_inc));
> }
>
> Or if not:
>
> static inline pte_t pte_next_swp_offset(pte_t pte)
> {
> 	swp_entry_t entry = pte_to_swp_entry(pte);
> 	pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry), swp_offset(entry) + 1));
>
> 	if (pte_swp_soft_dirty(pte))
> 		new = pte_swp_mksoft_dirty(new);
> 	if (pte_swp_exclusive(pte))
> 		new = pte_swp_mkexclusive(new);
> 	if (pte_swp_uffd_wp(pte))
> 		new = pte_swp_mkuffd_wp(new);
>
> 	return new;
> }
>
> Then swap_pte_batch() becomes:
>
> static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
> {
> 	pte_t expected_pte = pte_next_swp_offset(pte);
> 	const pte_t *end_ptep = start_ptep + max_nr;
> 	pte_t *ptep = start_ptep + 1;
>
> 	VM_WARN_ON(max_nr < 1);
> 	VM_WARN_ON(!is_swap_pte(pte));
> 	VM_WARN_ON(non_swap_entry(pte_to_swp_entry(pte)));
>
> 	while (ptep < end_ptep) {
> 		pte = ptep_get(ptep);
>
> 		if (!pte_same(pte, expected_pte))
> 			break;
>
> 		expected_pte = pte_next_swp_offset(expected_pte);
> 		ptep++;
> 	}
>
> 	return ptep - start_ptep;
> }
>
> Would you be happy with either of these? I'll go look if we can assume the offset bits are always contiguous.
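For completeness, with that signature the caller passes the first pte value
rather than the swap entry. A rough, untested sketch of what the
madvise_free_pte_range() hunk below would become (same variable names as in
the patch; zap_pte_range() would change in the same way):

			entry = pte_to_swp_entry(ptent);
			if (!non_swap_entry(entry)) {
				max_nr = (end - addr) / PAGE_SIZE;
				nr = swap_pte_batch(pte, max_nr, ptent);
				nr_swap -= nr;
				free_swap_and_cache_nr(entry, nr);
				clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
			}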
>
>
>>
>>>
>>> ... or have a variant to increase only the swp offset for an existing pte. But
>>> non-trivial due to the arch-dependent format.
>>
>> not this - I agree this will be difficult due to per-arch changes. I'd rather
>> just do the generic version and leave the compiler to do the best it can to
>> simplify and optimize.
>>
>>>
>>> But then, we'd fail on mismatch of other swp pte bits.
>>>
>>>
>>> On swapin, when reusing this function (likely!), we might need to make sure
>>> that the PTE bits match as well.
>>>
>>> See below regarding uffd-wp.
>>>
>>>
>>>> #endif /* CONFIG_MMU */
>>>> void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
>>>> diff --git a/mm/madvise.c b/mm/madvise.c
>>>> index 1f77a51baaac..070bedb4996e 100644
>>>> --- a/mm/madvise.c
>>>> +++ b/mm/madvise.c
>>>> @@ -628,6 +628,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>>>  	struct folio *folio;
>>>>  	int nr_swap = 0;
>>>>  	unsigned long next;
>>>> +	int nr, max_nr;
>>>>  
>>>>  	next = pmd_addr_end(addr, end);
>>>>  	if (pmd_trans_huge(*pmd))
>>>> @@ -640,7 +641,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>>>  		return 0;
>>>>  	flush_tlb_batched_pending(mm);
>>>>  	arch_enter_lazy_mmu_mode();
>>>> -	for (; addr != end; pte++, addr += PAGE_SIZE) {
>>>> +	for (; addr != end; pte += nr, addr += PAGE_SIZE * nr) {
>>>> +		nr = 1;
>>>>  		ptent = ptep_get(pte);
>>>>  
>>>>  		if (pte_none(ptent))
>>>> @@ -655,9 +657,11 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>>>  
>>>>  			entry = pte_to_swp_entry(ptent);
>>>>  			if (!non_swap_entry(entry)) {
>>>> -				nr_swap--;
>>>> -				free_swap_and_cache(entry);
>>>> -				pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>>>> +				max_nr = (end - addr) / PAGE_SIZE;
>>>> +				nr = swap_pte_batch(pte, max_nr, entry);
>>>> +				nr_swap -= nr;
>>>> +				free_swap_and_cache_nr(entry, nr);
>>>> +				clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
>>>>  			} else if (is_hwpoison_entry(entry) ||
>>>>  				   is_poisoned_swp_entry(entry)) {
>>>>  				pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 7dc6c3d9fa83..ef2968894718 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -1637,12 +1637,13 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>>>  			folio_remove_rmap_pte(folio, page, vma);
>>>>  			folio_put(folio);
>>>>  		} else if (!non_swap_entry(entry)) {
>>>> -			/* Genuine swap entry, hence a private anon page */
>>>> +			max_nr = (end - addr) / PAGE_SIZE;
>>>> +			nr = swap_pte_batch(pte, max_nr, entry);
>>>> +			/* Genuine swap entries, hence private anon pages */
>>>>  			if (!should_zap_cows(details))
>>>>  				continue;
>>>> -			rss[MM_SWAPENTS]--;
>>>> -			if (unlikely(!free_swap_and_cache(entry)))
>>>> -				print_bad_pte(vma, addr, ptent, NULL);
>>>> +			rss[MM_SWAPENTS] -= nr;
>>>> +			free_swap_and_cache_nr(entry, nr);
>>>>  		} else if (is_migration_entry(entry)) {
>>>>  			folio = pfn_swap_entry_folio(entry);
>>>>  			if (!should_zap_folio(details, folio))
>>>> @@ -1665,8 +1666,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>>>  			pr_alert("unrecognized swap entry 0x%lx\n", entry.val);
>>>>  			WARN_ON_ONCE(1);
>>>>  		}
>>>> -		pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>>>> -		zap_install_uffd_wp_if_needed(vma, addr, pte, 1, details, ptent);
>>>> +		clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
>>>
>>> For zap_install_uffd_wp_if_needed(), the uffd-wp bit has to match.
>>>
>>> zap_install_uffd_wp_if_needed() will use the uffd-wp information in
>>> ptent->pteval to decide whether to place PTE_MARKER_UFFD_WP markers.
>>>
>>> With a mixture, you either lose some markers or place too many.
>>>
>>> A simple workaround would be to disable any such batching if the VMA does have
>>> uffd-wp enabled.
>>
>> Rather than this, I'll just consider all the swp pte bits when batching;
>> since pte_same() requires the uffd-wp bit to match across the whole batch,
>> zap_install_uffd_wp_if_needed() will see a consistent ptent for every entry
>> in the batch.
>>
>>>
>>>> +		zap_install_uffd_wp_if_needed(vma, addr, pte, nr, details, ptent);
>>>>  	} while (pte += nr, addr += PAGE_SIZE * nr, addr != end);
>>
>> [...]
>>
>