From: David Hildenbrand <david@redhat.com>
To: Ryan Roberts <ryan.roberts@arm.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	Huang Ying <ying.huang@intel.com>, Gao Xiang <xiang@kernel.org>,
	Yu Zhao <yuzhao@google.com>, Yang Shi <shy828301@gmail.com>,
	Michal Hocko <mhocko@suse.com>,
	Kefeng Wang <wangkefeng.wang@huawei.com>,
	Barry Song <21cnbao@gmail.com>, Chris Li <chrisl@kernel.org>,
	Lance Yang <ioworker0@gmail.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v6 2/6] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()
Date: Mon, 8 Apr 2024 17:13:29 +0200
Message-ID: <6ea2d90c-8a94-4361-af71-1a990bbe7096@redhat.com>
In-Reply-To: <2cfa542a-ae38-4867-a64b-621e7778fdf7@arm.com>

On 08.04.24 15:27, Ryan Roberts wrote:
> On 08/04/2024 13:47, Ryan Roberts wrote:
>> On 08/04/2024 13:07, Ryan Roberts wrote:
>>> [...]
>>>>
>>>> [...]
>>>>
>>>>> +
>>>>> +/**
>>>>> + * swap_pte_batch - detect a PTE batch for a set of contiguous swap entries
>>>>> + * @start_ptep: Page table pointer for the first entry.
>>>>> + * @max_nr: The maximum number of table entries to consider.
>>>>> + * @entry: Swap entry recovered from the first table entry.
>>>>> + *
>>>>> + * Detect a batch of contiguous swap entries: consecutive (non-present) PTEs
>>>>> + * containing swap entries all with consecutive offsets and targeting the same
>>>>> + * swap type.
>>>>> + *
>>>>
>>>> Likely you should document that any swp pte bits are ignored?
>>>
>>> Now that I understand what swp pte bits are, I think the simplest thing is to
>>> just make this function always consider the pte bits by using pte_same() as you
>>> suggest below? I don't think there is ever a case for ignoring the swp pte bits?
>>> And then I don't need to do anything special for uffd-wp either (below you
>>> suggested not doing batching when the VMA has uffd enabled).
>>>
>>> Any concerns?
>>>
>>>>
>>>>> + * max_nr must be at least one and must be limited by the caller so scanning
>>>>> + * cannot exceed a single page table.
>>>>> + *
>>>>> + * Return: the number of table entries in the batch.
>>>>> + */
>>>>> +static inline int swap_pte_batch(pte_t *start_ptep, int max_nr,
>>>>> +                 swp_entry_t entry)
>>>>> +{
>>>>> +    const pte_t *end_ptep = start_ptep + max_nr;
>>>>> +    unsigned long expected_offset = swp_offset(entry) + 1;
>>>>> +    unsigned int expected_type = swp_type(entry);
>>>>> +    pte_t *ptep = start_ptep + 1;
>>>>> +
>>>>> +    VM_WARN_ON(max_nr < 1);
>>>>> +    VM_WARN_ON(non_swap_entry(entry));
>>>>> +
>>>>> +    while (ptep < end_ptep) {
>>>>> +        pte_t pte = ptep_get(ptep);
>>>>> +
>>>>> +        if (pte_none(pte) || pte_present(pte))
>>>>> +            break;
>>>>> +
>>>>> +        entry = pte_to_swp_entry(pte);
>>>>> +
>>>>> +        if (non_swap_entry(entry) ||
>>>>> +            swp_type(entry) != expected_type ||
>>>>> +            swp_offset(entry) != expected_offset)
>>>>> +            break;
>>>>> +
>>>>> +        expected_offset++;
>>>>> +        ptep++;
>>>>> +    }
>>>>> +
>>>>> +    return ptep - start_ptep;
>>>>> +}
>>>>
>>>> Looks very clean :)
>>>>
>>>> I was wondering whether we could similarly construct the expected swp PTE and
>>>> only check pte_same.
>>>>
>>>> expected_pte = __swp_entry_to_pte(__swp_entry(expected_type, expected_offset));
>>>
>>> So planning to do this.
>>
>> Of course this clears all the swp pte bits in expected_pte. So we need to
>> do something a bit more complex.
>>
>> If we can safely assume all offset bits are contiguous in every per-arch representation then we can do:
> 
> Looks like at least csky and hexagon store the offset in discontiguous regions.
> So it will have to be the second approach if we want to avoid anything
> arch-specific. I'll assume that for now; we can always specialize
> pte_next_swp_offset() per-arch in the future if needed.

Sounds good. Just have a generic variant as you proposed, and add the 
per-arch one if really required later.
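
For reference, a rough sketch of such a generic variant could look like the
following. This is only a sketch, not the final patch: the
pte_next_swp_offset() name is taken from the proposal above, and the batch
function would have to take the full pte rather than the swp_entry_t so the
swp pte bits are available for the pte_same() comparison. Which swp pte bits
actually exist depends on the kernel config, but the generic headers provide
fallbacks for all of the helpers used here.

static inline pte_t pte_next_swp_offset(pte_t pte)
{
        swp_entry_t entry = pte_to_swp_entry(pte);
        pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
                                                   swp_offset(entry) + 1));

        /* Carry over the swp pte bits so pte_same() keeps matching. */
        if (pte_swp_soft_dirty(pte))
                new = pte_swp_mksoft_dirty(new);
        if (pte_swp_exclusive(pte))
                new = pte_swp_mkexclusive(new);
        if (pte_swp_uffd_wp(pte))
                new = pte_swp_mkuffd_wp(new);

        return new;
}

static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
{
        pte_t expected_pte = pte_next_swp_offset(pte);
        const pte_t *end_ptep = start_ptep + max_nr;
        pte_t *ptep = start_ptep + 1;

        VM_WARN_ON(max_nr < 1);
        VM_WARN_ON(!is_swap_pte(pte));
        VM_WARN_ON(non_swap_entry(pte_to_swp_entry(pte)));

        while (ptep < end_ptep) {
                pte = ptep_get(ptep);

                /* Any mismatch (type, offset or swp pte bits) ends the batch. */
                if (!pte_same(pte, expected_pte))
                        break;

                expected_pte = pte_next_swp_offset(expected_pte);
                ptep++;
        }

        return ptep - start_ptep;
}

That way uffd-wp and soft-dirty markers simply terminate the batch as soon as
the bits differ, with no special-casing needed for uffd-enabled VMAs.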

-- 
Cheers,

David / dhildenb


