From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand <david@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Huang Ying <ying.huang@intel.com>, Gao Xiang <xiang@kernel.org>,
Yu Zhao <yuzhao@google.com>, Yang Shi <shy828301@gmail.com>,
Michal Hocko <mhocko@suse.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v3 1/4] mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags
Date: Mon, 4 Mar 2024 16:03:53 +0000
Message-ID: <b642c7ff-c452-4066-ac12-dbf05e215cb9@arm.com>
In-Reply-To: <d2fbfdd0-ad61-4fe2-a976-4dac7427bfc9@redhat.com>
On 28/02/2024 12:12, David Hildenbrand wrote:
>>> How relevant is it? Relevant enough that someone decided to put that
>>> optimization in? I don't know :)
>>
>> I'll have one last go at convincing you: Huang Ying (original author) commented
>> "I believe this should be OK. Better to compare the performance too." at [1].
>> That implies to me that perhaps the optimization wasn't in response to a
>> specific problem after all. Do you have any thoughts, Huang?
>
> Might make sense to include that in the patch description!
>
>> OK so if we really do need to keep this optimization, here are some ideas:
>>
>> Fundamentally, we would like to be able to figure out the size of the swap slot
>> from the swap entry. Today swap supports 2 sizes: PAGE_SIZE and PMD_SIZE. For
>> PMD_SIZE, it always uses a full cluster, so we can easily add a flag to the
>> cluster to mark it as PMD_SIZE.
>>
>> Going forwards, we want to support all sizes (power-of-2). Most of the time, a
>> cluster will contain only one size of THPs, but this is not the case when a THP
>> in the swapcache gets split or when an order-0 slot gets stolen. We expect these
>> cases to be rare.
>>
>> 1) Keep the size of the smallest swap entry in the cluster header. Most of the
>> time it will be the full size of the swap entry, but sometimes it will cover
>> only a portion. In the latter case you may see a false negative for
>> swap_page_trans_huge_swapped() meaning we take the slow path, but that is rare.
>> There is one wrinkle: currently the HUGE flag is cleared in put_swap_folio(). We
>> wouldn't want to do the equivalent in the new scheme (i.e. set the whole cluster
>> to order-0). I think that is safe, but haven't completely convinced myself yet.
>>
>> 2) allocate 4 bits per (small) swap slot to hold the order. This will give
>> precise information and is conceptually simpler to understand, but will cost
>> more memory (half as much as the initial swap_map[] again).
>>
>> I still prefer to avoid this at all if we can (and would like to hear Huang's
>> thoughts). But if it's a choice between 1 and 2, I prefer 1 - I'll do some
>> prototyping.
>
> Taking a step back: what about we simply batch unmapping of swap entries?
>
> That is, if we're unmapping a PTE range, we'll collect swap entries (under PT
> lock) that reference consecutive swap offsets in the same swap file.
>
> There, we can then first decrement all the swap counts, and then try minimizing
> how often we actually have to try reclaiming swap space (lookup folio, see it's
> a large folio that we cannot reclaim or could reclaim, ...).
>
> Might need some fine-tuning in swap code to "advance" to the next entry to try
> freeing up, but we certainly can do better than what we would do right now.
>
Hi,

I'm struggling to convince myself that free_swap_and_cache() can't race with
swapoff(). Can anyone explain why this is safe?
I *think* they are both serialized by the PTL, since all callers of
free_swap_and_cache() (except shmem) have the PTL, and swapoff() calls
try_to_unuse() early on, which takes the PTL as it iterates over every vma in
every mm. It looks like shmem is handled specially by a call to shmem_unuse(),
but I can't see the exact serialization mechanism.

I've implemented a batching function, as David suggested above, but I'm trying
to convince myself that it is safe for it to access si->swap_map[] without a
lock (i.e. that swapoff() can't concurrently free it). I think the existing
free_swap_and_cache() already depends on being able to access the si without an
explicit lock, so I'm assuming the same mechanism will protect my new changes;
I just want to be sure I understand what that mechanism is...
This is the existing free_swap_and_cache(). I think _swap_info_get() would break
if this could race with swapoff(), and __swap_entry_free() looks up the cluster
from an array, which would also be freed by swapoff if racing:

int free_swap_and_cache(swp_entry_t entry)
{
	struct swap_info_struct *p;
	unsigned char count;

	if (non_swap_entry(entry))
		return 1;

	p = _swap_info_get(entry);
	if (p) {
		count = __swap_entry_free(p, entry);
		if (count == SWAP_HAS_CACHE)
			__try_to_reclaim_swap(p, swp_offset(entry),
					      TTRS_UNMAPPED | TTRS_FULL);
	}
	return p != NULL;
}

This is my new function. I want to be sure that it's safe to do the
READ_ONCE(si->swap_map[...]):

void free_swap_and_cache_nr(swp_entry_t entry, int nr)
{
	unsigned long end = swp_offset(entry) + nr;
	unsigned type = swp_type(entry);
	struct swap_info_struct *si;
	unsigned long offset;

	if (non_swap_entry(entry))
		return;

	si = _swap_info_get(entry);
	if (!si || end > si->max)
		return;

	/*
	 * First free all entries in the range.
	 */
	for (offset = swp_offset(entry); offset < end; offset++) {
		VM_WARN_ON(data_race(!si->swap_map[offset]));
		__swap_entry_free(si, swp_entry(type, offset));
	}

	/*
	 * Now go back over the range trying to reclaim the swap cache. This is
	 * more efficient for large folios because we will only try to reclaim
	 * the swap once per folio in the common case. If we do
	 * __swap_entry_free() and __try_to_reclaim_swap() in the same loop,
	 * the latter will get a reference and lock the folio for every
	 * individual page but will only succeed once the swap slot for every
	 * subpage is zero.
	 */
	for (offset = swp_offset(entry); offset < end; offset += nr) {
		nr = 1;
		if (READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE) {	<< HERE
			/*
			 * Folios are always naturally aligned in swap so
			 * advance forward to the next boundary. Zero means no
			 * folio was found for the swap entry, so advance by 1
			 * in this case. Negative value means folio was found
			 * but could not be reclaimed. Here we can still
			 * advance to the next boundary.
			 */
			nr = __try_to_reclaim_swap(si, offset,
						   TTRS_UNMAPPED | TTRS_FULL);
			if (nr == 0)
				nr = 1;
			else if (nr < 0)
				nr = -nr;
			nr = ALIGN(offset + 1, nr) - offset;
		}
	}
}
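
For context, the caller side would batch up consecutive swap entries under the
PTL before calling the above. A rough sketch of the kind of helper I have in
mind (the name and details are illustrative, not the final code):

/*
 * Sketch only: starting at *pte (which maps 'entry'), count how many
 * subsequent PTEs reference consecutive swap offsets in the same swap
 * file, so the caller can pass the whole run to free_swap_and_cache_nr()
 * instead of freeing each entry individually.
 */
static int swap_pte_batch(pte_t *pte, int max_nr, swp_entry_t entry)
{
	unsigned long expected_offset = swp_offset(entry) + 1;
	unsigned int type = swp_type(entry);
	int nr = 1;

	while (nr < max_nr) {
		pte_t ptent = ptep_get(pte + nr);
		swp_entry_t next;

		/* Stop at anything that is not a swap entry. */
		if (pte_none(ptent) || pte_present(ptent))
			break;

		next = pte_to_swp_entry(ptent);
		if (non_swap_entry(next) || swp_type(next) != type ||
		    swp_offset(next) != expected_offset)
			break;

		expected_offset++;
		nr++;
	}

	return nr;
}

The idea is that zap_pte_range() would then make a single
free_swap_and_cache_nr(entry, nr) call for the whole run rather than one
free_swap_and_cache() call per PTE.
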
Thanks,
Ryan