From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Catalin Marinas <catalin.marinas@arm.com>,
Yin Fengwei <fengwei.yin@intel.com>,
Michal Hocko <mhocko@suse.com>, Will Deacon <will@kernel.org>,
"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>,
Nick Piggin <npiggin@gmail.com>,
Peter Zijlstra <peterz@infradead.org>,
Michael Ellerman <mpe@ellerman.id.au>,
Christophe Leroy <christophe.leroy@csgroup.eu>,
"Naveen N. Rao" <naveen.n.rao@linux.ibm.com>,
Heiko Carstens <hca@linux.ibm.com>,
Vasily Gorbik <gor@linux.ibm.com>,
Alexander Gordeev <agordeev@linux.ibm.com>,
Christian Borntraeger <borntraeger@linux.ibm.com>,
Sven Schnelle <svens@linux.ibm.com>,
Arnd Bergmann <arnd@arndb.de>,
linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
linux-s390@vger.kernel.org
Subject: Re: [PATCH v2 09/10] mm/mmu_gather: improve cond_resched() handling with large folios and expensive page freeing
Date: Mon, 12 Feb 2024 11:21:52 +0000
Message-ID: <398991e6-d09d-4f47-a110-4ff1e8356b6e@arm.com>
In-Reply-To: <66ca6c58-1983-494f-b920-140be736f1d8@redhat.com>
On 12/02/2024 11:05, David Hildenbrand wrote:
> On 12.02.24 11:56, David Hildenbrand wrote:
>> On 12.02.24 11:32, Ryan Roberts wrote:
>>> On 12/02/2024 10:11, David Hildenbrand wrote:
>>>> Hi Ryan,
>>>>
>>>>>> -static void tlb_batch_pages_flush(struct mmu_gather *tlb)
>>>>>> +static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
>>>>>>  {
>>>>>> -	struct mmu_gather_batch *batch;
>>>>>> -
>>>>>> -	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
>>>>>> -		struct encoded_page **pages = batch->encoded_pages;
>>>>>> +	struct encoded_page **pages = batch->encoded_pages;
>>>>>> +	unsigned int nr, nr_pages;
>>>>>> +	/*
>>>>>> +	 * We might end up freeing a lot of pages. Reschedule on a regular
>>>>>> +	 * basis to avoid soft lockups in configurations without full
>>>>>> +	 * preemption enabled. The magic number of 512 folios seems to work.
>>>>>> +	 */
>>>>>> +	if (!page_poisoning_enabled_static() && !want_init_on_free()) {
>>>>>
>>>>> Is the performance win really worth 2 separate implementations keyed off
>>>>> this? It seems a bit fragile, in case any other operations that are
>>>>> proportional to size get added to the free path in future. Why not just
>>>>> always do the conservative version?
>>>>
>>>> I really don't want to iterate over all entries in the "sane" common case. We
>>>> already do that twice:
>>>>
>>>> a) free_pages_and_swap_cache()
>>>>
>>>> b) release_pages()
>>>>
>>>> Only the latter really is required, and I'm planning on removing the one in (a)
>>>> to move it into (b) as well.
>>>>
>>>> So I keep it separate to confine any unnecessary overhead to the setups that
>>>> are already terribly slow.
>>>>
>>>> No need to iterate a page full of entries if it can be easily avoided.
>>>> Especially, no need to degrade the common order-0 case.
>>>
>>> Yeah, I understand all that. But given this is all coming from an array (so
>>> easy to prefetch?) and will presumably all fit in the cache in the common
>>> case, at least, so it's hot for (a) and (b), does separating this out really
>>> make a measurable performance difference? If yes, then absolutely this
>>> optimization makes sense. But if not, I think it's a bit questionable.
>>
>> I primarily added it because
>>
>> (a) we learned that each cycle counts during mmap() just like it does
>> during fork().
>>
>> (b) Linus was similarly concerned about optimizing out another batching
>> walk in c47454823bd4 ("mm: mmu_gather: allow more than one batch of
>> delayed rmaps"):
>>
>> "it needs to walk that array of pages while still holding the page table
>> lock, and our mmu_gather infrastructure allows for batching quite a lot
>> of pages. We may have thousands of pages queued up for freeing, and we
>> wanted to walk only the last batch if we then added a dirty page to the
>> queue."
>>
>> So if it matters enough for reducing the time we hold the page table
>> lock, it surely adds "some" overhead in general.
>>
>>
>>>
>>> You're the boss though, so if your experience tells you this is necessary,
>>> then I'm ok with that.
>>
>> I did not do any measurements myself, I just did that intuitively as
>> above. After all, it's all pretty straightforward (keeping the existing
>> logic, we need a new one either way) and not that much code.
>>
>> So unless there are strong opinions, I'd just leave the common case as
>> it was, and let the odd case be special.
>
> I think we can just reduce the code duplication easily:
>
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index d175c0f1e2c8..99b3e9408aa0 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -91,18 +91,21 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
>  }
>  #endif
>
> -static void tlb_batch_pages_flush(struct mmu_gather *tlb)
> -{
> -	struct mmu_gather_batch *batch;
> +/*
> + * We might end up freeing a lot of pages. Reschedule on a regular
> + * basis to avoid soft lockups in configurations without full
> + * preemption enabled. The magic number of 512 folios seems to work.
> + */
> +#define MAX_NR_FOLIOS_PER_FREE	512
>
> -	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
> -		struct encoded_page **pages = batch->encoded_pages;
> +static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
> +{
> +	struct encoded_page **pages = batch->encoded_pages;
> +	unsigned int nr, nr_pages;
>
> -		while (batch->nr) {
> -			/*
> -			 * limit free batch count when PAGE_SIZE > 4K
> -			 */
> -			unsigned int nr = min(512U, batch->nr);
> +	while (batch->nr) {
> +		if (!page_poisoning_enabled_static() && !want_init_on_free()) {
> +			nr = min(MAX_NR_FOLIOS_PER_FREE, batch->nr);
>
>  			/*
>  			 * Make sure we cover page + nr_pages, and don't leave
> @@ -111,14 +114,39 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
>  			if (unlikely(encoded_page_flags(pages[nr - 1]) &
>  				     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
>  				nr++;
> +		} else {
> +			/*
> +			 * With page poisoning and init_on_free, the time it
> +			 * takes to free memory grows proportionally with the
> +			 * actual memory size. Therefore, limit based on the
> +			 * actual memory size and not the number of involved
> +			 * folios.
> +			 */
> +			for (nr = 0, nr_pages = 0;
> +			     nr < batch->nr && nr_pages < MAX_NR_FOLIOS_PER_FREE;
> +			     nr++) {
> +				if (unlikely(encoded_page_flags(pages[nr]) &
> +					     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
> +					nr_pages += encoded_nr_pages(pages[++nr]);
> +				else
> +					nr_pages++;
> +			}
> +		}
>
> -			free_pages_and_swap_cache(pages, nr);
> -			pages += nr;
> -			batch->nr -= nr;
> +		free_pages_and_swap_cache(pages, nr);
> +		pages += nr;
> +		batch->nr -= nr;
>
> -			cond_resched();
> -		}
> +		cond_resched();
>  	}
> +}
> +
> +static void tlb_batch_pages_flush(struct mmu_gather *tlb)
> +{
> +	struct mmu_gather_batch *batch;
> +
> +	for (batch = &tlb->local; batch && batch->nr; batch = batch->next)
> +		__tlb_batch_free_encoded_pages(batch);
>  	tlb->active = &tlb->local;
>  }
>
Yes, this is much cleaner IMHO! I don't think putting the poisoning and
init_on_free checks inside the while loop should make a whole lot of
difference - you're only going round that loop once in the common (4K pages)
case anyway.
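
If I have the batch sizing right, that's because a batch can never exceed the
512 cap on a 4K kernel in the first place; a rough sketch of the arithmetic,
assuming the generic MAX_GATHER_BATCH definition from
include/asm-generic/tlb.h:

	/* One mmu_gather_batch is carved out of a single page. */
	#define MAX_GATHER_BATCH	\
		((PAGE_SIZE - sizeof(struct mmu_gather_batch)) / sizeof(void *))

	/*
	 * With PAGE_SIZE == 4096 and 8-byte pointers (taking
	 * sizeof(struct mmu_gather_batch) as 16), that is at most
	 * (4096 - 16) / 8 = 510 entries per batch, below
	 * MAX_NR_FOLIOS_PER_FREE (512). So nr == batch->nr on the first
	 * pass and the while loop body executes exactly once per batch.
	 */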
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
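
FWIW, my mental model of the accounting in the new else branch, in case it
helps anyone reading along: when a large folio is queued it consumes two
array slots - the encoded page with ENCODED_PAGE_BIT_NR_PAGES_NEXT set,
followed by the page count - so the walk is roughly equivalent to this
userspace sketch (simplified stand-in types and a made-up helper name, not
the kernel's real pointer-tagging encoding):

	#include <stdbool.h>

	/* Simplified model of one encoded_pages[] slot. */
	struct slot {
		bool nr_pages_next;	/* models ENCODED_PAGE_BIT_NR_PAGES_NEXT */
		unsigned int nr_pages;	/* page count; only meaningful in the
					 * slot following a flagged entry */
	};

	/*
	 * Mirror the poisoning/init_on_free branch: return how many array
	 * slots (nr) to free in one go so the number of actual pages stays
	 * capped. A flagged entry is always followed by its count slot.
	 */
	static unsigned int slots_for_cap(const struct slot *s,
					  unsigned int total, unsigned int cap)
	{
		unsigned int nr, nr_pages;

		for (nr = 0, nr_pages = 0; nr < total && nr_pages < cap; nr++) {
			if (s[nr].nr_pages_next)
				nr_pages += s[++nr].nr_pages; /* large folio */
			else
				nr_pages++; /* order-0 folio: one page */
		}
		return nr;
	}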