From: David Hildenbrand <david@redhat.com>
To: Ryan Roberts <ryan.roberts@arm.com>,
Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Yin Fengwei <fengwei.yin@intel.com>, Yu Zhao <yuzhao@google.com>,
Yang Shi <shy828301@gmail.com>,
"Huang, Ying" <ying.huang@intel.com>, Zi Yan <ziy@nvidia.com>,
Nathan Chancellor <nathan@kernel.org>,
Alexander Gordeev <agordeev@linux.ibm.com>,
Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v4 3/3] mm: Batch-zap large anonymous folio PTE mappings
Date: Thu, 3 Aug 2023 15:50:25 +0200
Message-ID: <a3aba793-5770-cfcd-3dab-91bcbe49c241@redhat.com>
In-Reply-To: <6cda91b3-bb7a-4c4c-a618-2572b9c8bbf9@redhat.com>
On 03.08.23 15:38, David Hildenbrand wrote:
> On 27.07.23 16:18, Ryan Roberts wrote:
>> This allows batching the rmap removal with folio_remove_rmap_range(),
>> which means we avoid spuriously adding a partially unmapped folio to the
>> deferred split queue in the common case, which reduces split queue lock
>> contention.
>>
>> Previously each page was removed from the rmap individually with
>> page_remove_rmap(). If the first page belonged to a large folio, this
>> would cause page_remove_rmap() to conclude that the folio was now
>> partially mapped and add the folio to the deferred split queue. But
>> subsequent calls would cause the folio to become fully unmapped, meaning
>> there is no value to adding it to the split queue.
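
(Schematically, the per-page flow being replaced looks like the sketch
below -- a simplified illustration, not the literal zap_pte_range() code;
locals, uffd-wp handling, and setup are omitted:)

	for (i = 0; i < nr_pages; i++, page++, pte++, addr += PAGE_SIZE) {
		/* ptent: dirty/accessed bits would be propagated here */
		ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
		tlb_remove_tlb_entry(tlb, pte, addr);
		/*
		 * For a large folio, the first call sees the folio become
		 * partially mapped and queues it for deferred splitting,
		 * even though the remaining iterations fully unmap it.
		 */
		page_remove_rmap(page, vma, false);
		__tlb_remove_page(tlb, page, 0);
	}
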
>>
>> A complicating factor is that for platforms where MMU_GATHER_NO_GATHER
>> is enabled (e.g. s390), __tlb_remove_page() drops a reference to the
>> page. This means that the folio reference count could drop to zero while
>> still in use (i.e. before folio_remove_rmap_range() is called). This
>> does not happen on other platforms because the actual page freeing is
>> deferred.
>>
>> Solve this by appropriately getting/putting the folio to guarantee it
>> does not get freed early. Given the need to get/put the folio in the
>> batch path, we stick to the non-batched path if the folio is not large.
>> While the batched path is functionally correct for a folio with 1 page,
>> it is unlikely to be as efficient as the existing non-batched path in
>> this case.
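
(The quoted hunk below is trimmed before the end of the loop, so for
reference the shape of the get/put pairing is roughly the following -- a
sketch of the pattern under discussion, not the exact patch code:)

	folio_get(folio);	/* keep the refcount above zero throughout */
	for (i = 0; i < nr_pages; i++, addr += PAGE_SIZE) {
		ptep_get_and_clear_full(mm, addr, pte + i, tlb->fullmm);
		tlb_remove_tlb_entry(tlb, pte + i, addr);
		/* with MMU_GATHER_NO_GATHER this may drop a page reference */
		__tlb_remove_page(tlb, page + i, 0);
	}
	folio_remove_rmap_range(folio, page, nr_pages, vma);
	folio_put(folio);	/* any actual freeing happens here at the earliest */
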
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>> mm/memory.c | 132 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 132 insertions(+)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 01f39e8144ef..d35bd8d2b855 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -1391,6 +1391,99 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
>> pte_install_uffd_wp_if_needed(vma, addr, pte, pteval);
>> }
>>
>> +static inline unsigned long page_cont_mapped_vaddr(struct page *page,
>> + struct page *anchor, unsigned long anchor_vaddr)
>> +{
>> + unsigned long offset;
>> + unsigned long vaddr;
>> +
>> + offset = (page_to_pfn(page) - page_to_pfn(anchor)) << PAGE_SHIFT;
>> + vaddr = anchor_vaddr + offset;
>> +
>> + if (anchor > page) {
>> + if (vaddr > anchor_vaddr)
>> + return 0;
>> + } else {
>> + if (vaddr < anchor_vaddr)
>> + return ULONG_MAX;
>> + }
>> +
>> + return vaddr;
>> +}
>> +
>> +static int folio_nr_pages_cont_mapped(struct folio *folio,
>> + struct page *page, pte_t *pte,
>> + unsigned long addr, unsigned long end)
>> +{
>> + pte_t ptent;
>> + int floops;
>> + int i;
>> + unsigned long pfn;
>> + struct page *folio_end;
>> +
>> + if (!folio_test_large(folio))
>> + return 1;
>> +
>> + folio_end = &folio->page + folio_nr_pages(folio);
>> + end = min(page_cont_mapped_vaddr(folio_end, page, addr), end);
>> + floops = (end - addr) >> PAGE_SHIFT;
>> + pfn = page_to_pfn(page);
>> + pfn++;
>> + pte++;
>> +
>> + for (i = 1; i < floops; i++) {
>> + ptent = ptep_get(pte);
>> +
>> + if (!pte_present(ptent) || pte_pfn(ptent) != pfn)
>> + break;
>> +
>> + pfn++;
>> + pte++;
>> + }
>> +
>> + return i;
>> +}
>> +
>> +static unsigned long try_zap_anon_pte_range(struct mmu_gather *tlb,
>> + struct vm_area_struct *vma,
>> + struct folio *folio,
>> + struct page *page, pte_t *pte,
>> + unsigned long addr, int nr_pages,
>> + struct zap_details *details)
>> +{
>> + struct mm_struct *mm = tlb->mm;
>> + pte_t ptent;
>> + bool full;
>> + int i;
>> +
>> + /* __tlb_remove_page may drop a ref; prevent going to 0 while in use. */
>> + folio_get(folio);
>
> Is there no way around that? It feels wrong and IMHO a bit ugly.
>
> With this patch, you might suddenly have mapcount > refcount for a
> folio, or am I wrong?
Thinking about it, maybe we should really find a way to keep the current
logic flow unmodified:
1) ptep_get_and_clear_full()
2) tlb_remove_tlb_entry()
3) page_remove_rmap()
4) __tlb_remove_page()
For example, one loop to handle 1) and 2), a single
folio_remove_rmap_range() call for 3), and another loop to handle 4).
This will need a way to query, for the first loop, how many times we can
call __tlb_remove_page() before we need a flush.
The simple answer would be "batch->max - batch->nr". tlb_next_batch()
makes exceeding that limit a bit harder to account for, but maybe going
beyond the current batch is not really required.
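
Roughly the following (a completely untested sketch; tlb_reserve_space()
is a made-up helper that would return how many pages still fit before
__tlb_remove_page() needs a flush, i.e. something like
"batch->max - batch->nr" for the active batch):

	nr = min(nr_pages, tlb_reserve_space(tlb));	/* hypothetical helper */

	/* Loop 1: clear the PTEs and queue the TLB invalidations. */
	for (i = 0; i < nr; i++) {
		ptep_get_and_clear_full(mm, addr + i * PAGE_SIZE, pte + i,
					tlb->fullmm);
		tlb_remove_tlb_entry(tlb, pte + i, addr + i * PAGE_SIZE);
	}

	/* 3) batched, while every page still holds its reference. */
	folio_remove_rmap_range(folio, page, nr, vma);

	/* Loop 2: only now may references get dropped (MMU_GATHER_NO_GATHER). */
	for (i = 0; i < nr; i++)
		__tlb_remove_page(tlb, page + i, 0);

That way, mapcount can never exceed refcount, and no extra
folio_get()/folio_put() would be needed.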
--
Cheers,
David / dhildenb