From: Ryan Roberts <ryan.roberts@arm.com>
To: Yu Zhao <yuzhao@google.com>, Hugh Dickins <hughd@google.com>,
Matthew Wilcox <willy@infradead.org>,
Andrew Morton <akpm@linux-foundation.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
Yin Fengwei <fengwei.yin@intel.com>,
David Hildenbrand <david@redhat.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
Anshuman Khandual <anshuman.khandual@arm.com>,
Yang Shi <shy828301@gmail.com>,
"Huang, Ying" <ying.huang@intel.com>, Zi Yan <ziy@nvidia.com>,
Luis Chamberlain <mcgrof@kernel.org>,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v3 3/4] mm: FLEXIBLE_THP for improved performance
Date: Mon, 17 Jul 2023 14:36:03 +0100 [thread overview]
Message-ID: <5df787a0-8e69-2472-cdd6-f96a3f7dfaaf@arm.com> (raw)
In-Reply-To: <CAOUHufaDfJwF_-zb6zV5COG-KaaGcSyrNmbaEzaWz2UjcGGgHQ@mail.gmail.com>
>>>> +static int alloc_anon_folio(struct vm_fault *vmf, struct folio **folio)
>>>> +{
>>>> + int i;
>>>> + gfp_t gfp;
>>>> + pte_t *pte;
>>>> + unsigned long addr;
>>>> + struct vm_area_struct *vma = vmf->vma;
>>>> + int prefer = anon_folio_order(vma);
>>>> + int orders[] = {
>>>> + prefer,
>>>> + prefer > PAGE_ALLOC_COSTLY_ORDER ? PAGE_ALLOC_COSTLY_ORDER : 0,
>>>> + 0,
>>>> + };
>>>> +
>>>> + *folio = NULL;
>>>> +
>>>> + if (vmf_orig_pte_uffd_wp(vmf))
>>>> + goto fallback;
>>>> +
>>>> + for (i = 0; orders[i]; i++) {
>>>> + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
>>>> + if (addr >= vma->vm_start &&
>>>> + addr + (PAGE_SIZE << orders[i]) <= vma->vm_end)
>>>> + break;
>>>> + }
>>>> +
>>>> + if (!orders[i])
>>>> + goto fallback;
>>>> +
>>>> + pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
>>>> + if (!pte)
>>>> + return -EAGAIN;
>>>
>>> It would be a bug if this happens. So probably -EINVAL?
>>
>> Not sure what you mean? Hugh Dickins' series that went into v6.5-rc1 makes it
>> possible for pte_offset_map() to fail (if I understood correctly), and we have
>> to handle that. The intent is that we return from the fault without making any
>> change, then we refault and try again.
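To illustrate that contract, here is a tiny userspace analogy (everything here is made up for illustration, nothing is the kernel's actual code): -EAGAIN backs out having changed nothing, and the caller's "retry" is simply taking the fault again.

```c
#include <errno.h>

/* Fails once with -EAGAIN (as if the page table vanished under us),
 * then succeeds; the failing path modifies no state. */
static int fails_left = 1;

static int handle_fault(void)
{
	if (fails_left > 0) {
		fails_left--;
		return -EAGAIN;	/* back out, change nothing */
	}
	return 0;		/* fault serviced */
}

/* The "refault" loop: on -EAGAIN the CPU would simply fault again. */
static int fault_until_done(void)
{
	int retries = 0;

	while (handle_fault() == -EAGAIN)
		retries++;
	return retries;
}
```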
>
> Thanks for checking that -- it's very relevant. One detail is that
> Hugh's series doesn't affect anon. IOW, collapsing PTEs into a PMD can't
> happen while we are holding mmap_lock for read here, and therefore the
> race that could cause pte_offset_map() on shmem/file PTEs to fail
> doesn't apply here.

But Hugh's patches changed do_anonymous_page() to handle failure from
pte_offset_map_lock(), so I was just following that pattern. If this really
can't happen, then I'd rather WARN/BUG on it and simplify alloc_anon_folio()'s
prototype to return a `struct folio *` directly (where NULL means -ENOMEM).

Hugh, perhaps you can comment?
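To make that concrete, here is a rough userspace model of the simplified shape being proposed (the struct folio stub, the force_oom knob, and all names here are illustrative stand-ins, not the kernel's):

```c
#include <errno.h>
#include <stdlib.h>

/* Stand-in for struct folio; the real type lives in the kernel. */
struct folio { int order; };

static int force_oom;	/* test knob: make allocation fail */

/* Simplified shape: return the folio directly; NULL means out of memory.
 * The "impossible" pte_offset_map() failure would become a WARN inside,
 * instead of a distinct -EAGAIN threaded out to the caller. */
static struct folio *alloc_anon_folio_model(int order)
{
	struct folio *f;

	if (force_oom)
		return NULL;
	f = malloc(sizeof(*f));
	if (!f)
		return NULL;
	f->order = order;
	return f;
}

/* The caller maps NULL onto -ENOMEM, as the fault handler would. */
static int caller(int order)
{
	struct folio *f = alloc_anon_folio_model(order);

	if (!f)
		return -ENOMEM;
	free(f);
	return 0;
}
```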

As an aside, it was my understanding from LWN that we now use a per-VMA lock,
so presumably we don't hold mmap_lock for read here? Or perhaps that only
applies to file-backed memory?
>
> +Hugh Dickins for further consultation if you need it.
>
>>>> +
>>>> + for (; orders[i]; i++) {
>>>> + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
>>>> + vmf->pte = pte + pte_index(addr);
>>>> + if (!vmf_pte_range_changed(vmf, 1 << orders[i]))
>>>> + break;
>>>> + }
>>>> +
>>>> + vmf->pte = NULL;
>>>> + pte_unmap(pte);
>>>> +
>>>> + gfp = vma_thp_gfp_mask(vma);
>>>> +
>>>> + for (; orders[i]; i++) {
>>>> + addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
>>>> + *folio = vma_alloc_folio(gfp, orders[i], vma, addr, true);
>>>> + if (*folio) {
>>>> + clear_huge_page(&(*folio)->page, addr, 1 << orders[i]);
>>>> + return 0;
>>>> + }
>>>> + }
>>>> +
>>>> +fallback:
>>>> + *folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
>>>> + return *folio ? 0 : -ENOMEM;
>>>> +}
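
(Aside for anyone following along: the order-fallback walk in the hunk above can be modeled in plain userspace C. PAGE_SHIFT, PAGE_ALLOC_COSTLY_ORDER and the VMA bounds below are illustrative values, not taken from any particular config:)

```c
#define PAGE_SHIFT 12
#define PAGE_SIZE (1UL << PAGE_SHIFT)
#define PAGE_ALLOC_COSTLY_ORDER 3
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

/* Model of the fallback walk: try the preferred order, then clamp to
 * PAGE_ALLOC_COSTLY_ORDER, then give up (order 0 terminates the list). */
static int pick_order(unsigned long vm_start, unsigned long vm_end,
		      unsigned long address, int prefer)
{
	int orders[] = {
		prefer,
		prefer > PAGE_ALLOC_COSTLY_ORDER ? PAGE_ALLOC_COSTLY_ORDER : 0,
		0,
	};
	int i;

	for (i = 0; orders[i]; i++) {
		unsigned long addr = ALIGN_DOWN(address, PAGE_SIZE << orders[i]);

		/* The whole candidate folio must sit inside the VMA. */
		if (addr >= vm_start && addr + (PAGE_SIZE << orders[i]) <= vm_end)
			return orders[i];
	}
	return 0;	/* fall back to a single page */
}
```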
>>>> +#else
>>>> +static inline int alloc_anon_folio(struct vm_fault *vmf, struct folio **folio)
>>>
>>> Drop "inline" (it doesn't do anything in .c).
>>
>> There are 38 instances of inline in memory.c alone, so it looks like a
>> well-used convention, even if the compiler may choose to ignore it. Perhaps
>> you can educate me: what's the benefit of dropping it?
>
> I'll let Willy and Andrew educate both of us :)
>
> +Matthew Wilcox +Andrew Morton please. Thank you.
>
>>> The rest looks good to me.
>>
>> Great - just in case it wasn't obvious, I decided not to overwrite
>> vmf->address with the aligned version, as you suggested,
>
> Yes, I've noticed. Not overwriting has its own merits for sure.
>
>> for 2 reasons: 1) address is const
>> in the struct, so I would have had to change that; 2) there is a uffd path
>> that can be taken after the vmf->address fixup would have occurred, and that
>> path consumes the member, so it would have had to be un-fixed-up, making it
>> messier than the approach I opted for.
>>
>> Thanks for the quick review as always!