From: Mike Kravetz <mike.kravetz@oracle.com>
To: Matthew Wilcox <willy@infradead.org>, Jann Horn <jannh@google.com>
Cc: Linux-MM <linux-mm@kvack.org>,
kernel list <linux-kernel@vger.kernel.org>,
Youquan Song <youquan.song@intel.com>,
Andrea Arcangeli <aarcange@redhat.com>, Jan Kara <jack@suse.cz>,
John Hubbard <jhubbard@nvidia.com>,
"Kirill A. Shutemov" <kirill@shutemov.name>
Subject: Re: page refcount race between prep_compound_gigantic_page() and __page_cache_add_speculative()?
Date: Tue, 15 Jun 2021 11:27:14 -0700
Message-ID: <41f698d6-a099-105d-e922-170fbb3e1798@oracle.com>
In-Reply-To: <YMifvD723USsnWRH@casper.infradead.org>
On 6/15/21 5:40 AM, Matthew Wilcox wrote:
> On Tue, Jun 15, 2021 at 01:03:53PM +0200, Jann Horn wrote:
>> The messier path, as the original commit describes, is "gigantic" page
>> allocation. In that case, we'll go through the following path (if we
>> ignore CMA):
>>
>> alloc_fresh_huge_page():
>> alloc_gigantic_page()
>> alloc_contig_pages()
>> __alloc_contig_pages()
>> alloc_contig_range()
>> isolate_freepages_range()
>> split_map_pages()
>> post_alloc_hook() [FOR EVERY PAGE]
>> set_page_refcounted()
>> set_page_count(page, 1)
>> prep_compound_gigantic_page()
>> set_page_count(p, 0) [FOR EVERY TAIL PAGE]
>>
>> so all the tail pages are initially allocated with refcount 1 by the
>> page allocator, and then we overwrite those refcounts with zeroes.
>>
>>
>> Luckily, the only non-__init codepath that can get here is
>> __nr_hugepages_store_common(), which is only invoked from privileged
>> writes to sysfs/sysctls.
Thanks for spotting this, Jann!
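To make the window concrete, here is a tiny userspace model of it
(illustrative only, not kernel code; the helper names below are made up).
One thread blindly stores 0 to a "refcount" the way the
prep_compound_gigantic_page() loop does set_page_count(p, 0) for each tail
page, while another thread does an "increment unless zero" the way
__page_cache_add_speculative()/GUP-fast do:

/*
 * Minimal userspace model of the race -- NOT kernel code.  Build with
 * something like: gcc -O2 -pthread race-model.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int refcount = 1;         /* page allocator left the tail page at 1 */

/* Models __page_cache_add_speculative()/GUP-fast: take a reference only
 * if the count is not already zero. */
static int get_ref_unless_zero(void)
{
        int old = atomic_load(&refcount);

        while (old != 0) {
                if (atomic_compare_exchange_weak(&refcount, &old, old + 1))
                        return 1;       /* got a reference */
        }
        return 0;                       /* count was zero, backed off */
}

/* Models the prep_compound_gigantic_page() loop: blindly reset the count. */
static void *prep_tail_page(void *arg)
{
        atomic_store(&refcount, 0);
        return NULL;
}

int main(void)
{
        pthread_t t;
        int got;

        pthread_create(&t, NULL, prep_tail_page, NULL);
        got = get_ref_unless_zero();
        pthread_join(t, NULL);

        /*
         * If get_ref_unless_zero() won the race just before the store, we
         * now "hold" a reference that the refcount no longer reflects.
         */
        printf("speculative get: %d, final refcount: %d\n",
               got, atomic_load(&refcount));
        return 0;
}

Depending on scheduling, this can print "speculative get: 1, final
refcount: 0", i.e. a reference the count no longer accounts for -- which
is exactly the window described above.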
> Argh. What if we passed __GFP_COMP into alloc_contig_pages()?
> The current callers of alloc_contig_range() do not pass __GFP_COMP,
> so it's no behaviour change for them, and __GFP_COMP implies this
> kind of behaviour. I think that would imply _not_ calling
> split_map_pages(), which implies not calling post_alloc_hook(),
> which means we probably need to do a lot of the parts of
> post_alloc_hook() in alloc_gigantic_page(). Yuck.
That might work. We would need to do something 'like' split_map_pages
to split the compound free pages in the allocated range, and then stitch
the resulting pages together into one big compound page. We 'should' be
able to call post_alloc_hook on that one big compound page. Of course,
that is all theory without digging into the details.
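Very roughly, and purely as a hypothetical sketch (none of this exists
today; it assumes alloc_contig_pages() learned to honor __GFP_COMP and
hand back an already-prepared compound page), the hugetlb side could then
shrink to something like:

/* Hypothetical sketch only -- assumes an alloc_contig_pages() that
 * honors __GFP_COMP; this is not existing kernel code. */
static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
                                        int nid, nodemask_t *nodemask)
{
        unsigned long nr_pages = pages_per_huge_page(h);

        /*
         * The range would come back already stitched into one compound
         * page with the tail refcounts at zero, so the set_page_count()
         * loop in prep_compound_gigantic_page() (and this race) would
         * go away.
         */
        return alloc_contig_pages(nr_pages, gfp_mask | __GFP_COMP,
                                  nid, nodemask);
}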
Note that in the general case alloc_contig_range/alloc_contig_pages can
be called to request a non-power-of-two number of pages. In such cases,
__GFP_COMP would make little sense.
--
Mike Kravetz