From: Yin Fengwei <fengwei.yin@intel.com>
To: David Hildenbrand <david@redhat.com>, Mateusz Guzik <mjguzik@gmail.com>
Cc: kernel test robot <oliver.sang@intel.com>,
Peter Xu <peterx@redhat.com>, <oe-lkp@lists.linux.dev>,
<lkp@intel.com>, <linux-kernel@vger.kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Huacai Chen <chenhuacai@kernel.org>,
Jason Gunthorpe <jgg@nvidia.com>,
Matthew Wilcox <willy@infradead.org>,
Nathan Chancellor <nathan@kernel.org>,
Ryan Roberts <ryan.roberts@arm.com>,
WANG Xuerui <kernel@xen0n.name>, <linux-mm@kvack.org>,
<ying.huang@intel.com>, <feng.tang@intel.com>
Subject: Re: [linus:master] [mm] c0bff412e6: stress-ng.clone.ops_per_sec -2.9% regression
Date: Mon, 12 Aug 2024 12:43:08 +0800
Message-ID: <c7e0d029-0a64-4b27-bd62-cf9a3577d7ff@intel.com>
In-Reply-To: <5c0979a2-9a56-4284-82d2-42da62bda4a5@redhat.com>
Hi David,
On 8/1/24 09:44, David Hildenbrand wrote:
> On 01.08.24 15:37, Mateusz Guzik wrote:
>> On Thu, Aug 1, 2024 at 3:34 PM David Hildenbrand <david@redhat.com>
>> wrote:
>>>
>>> On 01.08.24 15:30, Mateusz Guzik wrote:
>>>> On Thu, Aug 01, 2024 at 08:49:27AM +0200, David Hildenbrand wrote:
>>>>> Yes indeed. fork() can be extremely sensitive to each added
>>>>> instruction.
>>>>>
>>>>> I even pointed out to Peter why I didn't add the PageHuge check in
>>>>> there
>>>>> originally [1].
>>>>>
>>>>> "Well, and I didn't want to have runtime-hugetlb checks in
>>>>> PageAnonExclusive code called on certainly-not-hugetlb code paths."
>>>>>
>>>>>
>>>>> We now have to do a page_folio(page) and then test for hugetlb.
>>>>>
>>>>> return folio_test_hugetlb(page_folio(page));
>>>>>
>>>>> Nowadays, folio_test_hugetlb() will be faster than it was at the
>>>>> time of c0bff412e6, so maybe at least part of the overhead is gone.
>>>>>
>>>>
>>>> I'll note page_folio expands to a call to _compound_head.
>>>>
>>>> While _compound_head is declared inline, it ends up being big enough
>>>> that the compiler decides to emit a real function instead, and real
>>>> function calls are not particularly cheap.
>>>>
>>>> I had a brief look with a profiler myself, and for single-threaded
>>>> usage the function ranks quite high even though it returns via the
>>>> first branch -- that is to say, there is definitely performance lost
>>>> to having a function call instead of an inlined branch.
>>>>
>>>> The routine is de-inlined because of a call to page_fixed_fake_head(),
>>>> which is itself annotated __always_inline.
>>>>
>>>> This is of course patchable with minor shoveling.
>>>>
>>>> I did not go for it because stress-ng results were too unstable for me
>>>> to confidently state win/loss.
>>>>
>>>> But should you want to whack the regression, this is what I would look
>>>> into.
>>>>
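For reference, a minimal userspace sketch of the encoding _compound_head
decodes: bit 0 of ->compound_head set means "tail page, and the rest of
the word points at the head". toy_page and toy_compound_head are made-up
names for illustration; the real helper additionally routes through
page_fixed_fake_head(), which is what drags its size up.

#include <assert.h>
#include <stdio.h>

/* Toy model of the compound_head encoding (simplified, not kernel code). */
struct toy_page {
	unsigned long compound_head;
};

/* Fast path is one load, one test, one subtract -- cheap when inlined. */
static inline struct toy_page *toy_compound_head(struct toy_page *page)
{
	unsigned long head = page->compound_head;

	if (head & 1)
		return (struct toy_page *)(head - 1); /* tail: strip bit 0 */
	return page; /* head page; the kernel calls page_fixed_fake_head() */
}

int main(void)
{
	struct toy_page head = { 0 };
	struct toy_page tail = { (unsigned long)&head | 1 };

	assert(toy_compound_head(&head) == &head); /* head maps to itself */
	assert(toy_compound_head(&tail) == &head); /* tail maps to its head */
	printf("ok\n");
	return 0;
}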
>>>
>>> This might improve it, at least for small folios I guess:
Do you want us to test this change? Or do you have further optimization
work ongoing? Thanks.
Regards
Yin, Fengwei
>>>
>>> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
>>> index 5769fe6e4950..7796ae116018 100644
>>> --- a/include/linux/page-flags.h
>>> +++ b/include/linux/page-flags.h
>>> @@ -1086,7 +1086,7 @@ PAGE_TYPE_OPS(Zsmalloc, zsmalloc, zsmalloc)
>>> */
>>> static inline bool PageHuge(const struct page *page)
>>> {
>>> - return folio_test_hugetlb(page_folio(page));
>>> + return PageCompound(page) && folio_test_hugetlb(page_folio(page));
>>> }
>>>
>>> /*
>>>
>>>
>>> We would avoid the function call for small folios.
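For reference, PageCompound() itself stays inline and cheap -- roughly
two bit tests, along these lines (a sketch, not verbatim kernel code):

/* A page is compound if it is a head page (PG_head set) or a tail
 * page (bit 0 of ->compound_head set); both tests inline to a couple
 * of instructions, so the small-folio case never leaves the caller. */
static inline bool PageCompound(const struct page *page)
{
	return test_bit(PG_head, &page->flags) ||
	       READ_ONCE(page->compound_head) & 1;
}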
>>>
>>
>> Why not massage _compound_head back into an inlineable form instead?
>> For all I know you may even register a small win in total.
>
> Agreed; likely it will increase code size a bit, which is why the
> compiler decides not to inline. We could force it with __always_inline.
>
> Finding ways to shrink page_fixed_fake_head() might be even better.
>
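For completeness, the forced-inline variant discussed above would be a
one-line change along these lines (hypothetical and untested, assuming
_compound_head is currently declared plain static inline):

--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@
-static inline unsigned long _compound_head(const struct page *page)
+static __always_inline unsigned long _compound_head(const struct page *page)
 {
 	unsigned long head = READ_ONCE(page->compound_head);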