From: "David Hildenbrand (Red Hat)" <david@kernel.org>
To: Sid Kumar <sidhartha.kumar@oracle.com>,
	kernel test robot <oliver.sang@intel.com>,
	Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: oe-lkp@lists.linux.dev, lkp@intel.com,
	Andrew Morton <akpm@linux-foundation.org>,
	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	Barry Song <baohua@kernel.org>, Dev Jain <dev.jain@arm.com>,
	Lance Yang <lance.yang@linux.dev>,
	Liam Howlett <liam.howlett@oracle.com>,
	Ryan Roberts <ryan.roberts@arm.com>, Zi Yan <ziy@nvidia.com>,
	linux-mm@kvack.org
Subject: Re: [linux-next:master] [mm] f66e2727dd: stress-ng.rawsock.ops_per_sec 46.9% regression
Date: Mon, 1 Dec 2025 22:13:17 +0100	[thread overview]
Message-ID: <e031bce7-13d4-4404-a714-0d0fc87dc96a@kernel.org> (raw)
In-Reply-To: <e32f61a9-0461-444d-b008-bb09a3d85510@oracle.com>

On 12/1/25 21:56, Sid Kumar wrote:
> 
> On 11/26/25 3:49 AM, David Hildenbrand (Red Hat) wrote:
>> On 11/25/25 15:46, kernel test robot wrote:
>>>
>>>
>>> Hello,
>>>
>>> kernel test robot noticed a 46.9% regression of
>>> stress-ng.rawsock.ops_per_sec on:
>>>
>>>
>>> commit: f66e2727ddfcbbe3dbb459e809824f721a914464 ("mm: huge_memory:
>>> use folio_can_map_prot_numa() for pmd folio")
>>> https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
>>
>> Unexpected, but maybe simply a symptom of doing the right thing?
>>
>> "which skips unsuitable folios, i.e. zone device, shared folios (KSM,
>> CoW), non-movable DMA pinned, dirty file folios, and folios that already
>> have the expected node affinity."
>>
>> I suspect skipping shared folios or dirty file folios might make the
>> difference. The benchmark results would be misleading in that case, as
>> we shouldn't have migrated these pages in the first place.
> 
> 
> Reproducing the benchmark and adding prints to show which condition
> triggers the early return shows that:
> 
>       /* Also skip shared copy-on-write folios */
>       if (is_cow_mapping(vma->vm_flags) &&
>           folio_maybe_mapped_shared(folio)) {
>           printk("false at is_Cow_mapping\n");
>           return false;
>       }
> 
> virtme-ng% dmesg | grep is_Cow_mapping | wc -l
> 25302
> 
> is the condition that now triggers and leads to the regression.

Okay, as I thought, this is rather a case of "doing the right thing". At
least we're now doing the same thing we do during PTE faults :)
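
In code terms, the gate both paths now share boils down to something like
this (a simplified sketch based on the snippet quoted above; the helper
name is mine, not actual kernel source):

/*
 * Sketch of the shared-CoW gate on the NUMA-hinting path; simplified
 * from the check quoted above, not verbatim kernel code.
 */
static bool skip_prot_numa(struct vm_area_struct *vma, struct folio *folio)
{
	/*
	 * Never trap NUMA hinting faults on copy-on-write folios that are
	 * shared between processes: migrating them can leave every mapper
	 * stuck in TASK_UNINTERRUPTIBLE waiting on the migration (see the
	 * commit below).
	 */
	if (is_cow_mapping(vma->vm_flags) && folio_maybe_mapped_shared(folio))
		return true;

	return false;
}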

This check dates back to:

commit 859d4adc3415a64ccb8b0c50dc4e3a888dcb5805
Author: Henry Willard <henry.willard@oracle.com>
Date:   Wed Jan 31 16:21:07 2018 -0800

     mm: numa: do not trap faults on shared data section pages.
     
     Workloads consisting of a large number of processes running the same
     program with a very large shared data segment may experience performance
     problems when numa balancing attempts to migrate the shared cow pages.
     This manifests itself with many processes or tasks in
     TASK_UNINTERRUPTIBLE state waiting for the shared pages to be migrated.
     
     The program listed below simulates the conditions with these results
     when run with 288 processes on a 144 core/8 socket machine.
     
     Average throughput      Average throughput     Average throughput
     with numa_balancing=0   with numa_balancing=1  with numa_balancing=1
                             without the patch      with the patch
     ---------------------   ---------------------  ---------------------
     2118782                 2021534                2107979
     
     Complex production environments show less variability and fewer poorly
     performing outliers accompanied with a smaller number of processes
     waiting on NUMA page migration with this patch applied.  In some cases,
     %iowait drops from 16%-26% to 0.
     

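The test program itself is trimmed from the quote above. Purely to
illustrate the scenario it describes (many processes read-sharing one
large CoW data segment), a hypothetical stand-in could look like the
following; this is not the program from the commit:

/*
 * Hypothetical stand-in only: the actual program referenced in the
 * commit message is not quoted here. This merely sketches the setup,
 * with many forked processes repeatedly reading one large CoW-shared
 * data segment, so NUMA balancing sees plenty of shared pages it
 * might try to migrate.
 */
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

#define SEG_SIZE (512UL << 20)	/* 512 MiB "shared data segment" */

static char segment[SEG_SIZE];	/* BSS, CoW-shared across fork() */

int main(void)
{
	long nproc = sysconf(_SC_NPROCESSORS_ONLN);
	volatile long sum = 0;

	memset(segment, 1, sizeof(segment));	/* fault everything in once */

	for (long i = 0; i < nproc; i++) {
		if (fork() == 0) {
			/* Read-only access keeps the pages CoW-shared. */
			for (int pass = 0; pass < 100; pass++)
				for (size_t off = 0; off < sizeof(segment); off += 4096)
					sum += segment[off];
			_exit(0);
		}
	}
	while (wait(NULL) > 0)
		;
	return 0;
}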

I think the reproducer actually wouldn't care about anonymous folios, but
I'm not sure whether that makes a difference for the benchmark here.

-- 
Cheers

David

