public inbox for oe-lkp@lists.linux.dev
From: kernel test robot <oliver.sang@intel.com>
To: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: <oe-lkp@lists.linux.dev>, <lkp@intel.com>,
	<linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Jason Gunthorpe <jgg@nvidia.com>,
	Alexander Gordeev <agordeev@linux.ibm.com>,
	Al Viro <viro@zeniv.linux.org.uk>,
	Andreas Larsson <andreas@gaisler.com>,
	"Andrey Konovalov" <andreyknvl@gmail.com>,
	Arnd Bergmann <arnd@arndb.de>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	Baoquan He <bhe@redhat.com>,
	"Chatre, Reinette" <reinette.chatre@intel.com>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	Christian Brauner <brauner@kernel.org>,
	"Dan Williams" <dan.j.williams@intel.com>,
	Dave Jiang <dave.jiang@intel.com>,
	"Dave Martin" <dave.martin@arm.com>,
	Dave Young <dyoung@redhat.com>,
	"David Hildenbrand" <david@redhat.com>,
	"David S. Miller" <davem@davemloft.net>,
	Dmitriy Vyukov <dvyukov@google.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Guo Ren <guoren@kernel.org>, Heiko Carstens <hca@linux.ibm.com>,
	Hugh Dickins <hughd@google.com>,
	James Morse <james.morse@arm.com>, Jan Kara <jack@suse.cz>,
	Jann Horn <jannh@google.com>, Jonathan Corbet <corbet@lwn.net>,
	Kevin Tian <kevin.tian@intel.com>,
	Konstantin Komarov <almaz.alexandrovich@paragon-software.com>,
	Liam Howlett <liam.howlett@oracle.com>,
	"Luck, Tony" <tony.luck@intel.com>,
	Matthew Wilcox <willy@infradead.org>,
	Michal Hocko <mhocko@suse.com>, Mike Rapoport <rppt@kernel.org>,
	Muchun Song <muchun.song@linux.dev>,
	Nicolas Pitre <nico@fluxnic.net>,
	Oscar Salvador <osalvador@suse.de>,
	Pedro Falcato <pfalcato@suse.de>,
	Robin Murphy <robin.murphy@arm.com>,
	Sumanth Korikkar <sumanthk@linux.ibm.com>,
	Suren Baghdasaryan <surenb@google.com>,
	"Sven Schnelle" <svens@linux.ibm.com>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	"Uladzislau Rezki (Sony)" <urezki@gmail.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Vishal Verma <vishal.l.verma@intel.com>,
	Vivek Goyal <vgoyal@redhat.com>, Vlastimil Babka <vbabka@suse.cz>,
	"Will Deacon" <will@kernel.org>, <oliver.sang@intel.com>
Subject: [linus:master] [mm]  ab04945f91:  stress-ng.fd-abuse.ops_per_sec 3.4% improvement
Date: Fri, 19 Dec 2025 14:21:49 +0800	[thread overview]
Message-ID: <202512181616.16b76cde-lkp@intel.com> (raw)



Hello,

kernel test robot noticed a 3.4% improvement of stress-ng.fd-abuse.ops_per_sec on:


commit: ab04945f91bcad1668af57bbb575771e794aea8d ("mm: update mem char driver to use mmap_prepare")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master


testcase: stress-ng
config: x86_64-rhel-9.4
compiler: gcc-14
test machine: 256 threads 4 sockets INTEL(R) XEON(R) PLATINUM 8592+ (Emerald Rapids) with 256G memory
parameters:

	nr_threads: 100%
	testtime: 60s
	test: fd-abuse
	cpufreq_governor: performance
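
Outside the LKP harness, the parameters above correspond roughly to a direct stress-ng invocation along these lines. This is a sketch, not the exact LKP job: it assumes a stress-ng build that ships the fd-abuse stressor, and `--fd-abuse 0` starts one worker per online CPU, matching nr_threads: 100%.

```shell
# Hedged sketch of an equivalent manual run (not the exact LKP job).
# Assumes a stress-ng version that includes the fd-abuse stressor.
command -v stress-ng >/dev/null 2>&1 || { echo "stress-ng not installed" >&2; exit 1; }

# One fd-abuse worker per CPU (nr_threads: 100%), 60 s run (testtime: 60s);
# --metrics-brief prints the bogo-ops rates behind the ops_per_sec figure.
stress-ng --fd-abuse 0 --timeout 60s --metrics-brief
```

The cpufreq_governor: performance setting would additionally need the governor switched on the host (e.g. via cpupower) before the run for a like-for-like comparison.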



Details are as below:
-------------------------------------------------------------------------------------------------->


The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20251218/202512181616.16b76cde-lkp@intel.com

=========================================================================================
compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
  gcc-14/performance/x86_64-rhel-9.4/100%/debian-13-x86_64-20250902.cgz/lkp-emr-2sp1/fd-abuse/stress-ng/60s

commit: 
  89646d9c74 ("mm: add shmem_zero_setup_desc()")
  ab04945f91 ("mm: update mem char driver to use mmap_prepare")

89646d9c748c0902 ab04945f91bcad1668af57bbb57 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
     19.28            +2.8%      19.82        turbostat.RAMWatt
   2438603 ±  3%     -13.5%    2109471 ±  7%  numa-numastat.node1.local_node
   2517328 ±  4%     -11.9%    2217162 ±  5%  numa-numastat.node1.numa_hit
     72858 ±  6%     +91.5%     139519 ±  3%  proc-vmstat.nr_slab_reclaimable
    339884          +107.0%     703398        proc-vmstat.nr_slab_unreclaimable
  20261925 ±  2%     -25.3%   15142663        proc-vmstat.pgalloc_normal
  18968240 ±  2%     -31.6%   12978914 ±  2%  proc-vmstat.pgfree
 3.706e+09            +3.4%  3.833e+09        stress-ng.fd-abuse.ops
  61777966            +3.4%   63895149        stress-ng.fd-abuse.ops_per_sec
    285.34            +4.4%     297.95        stress-ng.time.user_time
     16701            +2.7%      17144        stress-ng.time.voluntary_context_switches
   8804549           +61.4%   14208934        meminfo.Committed_AS
    291574 ±  6%     +91.1%     557114 ±  3%  meminfo.KReclaimable
  12991379 ±  3%     +16.6%   15149678 ±  2%  meminfo.Memused
    291574 ±  6%     +91.1%     557114 ±  3%  meminfo.SReclaimable
   1353499          +107.7%    2811130        meminfo.SUnreclaim
   1645073          +104.7%    3368244        meminfo.Slab
  13335448 ±  3%     +15.9%   15450436 ±  2%  meminfo.max_used_kB
     17516 ± 34%     +97.4%      34572 ±  6%  numa-vmstat.node0.nr_slab_reclaimable
     88462 ±  6%    +102.5%     179171 ±  2%  numa-vmstat.node0.nr_slab_unreclaimable
     66.79 ± 54%    +106.9%     138.19 ± 33%  numa-vmstat.node1.nr_anon_transparent_hugepages
     16982 ± 18%     +94.3%      32997 ± 19%  numa-vmstat.node1.nr_slab_reclaimable
     81860 ±  3%    +113.2%     174536 ±  2%  numa-vmstat.node1.nr_slab_unreclaimable
   2521117 ±  4%     -12.0%    2218931 ±  5%  numa-vmstat.node1.numa_hit
   2442392 ±  3%     -13.6%    2111240 ±  7%  numa-vmstat.node1.numa_local
     19931 ± 28%     +90.9%      38056 ±  7%  numa-vmstat.node2.nr_slab_reclaimable
     86088 ±  5%     +99.7%     171897        numa-vmstat.node2.nr_slab_unreclaimable
     18146 ± 12%     +82.8%      33181 ± 13%  numa-vmstat.node3.nr_slab_reclaimable
     83060 ±  4%    +111.5%     175660 ±  2%  numa-vmstat.node3.nr_slab_unreclaimable
 2.679e+10            +1.5%  2.718e+10        perf-stat.i.branch-instructions
     43.06            +0.9       44.01        perf-stat.i.cache-miss-rate%
  88883420            +5.0%   93333404        perf-stat.i.cache-misses
      6.01            -1.2%       5.94        perf-stat.i.cpi
      8654            -5.3%       8192        perf-stat.i.cycles-between-cache-misses
 1.211e+11            +1.6%   1.23e+11        perf-stat.i.instructions
      0.73            +3.5%       0.76        perf-stat.overall.MPKI
     42.85            +0.9       43.75        perf-stat.overall.cache-miss-rate%
      6.03            -1.3%       5.95        perf-stat.overall.cpi
      8266            -4.7%       7879        perf-stat.overall.cycles-between-cache-misses
      0.17            +1.3%       0.17        perf-stat.overall.ipc
 2.635e+10            +1.4%  2.673e+10        perf-stat.ps.branch-instructions
  86986582            +5.1%   91435274        perf-stat.ps.cache-misses
  2.03e+08            +2.9%   2.09e+08        perf-stat.ps.cache-references
 1.191e+11            +1.6%   1.21e+11        perf-stat.ps.instructions
 7.248e+12            +1.6%  7.363e+12        perf-stat.total.instructions
     70689 ± 34%     +96.6%     138955 ±  6%  numa-meminfo.node0.KReclaimable
     70689 ± 34%     +96.6%     138955 ±  6%  numa-meminfo.node0.SReclaimable
    356946 ±  6%    +101.3%     718428 ±  3%  numa-meminfo.node0.SUnreclaim
    427635 ± 10%    +100.5%     857383 ±  2%  numa-meminfo.node0.Slab
    139096 ± 54%    +104.6%     284523 ± 33%  numa-meminfo.node1.AnonHugePages
     68643 ± 18%     +93.3%     132663 ± 19%  numa-meminfo.node1.KReclaimable
   2076019 ±  6%     +57.0%    3258377 ± 43%  numa-meminfo.node1.MemUsed
     68643 ± 18%     +93.3%     132663 ± 19%  numa-meminfo.node1.SReclaimable
    330581 ±  3%    +111.7%     699874 ±  2%  numa-meminfo.node1.SUnreclaim
    399224 ±  3%    +108.5%     832538 ±  4%  numa-meminfo.node1.Slab
     80254 ± 27%     +90.6%     152941 ±  7%  numa-meminfo.node2.KReclaimable
     80254 ± 27%     +90.6%     152941 ±  7%  numa-meminfo.node2.SReclaimable
    347619 ±  4%     +98.2%     689115        numa-meminfo.node2.SUnreclaim
    427873 ±  7%     +96.8%     842056        numa-meminfo.node2.Slab
     73141 ± 11%     +82.0%     133137 ± 13%  numa-meminfo.node3.KReclaimable
   3108062 ± 11%     +41.5%    4398558 ± 26%  numa-meminfo.node3.MemUsed
     73141 ± 11%     +82.0%     133137 ± 13%  numa-meminfo.node3.SReclaimable
    335184 ±  5%    +110.2%     704492 ±  2%  numa-meminfo.node3.SUnreclaim
    408326 ±  5%    +105.1%     837630 ±  3%  numa-meminfo.node3.Slab
      5.23            -1.9        3.31        perf-profile.calltrace.cycles-pp.stress_fd_lseek
     46.32            -1.3       45.05        perf-profile.calltrace.cycles-pp.inode_sb_list_add.new_inode.__shmem_get_inode.__shmem_file_setup.shmem_zero_setup
     46.38            -1.3       45.12        perf-profile.calltrace.cycles-pp.task_work_run.exit_to_user_mode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
     46.32            -1.3       45.06        perf-profile.calltrace.cycles-pp.dput.__fput.task_work_run.exit_to_user_mode_loop.do_syscall_64
     46.34            -1.3       45.08        perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_user_mode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
     46.44            -1.3       45.18        perf-profile.calltrace.cycles-pp.new_inode.__shmem_get_inode.__shmem_file_setup.shmem_zero_setup.__mmap_new_vma
     46.54            -1.3       45.28        perf-profile.calltrace.cycles-pp.__shmem_get_inode.__shmem_file_setup.shmem_zero_setup.__mmap_new_vma.__mmap_region
     46.30            -1.3       45.04        perf-profile.calltrace.cycles-pp.__dentry_kill.dput.__fput.task_work_run.exit_to_user_mode_loop
     46.14            -1.3       44.88        perf-profile.calltrace.cycles-pp.evict.__dentry_kill.dput.__fput.task_work_run
     45.84            -1.2       44.60        perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.evict.__dentry_kill.dput
     45.99            -1.2       44.74        perf-profile.calltrace.cycles-pp._raw_spin_lock.evict.__dentry_kill.dput.__fput
      0.67            +0.0        0.70        perf-profile.calltrace.cycles-pp.llseek
      2.40            +0.0        2.44        perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
      2.38            +0.0        2.43        perf-profile.calltrace.cycles-pp.__mmap_region.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
      2.54            +0.1        2.60        perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
      0.75 ±  2%      +0.1        0.81        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
      2.58            +0.1        2.64        perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
      0.77 ±  2%      +0.1        0.83        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
      2.56            +0.1        2.62        perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
      1.02            +0.1        1.09        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.stress_fd_lseek
      1.06            +0.1        1.14        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.stress_fd_lseek
     44.33            +0.8       45.15        perf-profile.calltrace.cycles-pp.exit_to_user_mode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
     46.28            +0.8       47.11        perf-profile.calltrace.cycles-pp._raw_spin_lock.inode_sb_list_add.new_inode.__shmem_get_inode.__shmem_file_setup
     46.20            +0.8       47.04        perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.inode_sb_list_add.new_inode.__shmem_get_inode
     44.63            +0.8       45.47        perf-profile.calltrace.cycles-pp.shmem_zero_setup.__mmap_new_vma.__mmap_region.do_mmap.vm_mmap_pgoff
     44.62            +0.8       45.47        perf-profile.calltrace.cycles-pp.__shmem_file_setup.shmem_zero_setup.__mmap_new_vma.__mmap_region.do_mmap
     44.66            +0.8       45.51        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
     44.66            +0.8       45.51        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap
     44.69            +0.8       45.53        perf-profile.calltrace.cycles-pp.__munmap
     44.78            +0.9       45.63        perf-profile.calltrace.cycles-pp.__mmap_new_vma.__mmap_region.do_mmap.vm_mmap_pgoff.do_syscall_64
     44.98            +0.9       45.84        perf-profile.calltrace.cycles-pp.__mmap_region.do_mmap.vm_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
     45.11            +0.9       45.97        perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
     45.09            +0.9       45.96        perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
     47.74            +0.9       48.66        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
     47.74            +0.9       48.67        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__mmap
     47.80            +0.9       48.73        perf-profile.calltrace.cycles-pp.__mmap
      0.00            +2.1        2.11        perf-profile.calltrace.cycles-pp.inode_sb_list_add.new_inode.__shmem_get_inode.__shmem_file_setup.shmem_zero_setup_desc
      0.00            +2.1        2.12        perf-profile.calltrace.cycles-pp.new_inode.__shmem_get_inode.__shmem_file_setup.shmem_zero_setup_desc.__mmap_region
      0.00            +2.1        2.12        perf-profile.calltrace.cycles-pp.__shmem_get_inode.__shmem_file_setup.shmem_zero_setup_desc.__mmap_region.mmap_region
      0.00            +2.1        2.13        perf-profile.calltrace.cycles-pp.__shmem_file_setup.shmem_zero_setup_desc.__mmap_region.mmap_region.do_mmap
      0.00            +2.1        2.13        perf-profile.calltrace.cycles-pp.shmem_zero_setup_desc.__mmap_region.mmap_region.do_mmap.vm_mmap_pgoff
      5.24            -1.9        3.32        perf-profile.children.cycles-pp.stress_fd_lseek
     46.39            -1.3       45.13        perf-profile.children.cycles-pp.task_work_run
     46.73            -1.3       45.47        perf-profile.children.cycles-pp.shmem_zero_setup
     46.30            -1.3       45.04        perf-profile.children.cycles-pp.__dentry_kill
     46.48            -1.3       45.22        perf-profile.children.cycles-pp.exit_to_user_mode_loop
     46.36            -1.3       45.11        perf-profile.children.cycles-pp.dput
     46.14            -1.3       44.88        perf-profile.children.cycles-pp.evict
     46.44            -1.2       45.20        perf-profile.children.cycles-pp.__fput
     47.02            -1.2       45.78        perf-profile.children.cycles-pp.__mmap_new_vma
     46.96            -1.2       45.72        perf-profile.children.cycles-pp.__munmap
     92.11            -0.4       91.71        perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
     92.39            -0.4       91.99        perf-profile.children.cycles-pp._raw_spin_lock
     97.69            -0.1       97.58        perf-profile.children.cycles-pp.do_syscall_64
     97.82            -0.1       97.71        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
      0.08            -0.0        0.07        perf-profile.children.cycles-pp.__destroy_inode
      0.11            +0.0        0.12        perf-profile.children.cycles-pp.kthread
      0.11            +0.0        0.12        perf-profile.children.cycles-pp.ret_from_fork
      0.11            +0.0        0.12        perf-profile.children.cycles-pp.ret_from_fork_asm
      0.13            +0.0        0.14        perf-profile.children.cycles-pp.__fstat64
      0.17            +0.0        0.18        perf-profile.children.cycles-pp.__x64_sys_pselect6
      0.16            +0.0        0.17        perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
      0.08            +0.0        0.09        perf-profile.children.cycles-pp.generic_perform_write
      0.13            +0.0        0.14        perf-profile.children.cycles-pp.inet_listen
      0.13            +0.0        0.14        perf-profile.children.cycles-pp.mas_store_gfp
      0.08            +0.0        0.09        perf-profile.children.cycles-pp.mas_store_prealloc
      0.11 ±  3%      +0.0        0.12        perf-profile.children.cycles-pp.generic_file_write_iter
      0.14 ±  2%      +0.0        0.15        perf-profile.children.cycles-pp.mas_wr_node_store
      0.15            +0.0        0.16 ±  2%  perf-profile.children.cycles-pp.perf_event_mmap_event
      0.38            +0.0        0.39        perf-profile.children.cycles-pp.arch_exit_to_user_mode_prepare
      0.21            +0.0        0.22        perf-profile.children.cycles-pp.__x64_sys_fcntl
      0.08 ±  5%      +0.0        0.10 ±  4%  perf-profile.children.cycles-pp.do_iter_readv_writev
      0.10            +0.0        0.12 ±  4%  perf-profile.children.cycles-pp.run_ksoftirqd
      0.21            +0.0        0.23 ±  2%  perf-profile.children.cycles-pp.kmem_cache_alloc_noprof
      0.10            +0.0        0.12 ±  4%  perf-profile.children.cycles-pp.smpboot_thread_fn
      0.13            +0.0        0.15 ±  3%  perf-profile.children.cycles-pp.handle_softirqs
      0.12 ±  3%      +0.0        0.14 ±  2%  perf-profile.children.cycles-pp.rcu_core
      0.12            +0.0        0.14 ±  3%  perf-profile.children.cycles-pp.alloc_inode
      0.12            +0.0        0.14 ±  3%  perf-profile.children.cycles-pp.rcu_do_batch
      0.67            +0.0        0.69        perf-profile.children.cycles-pp.entry_SYSCALL_64
      0.46            +0.0        0.48        perf-profile.children.cycles-pp.do_vmi_munmap
      0.44            +0.0        0.46        perf-profile.children.cycles-pp.do_vmi_align_munmap
      0.48            +0.0        0.51        perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
      0.47            +0.0        0.50        perf-profile.children.cycles-pp.__vm_munmap
      0.47            +0.0        0.50        perf-profile.children.cycles-pp.__x64_sys_munmap
      0.05            +0.0        0.08        perf-profile.children.cycles-pp.inode_init_always_gfp
      0.31 ±  4%      +0.0        0.34 ±  4%  perf-profile.children.cycles-pp.do_filp_open
      0.31 ±  5%      +0.0        0.34 ±  4%  perf-profile.children.cycles-pp.path_openat
      0.37 ±  4%      +0.0        0.40 ±  3%  perf-profile.children.cycles-pp.__x64_sys_openat
      0.37 ±  4%      +0.0        0.40 ±  3%  perf-profile.children.cycles-pp.do_sys_openat2
      0.85            +0.0        0.89        perf-profile.children.cycles-pp.llseek
      2.40            +0.0        2.44        perf-profile.children.cycles-pp.mmap_region
      2.59            +0.1        2.64        perf-profile.children.cycles-pp.ksys_mmap_pgoff
     46.32            +0.8       47.16        perf-profile.children.cycles-pp.inode_sb_list_add
     46.44            +0.9       47.30        perf-profile.children.cycles-pp.new_inode
     46.54            +0.9       47.40        perf-profile.children.cycles-pp.__shmem_get_inode
     46.73            +0.9       47.60        perf-profile.children.cycles-pp.__shmem_file_setup
     47.37            +0.9       48.28        perf-profile.children.cycles-pp.__mmap_region
     47.64            +0.9       48.56        perf-profile.children.cycles-pp.do_mmap
     47.68            +0.9       48.60        perf-profile.children.cycles-pp.vm_mmap_pgoff
     47.83            +0.9       48.76        perf-profile.children.cycles-pp.__mmap
      0.00            +2.1        2.13        perf-profile.children.cycles-pp.shmem_zero_setup_desc
     91.71            -0.4       91.31        perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
      0.22            +0.0        0.23        perf-profile.self.cycles-pp.entry_SYSCALL_64
      0.30            +0.0        0.31        perf-profile.self.cycles-pp.do_syscall_64
      0.35            +0.0        0.36        perf-profile.self.cycles-pp.arch_exit_to_user_mode_prepare
      0.47            +0.0        0.50        perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
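
The 3.4% headline figure can be checked directly from the raw counters in the table above; this quick Python check (values transcribed from the stress-ng.fd-abuse rows) reproduces it:

```python
# Sanity-check the headline numbers; values copied from the table above.
base_ops_per_sec = 61_777_966   # 89646d9c74 ("mm: add shmem_zero_setup_desc()")
new_ops_per_sec = 63_895_149    # ab04945f91 ("mm: update mem char driver ...")

improvement = (new_ops_per_sec - base_ops_per_sec) / base_ops_per_sec * 100
print(f"ops_per_sec improvement: {improvement:.1f}%")  # -> 3.4%

# ops_per_sec also follows from total ops over the 60 s testtime
# (3.833e+09 ops / 60 s ~= 6.39e+07 ops/s, matching the row above).
print(f"derived ops/sec: {3.833e9 / 60:.3e}")
```
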




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


