From: "Doug Smythies" <dsmythies@telus.net>
To: "'Christian Loehle'" <christian.loehle@arm.com>,
"'ALOK TIWARI'" <alok.a.tiwari@oracle.com>
Cc: <gregkh@linuxfoundation.org>, <sashal@kernel.org>,
<linux-kernel@vger.kernel.org>, <stable@vger.kernel.org>,
<linux-pm@vger.kernel.org>, <rafael.j.wysocki@intel.com>,
<daniel.lezcano@linaro.org>,
"Doug Smythies" <dsmythies@telus.net>
Subject: RE: [report] Performance regressions introduced via "cpuidle: menu: Remove iowait influence" on 6.12.y
Date: Mon, 29 Dec 2025 08:17:39 -0800
Message-ID: <003101dc78de$a74ee2b0$f5eca810$@telus.net>
In-Reply-To: <2463b494-66dd-4f0b-9ce7-4f544a41ecbf@arm.com>
[-- Attachment #1: Type: text/plain, Size: 2908 bytes --]
On 2025.12.24 05:26 Christian Loehle wrote:
> On 12/24/25 12:18, ALOK TIWARI wrote:
>> On 12/3/2025 11:01 PM, ALOK TIWARI wrote:
>>> Hi,
>>>
>>> I’m reporting a sequential I/O performance regression of up to 6%,
>>> observed with vdbench on the 6.12.y kernel.
>>> While running performance benchmarks on the v6.12.60 kernel, the sequential I/O vdbench metrics show a 5-6% performance regression compared to v6.12.48.
>>>
>>> Bisect root cause commit
>>> ========================
>>> - commit b39b62075ab4 ("cpuidle: menu: Remove iowait influence")
>>>
>>> Things work fine again when the previously removed performance-multiplier code is added back.
>>>
>>> Test details
>>> ============
>>> The system is connected to a number of disks in a disk array, using multipathing and a directio configuration in the vdbench profile.
>>>
>>> wd=wd1,sd=sd*,rdpct=0,seekpct=sequential,xfersize=128k
>>> rd=128k64T,wd=wd1,iorate=max,elapsed=600,interval=1,warmup=300,threads=64
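
For reference, the two vdbench profile lines above decode roughly as follows. This is a best-effort annotated reading, not valid parmfile syntax; see the vdbench documentation for the authoritative parameter meanings.

```
# wd (workload definition)
wd=wd1,             # workload name
sd=sd*,             # apply to all storage definitions
rdpct=0,            # 0% reads, i.e. a pure write workload
seekpct=sequential, # sequential access pattern
xfersize=128k       # 128 KiB per transfer

# rd (run definition)
rd=128k64T,         # run name
wd=wd1,             # use workload wd1
iorate=max,         # run at the maximum achievable I/O rate
elapsed=600,        # 600 s measured run
interval=1,         # report statistics every second
warmup=300,         # exclude the first 300 s from the averages
threads=64          # 64 concurrent threads
```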
>>>
>>>
>>> Thanks,
>>> Alok
>>>
>>
>> Just a gentle ping in case this was missed.
>> Please let us know if we are missing anything or if there are additional things to consider.
>>
>
> Hi Alok,
> indeed it was missed, sorry!
> The cpuidle sysfs dumps pre- and post-test would be interesting, like so:
> cat /sys/devices/system/cpu/cpu*/cpuidle/state*/*
> Having both would be helpful so I can see what actually changed.
> Or a trace with cpu_idle events.
> Additionally, a latency distribution of the I/O requests would be helpful to relate
> them to the cpuidle wakeups.
>
> Thanks,
> Christian
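
The dumps and trace Christian asks for could be captured along these lines. This is a sketch: `snapshot` is a hypothetical helper name, and the trace alternative assumes trace-cmd is installed.

```shell
#!/bin/sh
# snapshot SYSROOT OUTFILE: dump every cpuidle per-state attribute,
# one "path: value" line per file, so two runs can be compared with diff(1).
# SYSROOT exists only so the function can be pointed at a copy of /sys.
snapshot() {
    root="${1:-/sys}"
    out="$2"
    for f in "$root"/devices/system/cpu/cpu*/cpuidle/state*/*; do
        [ -f "$f" ] || continue
        printf '%s: %s\n' "$f" "$(cat "$f")"
    done > "$out"
}

# Typical use around a benchmark run:
#   snapshot /sys before.txt
#   ... run the vdbench job ...
#   snapshot /sys after.txt
#   diff before.txt after.txt
#
# Trace alternative (requires trace-cmd):
#   trace-cmd record -e power:cpu_idle -o idle.dat <benchmark command>
```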
Hi All,
For what it's worth:
I have tried to recreate the issue with some of my high I/O-wait type tests.
I was not successful.
The differences in idle state usage are clearly visible, but have no noticeable effect on the performance numbers.
Workflow 1: My version of "critical-jobs", an attempt to do something similar to the non-free SPECjbb (which Artem sometimes reports on).
Sweep from 4000 to 4600 jobs per second, with 10 disk lookups per job
and "500" (arbitrary) units of work per lookup, over a 500 Gigabyte data file:
No consistent job latency differences were observed. There were some differences due to experimental error in managing disk caching.
The attached graphs detail the biggest differences in idle stats.
Workflow 2: Just do a sequential read of the 500 Gigabyte data file (CPU affinity forced):
No difference for the kernel 6.12 cases, but an improvement for kernel 6.19-rc2.
Graph attached (the occasionally poor teo read rates are unrelated to these idle investigations, or so I claim without proof).
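
For anyone wanting to approximate workflow 2, a forced-affinity sequential read might look like this. A sketch only: the CPU number and file path are placeholders, `seqread` is a made-up helper name, and taskset comes from util-linux.

```shell
#!/bin/sh
# seqread CPU FILE: read FILE sequentially in 128k chunks, pinned to CPU.
# taskset does the pinning; fall back to an unpinned read if it is absent.
seqread() {
    cpu="$1"
    file="$2"
    if command -v taskset >/dev/null 2>&1; then
        taskset -c "$cpu" dd if="$file" of=/dev/null bs=128k
    else
        dd if="$file" of=/dev/null bs=128k
    fi
}

# e.g. seqread 3 /mnt/data/testfile
```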
Processor: Intel(R) Core(TM) i5-10600K CPU @ 4.10GHz, 6 cores 12 CPUs.
HWP: Enabled.
Legend:
revert = kernel 6.12-rc1, 9852d85ec9d4
with = kernel 6.12-rc1 + 38f83090f515 (the patch of concern)
menu = kernel 6.19-rc2 menu gov
teo = kernel 6.19-rc2 teo gov
... Doug
[-- Attachment #2: 0_residency.png --]
[-- Type: image/png, Size: 48359 bytes --]
[-- Attachment #3: 2_above.png --]
[-- Type: image/png, Size: 86417 bytes --]
[-- Attachment #4: 2_usage.png --]
[-- Type: image/png, Size: 75270 bytes --]
[-- Attachment #5: read-rates.png --]
[-- Type: image/png, Size: 71242 bytes --]
Thread overview: 4+ messages
2025-12-03 17:31 [report] Performance regressions introduced via "cpuidle: menu: Remove iowait influence" on 6.12.y ALOK TIWARI
2025-12-24 12:18 ` ALOK TIWARI
2025-12-24 13:26 ` Christian Loehle
2025-12-29 16:17 ` Doug Smythies [this message]