* balance_dirty_pages() causes 40% IO PSI (full) with no drain benefit on 384 GB machine
@ 2026-03-17 22:53 Yunzhao Li
From: Yunzhao Li @ 2026-03-17 22:53 UTC (permalink / raw)
To: linux-mm; +Cc: Andrew Morton, Jan Kara, linux-fsdevel, Jesper Brouer
Hello,
On a 384 GB machine with NVMe storage (2x NVMe RAID0, dm-crypt,
XFS, kernel 6.12, AMD EPYC 9684X 96-Core), balance_dirty_pages()
throttles writers via io_schedule_timeout(), causing 26-40% IO PSI (full).
But the throttling doesn't actually drain dirty pages faster.
The flusher only submits ~578 MB/s of writeback regardless of
whether writers are throttled, and the NVMe device has ample
spare capacity (1,044 MB/s benchmarked).
I'd like to understand whether this is expected and what the
right approach is.
The setup
---------
dirty_background_ratio=10, dirty_ratio=20 (defaults)
dirtyable memory: ~77 GB
-> bg_thresh: 10% * 77 GB = 7.7 GB
-> freerun ceiling: (20%+10%)/2 * 77 GB = 11.6 GB
-> limit (hard): 20% * 77 GB = 15.5 GB
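For concreteness, the threshold arithmetic can be reproduced in a few
lines of shell (assuming the ~77 GiB dirtyable figure above; the kernel
derives the real values page-based, so these MiB numbers are only
approximate):

```shell
# Threshold arithmetic from above, in MiB (dirtyable memory assumed 77 GiB).
dirtyable_mib=$((77 * 1024))
bg_thresh_mib=$(( dirtyable_mib * 10 / 100 ))         # dirty_background_ratio=10
thresh_mib=$((    dirtyable_mib * 20 / 100 ))         # dirty_ratio=20
freerun_mib=$(( (bg_thresh_mib + thresh_mib) / 2 ))   # midpoint "freerun" gate
echo "bg_thresh=${bg_thresh_mib} MiB thresh=${thresh_mib} MiB freerun=${freerun_mib} MiB"
```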
Write generation: ~580 MB/s (HTTP cache miss writes)
Flusher drain rate: ~578 MB/s (device can do 1,044 MB/s;
the flusher can't feed it fast enough)
Below freerun, balance_dirty_pages() returns immediately.
Between freerun and limit, pos_ratio ramps from 2.0 down to 0
via a cubic polynomial, and tasks sleep proportionally in
io_schedule_timeout(). At limit, pos_ratio=0 and all writers
block (max 200 ms sleep).
Generation ≈ drain, so dirty settles at 10-14 GB — crossing
the freerun ceiling into the proportional throttle zone.
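A numeric sketch of that ramp (awk for floating point; this follows the
shape of the kernel's pos_ratio polynomial only - the real code is
fixed-point and adds per-device correction factors, and the GB values
here are the ones from this mail):

```shell
# pos_ratio over the freerun..limit window. setpoint is the midpoint;
# pos_ratio falls from 2.0 at freerun to 0 at limit.
pos_ratio() {
  awk -v freerun=11.6 -v limit=15.5 -v dirty="$1" 'BEGIN {
    setpoint = (freerun + limit) / 2
    x = (setpoint - dirty) / (limit - setpoint)
    printf "%.2f\n", 1 + x * x * x
  }'
}
pos_ratio 11.6   # entry into the throttle zone -> 2.00
pos_ratio 13.0   # mid-zone: mild throttling    -> 1.02
pos_ratio 15.5   # hard limit: writers blocked  -> 0.00
```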
The observation
---------------
                  throughput   IO PSI full
dirty 5-10 GB:    494 MB/s     1.4%
dirty >10 GB:     578 MB/s     26.2%
                  (dirty still accumulating at +2 MB/s)
Peak IO PSI full: 39.5%.
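For reference, the 26.2% figure is the avg10 field of the "full" line
in /proc/pressure/io; a minimal parse of a captured sample (the sample
values here are hypothetical, matching the figure above):

```shell
# Parse "full avg10=..." the way the numbers above were read.
# On a live box the line comes from /proc/pressure/io.
sample='full avg10=26.20 avg60=24.81 avg300=19.73 total=981273490'
avg10=${sample#*avg10=}   # strip everything through "avg10="
avg10=${avg10%% *}        # keep up to the next space
echo "IO PSI full avg10: ${avg10}%"
```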
The proportional throttle adds 26% IO PSI (full) but dirty
still grows. The flusher is already at its submission ceiling
and sleeping writers doesn't help it submit I/O faster. The
device is actually starved: writeback-in-flight drops from
6-8 MB (baseline) to 1.8 MB (during throttle), and NVMe QD
drops from 45 to 37. The device could drain more if fed
more, but the flusher can't feed it faster.
Meanwhile, memory is not scarce:
Dirty: 16 GB
Clean file LRU: 57 GB (instantly reclaimable)
Memory PSI: 1-2%
The dirty pages aren't causing memory pressure. 57 GB of clean
pages remain available for instant reclaim. The throttle is
protecting a resource that isn't scarce, at a cost of 40% IO
PSI (full).
Our workaround plan: dirty_background_ratio=5, dirty_ratio=40.
This raises freerun to ~17.3 GB, keeping dirty in freerun.
The flusher drains identically. It runs to bg_thresh either
way.
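A sketch of what the workaround changes (the sysctl names are real; the
77 GiB dirtyable figure is the estimate from above, so the resulting
freerun is approximate):

```shell
# New freerun ceiling with dirty_background_ratio=5, dirty_ratio=40,
# on the same assumed 77 GiB of dirtyable memory.
dirtyable_mib=$((77 * 1024))
new_freerun_mib=$(( dirtyable_mib * (5 + 40) / 2 / 100 ))
echo "new freerun ceiling: ${new_freerun_mib} MiB"   # ~17.3 GiB
# Applying it needs root:
echo 'sysctl -w vm.dirty_background_ratio=5 vm.dirty_ratio=40'
```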
Questions
---------
1. When should balance_dirty_pages() sleep writers? Currently
the criterion is "dirty > fraction of dirtyable memory."
This doesn't consider whether sleeping actually helps
drain dirty faster, or whether the remaining clean pages
are sufficient. Should the decision factor in flusher/
device saturation or available reclaimable memory?
2. Is tuning dirty_ratio to 30-40% the expected approach for
high-memory (>256 GB) systems? Documentation doesn't
cover this.
3. The freerun ceiling gates entry into the proportional
throttle path. Even moderate sleeping shows up as IO PSI
(io_schedule_timeout is accounted as IO stall). Dirty
never hits the hard limit in our case. It sits in the
proportional zone, but cumulative PSI from many tasks
sleeping short durations is already 26-40% (full). Should
the throttle path be skipped when sleeping cannot help
drain?
Thanks,
Yunzhao
* Re: balance_dirty_pages() causes 40% IO PSI (full) with no drain benefit on 384 GB machine
From: Jan Kara @ 2026-03-19 11:58 UTC (permalink / raw)
To: Yunzhao Li
Cc: linux-mm, Andrew Morton, Jan Kara, linux-fsdevel, Jesper Brouer,
Johannes Weiner, Suren Baghdasaryan
Hello,
[looks more like a question about IO PSI behavior so CCing relevant people]
On Tue 17-03-26 15:53:51, Yunzhao Li wrote:
> On a 384 GB machine with NVMe storage (2x NVMe RAID0, dm-crypt,
> XFS, kernel 6.12, AMD EPYC 9684X 96-Core), balance_dirty_pages()
> throttles writers via io_schedule_timeout(), causing 26-40% IO PSI (full).
> But the throttling doesn't actually drain dirty pages faster.
> The flusher only submits ~578 MB/s of writeback regardless of
> whether writers are throttled, and the NVMe device has ample
> spare capacity (1,044 MB/s benchmarked).
>
> I'd like to understand whether this is expected and what the
> right approach is.
>
> The setup
> ---------
>
> dirty_background_ratio=10, dirty_ratio=20 (defaults)
> dirtyable memory: ~77 GB
> -> bg_thresh: 10% * 77 GB = 7.7 GB
> -> freerun ceiling: (20%+10%)/2 * 77 GB = 11.6 GB
> -> limit (hard): 20% * 77 GB = 15.5 GB
>
> Write generation: ~580 MB/s (HTTP cache miss writes)
> Flusher drain rate: ~578 MB/s (device can do 1,044 MB/s;
> the flusher can't feed it fast enough)
>
> Below freerun, balance_dirty_pages() returns immediately.
> Between freerun and limit, pos_ratio ramps from 2.0 down to 0
> via a cubic polynomial, and tasks sleep proportionally in
> io_schedule_timeout(). At limit, pos_ratio=0 and all writers
> block (max 200 ms sleep).
>
> Generation ≈ drain, so dirty settles at 10-14 GB — crossing
> the freerun ceiling into the proportional throttle zone.
>
> The observation
> ---------------
>
>                   throughput   IO PSI full
> dirty 5-10 GB:    494 MB/s     1.4%
> dirty >10 GB:     578 MB/s     26.2%
>                   (dirty still accumulating at +2 MB/s)
>
> Peak IO PSI full: 39.5%.
>
> The proportional throttle adds 26% IO PSI (full) but dirty
> still grows. The flusher is already at its submission ceiling
> and sleeping writers doesn't help it submit I/O faster.
The throttling of the writers works as it should - the point is to not
allow writers to dirty more memory than the configured limit, and as
your data shows, that is indeed what the code successfully does. The
point of throttling is *not* to speed up writeback or anything like
what you describe above. You are correct that page writeback using a
single flush worker isn't able to saturate relatively fast storage -
that is a limitation of the current writeback subsystem and something
that is being worked on (by allowing more parallel writeback workers),
but IMHO it isn't substantial for this report.
I cannot really comment whether IO PSI of 40% is or is not appropriate for
this situation - I'm deferring that to PSI guys I've CCed.
> The device is actually starved: writeback-in-flight drops from 6-8 MB
> (baseline) to 1.8 MB (during throttle), and NVMe QD drops from 45 to 37.
> The device could drain more if fed more, but the flusher can't feed it
> faster.
>
> Meanwhile, memory is not scarce:
>
> Dirty: 16 GB
> Clean file LRU: 57 GB (instantly reclaimable)
> Memory PSI: 1-2%
>
> The dirty pages aren't causing memory pressure. 57 GB of clean
> pages remain available for instant reclaim. The throttle is
> protecting a resource that isn't scarce, at a cost of 40% IO
> PSI (full).
The configuration says that no more than 20% of your page cache can be
dirty. The throttling code just makes sure this is the case. If you
configure a higher limit, that's what the throttling code will enforce.
However, note that a higher amount of dirty memory is unlikely to
increase writeback speed - that is likely bottlenecked on the CPU
overhead of submitting writeback IOs. Also note that the limit is set
to 20% because dirty memory cannot be reclaimed quickly (you need to
write it back first), so under memory pressure the machine can easily
thrash for quite some time if the dirty limits are too high.
> Our workaround plan: dirty_background_ratio=5, dirty_ratio=40.
> This raises freerun to ~17.3 GB, keeping dirty in freerun.
> The flusher drains identically. It runs to bg_thresh either
> way.
>
> Questions
> ---------
>
> 1. When should balance_dirty_pages() sleep writers? Currently
> the criterion is "dirty > fraction of dirtyable memory."
> This doesn't consider whether sleeping actually helps
> drain dirty faster, or whether the remaining clean pages
> are sufficient. Should the decision factor in flusher/
> device saturation or available reclaimable memory?
I think I've answered this above.
> 2. Is tuning dirty_ratio to 30-40% the expected approach for
> high-memory (>256 GB) systems? Documentation doesn't
> cover this.
No, it is actually recommended to set it lower, because more dirty
memory doesn't usually help IO throughput beyond a certain amount. But
it all depends very much on your workload and its dirtying pattern (how
much of the page cache gets frequently redirtied).
> 3. The freerun ceiling gates entry into the proportional
> throttle path. Even moderate sleeping shows up as IO PSI
> (io_schedule_timeout is accounted as IO stall). Dirty
> never hits the hard limit in our case. It sits in the
> proportional zone, but cumulative PSI from many tasks
> sleeping short durations is already 26-40% (full). Should
> the throttle path be skipped when sleeping cannot help
> drain?
Perhaps bumping PSI when dirty throttling kicks in is not an ideal
measure (it doesn't necessarily mean the storage itself is maxed out;
besides the flush worker not being able to saturate the storage, there
can also be various block layer controllers arbitrarily throttling
background writeback), but again I'll let the PSI folks chime in here
with their opinions.
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
* Re: balance_dirty_pages() causes 40% IO PSI (full) with no drain benefit on 384 GB machine
From: Johannes Weiner @ 2026-03-20 14:38 UTC (permalink / raw)
To: Jan Kara
Cc: Yunzhao Li, linux-mm, Andrew Morton, linux-fsdevel, Jesper Brouer,
Suren Baghdasaryan
Hello,
On Thu, Mar 19, 2026 at 12:58:51PM +0100, Jan Kara wrote:
> On Tue 17-03-26 15:53:51, Yunzhao Li wrote:
> > 3. The freerun ceiling gates entry into the proportional
> > throttle path. Even moderate sleeping shows up as IO PSI
> > (io_schedule_timeout is accounted as IO stall). Dirty
> > never hits the hard limit in our case. It sits in the
> > proportional zone, but cumulative PSI from many tasks
> > sleeping short durations is already 26-40% (full). Should
> > the throttle path be skipped when sleeping cannot help
> > drain?
>
> Perhaps bumping PSI when dirty throttling kicks in is not an ideal
> measure (it doesn't necessarily mean the storage itself is maxed out;
> besides the flush worker not being able to saturate the storage, there
> can also be various block layer controllers arbitrarily throttling
> background writeback), but again I'll let the PSI folks chime in here
> with their opinions.
What PSI is supposed to capture is lost productive time. IOW, it's not
about device utilization, it's about whether tasks are being held
up. That's the information it intends to collect, and it sounds like
it's doing its job.
If you get 40% IO full pressure, that means that during 40% of
wallclock time, every otherwise-runnable task was waiting on the IO
resource.
It *is* influenced by suboptimal device utilization, but that's not an
accident. That's a real cost experienced by the workload, after all.
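A back-of-envelope illustration of that definition, using two
hypothetical samples of the total= stall counter from /proc/pressure/io
taken 10 seconds apart:

```shell
# full pressure = share of wall-clock time during which all runnable
# tasks were stalled on IO. total= is cumulative stall time in usec.
t0=981273490           # first sample (hypothetical)
t1=985273490           # second sample, 10 s later (hypothetical)
window_us=10000000     # 10 s window in microseconds
full_pct=$(( (t1 - t0) * 100 / window_us ))
echo "IO full over the window: ${full_pct}%"
```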