public inbox for linux-mm@kvack.org
From: Johannes Weiner <hannes@cmpxchg.org>
To: Jan Kara <jack@suse.cz>
Cc: Yunzhao Li <yunzhao@cloudflare.com>,
	linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	linux-fsdevel@vger.kernel.org,
	Jesper Brouer <jesper@cloudflare.com>,
	Suren Baghdasaryan <surenb@google.com>
Subject: Re: balance_dirty_pages() causes 40% IO PSI (full) with no drain benefit on 384 GB machine
Date: Fri, 20 Mar 2026 10:38:01 -0400	[thread overview]
Message-ID: <ab1bySbmeidrQfoN@cmpxchg.org> (raw)
In-Reply-To: <z3quejtbpkiizgmvu2pjhqv6ioyach2wszen5slkau2hossj7p@rk2ftlwbdfw5>

Hello,

On Thu, Mar 19, 2026 at 12:58:51PM +0100, Jan Kara wrote:
> On Tue 17-03-26 15:53:51, Yunzhao Li wrote:
> > 3. The freerun ceiling gates entry into the proportional
> >    throttle path. Even moderate sleeping shows up as IO PSI
> >    (io_schedule_timeout is accounted as IO stall). Dirty
> >    never hits the hard limit in our case. It sits in the
> >    proportional zone, but cumulative PSI from many tasks
> >    sleeping short durations is already 26-40% (full). Should
> >    the throttle path be skipped when sleeping cannot help
> >    drain?
> 
> Perhaps bumping PSI when dirty throttling kicks in is not an ideal measure
> (it doesn't necessarily mean the storage itself is maxed out; besides the
> flush worker not being able to saturate the storage, various block layer
> controllers can also arbitrarily throttle background writeback), but again
> I'll let the PSI guys chime in here with their opinions.

What PSI is supposed to capture is lost productive time. IOW, it's not
about device utilization, it's about whether tasks are being held
up. That's the information it intends to collect, and it sounds like
it's doing its job.

If you get 40% IO full pressure, that means that during 40% of wallclock
time, all runnable tasks (whether one or many) were waiting on the IO
resource.
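To make that definition concrete, here is a toy sketch of the "full"
accounting idea: full time is the share of wallclock during which every
task's stall windows overlap. This is an illustration only, not the
kernel's implementation; all names and the interval representation are
made up for the example.

```python
# Toy model of PSI "full" pressure: the fraction of a wallclock window
# in which *all* tasks are simultaneously stalled. Each task is given
# as a list of disjoint (start, end) stall intervals. Illustrative
# only; the kernel tracks this incrementally per CPU, not like this.

def full_pressure(stall_windows, wall_start, wall_end):
    """Fraction of [wall_start, wall_end) where every task is stalled."""
    if not stall_windows:
        return 0.0
    # Start from the whole window and intersect with each task's
    # stall intervals in turn; what survives is time where all stall.
    common = [(wall_start, wall_end)]
    for windows in stall_windows:
        nxt = []
        for cs, ce in common:
            for s, e in windows:
                lo, hi = max(cs, s), min(ce, e)
                if lo < hi:
                    nxt.append((lo, hi))
        common = nxt
    stalled = sum(e - s for s, e in common)
    return stalled / (wall_end - wall_start)
```

For example, two tasks stalled over [0, 0.5) and [0.3, 0.8) overlap for
0.2 of a 1-second window, giving 20% full pressure; many tasks each
sleeping short, overlapping stretches in balance_dirty_pages() can add
up the same way.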

It *is* influenced by suboptimal device utilization, but that's not an
accident. That's a real cost experienced by the workload, after all.
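For reference, the pressure numbers discussed in this thread are exported
through /proc/pressure/io, which has one "some" and one "full" line of
key=value fields. A minimal parser sketch follows; the sample string in
the test is illustrative data, not output from the reporter's machine.

```python
# Minimal parser for the PSI pressure file format, e.g.
# /proc/pressure/io:
#   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
#   full avg10=0.00 avg60=0.00 avg300=0.00 total=0

def parse_psi(text):
    """Return {"some": {...}, "full": {...}} with float field values."""
    out = {}
    for line in text.strip().splitlines():
        kind, *fields = line.split()
        out[kind] = {k: float(v) for k, v in (f.split("=") for f in fields)}
    return out
```

Watching the "full" avg10 value here while the workload runs is the
usual way to confirm numbers like the 26-40% reported above.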


Thread overview: 3+ messages
2026-03-17 22:53 balance_dirty_pages() causes 40% IO PSI (full) with no drain benefit on 384 GB machine Yunzhao Li
2026-03-19 11:58 ` Jan Kara
2026-03-20 14:38   ` Johannes Weiner [this message]
