From: Matthew Wilcox <willy@infradead.org>
To: Dave Chinner <david@fromorbit.com>
Cc: Zhaoyang Huang <huangzhaoyang@gmail.com>,
"zhaoyang.huang" <zhaoyang.huang@unisoc.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
ke.wang@unisoc.com, steve.kang@unisoc.com,
baocong.liu@unisoc.com, linux-fsdevel@vger.kernel.org
Subject: Re: [RFC PATCH] mm: move xa forward when run across zombie page
Date: Thu, 20 Oct 2022 22:52:14 +0100
Message-ID: <Y1HDDu3UV0L3cDwE@casper.infradead.org>
In-Reply-To: <20221019220424.GO2703033@dread.disaster.area>

On Thu, Oct 20, 2022 at 09:04:24AM +1100, Dave Chinner wrote:
> On Wed, Oct 19, 2022 at 04:23:10PM +0100, Matthew Wilcox wrote:
> > On Wed, Oct 19, 2022 at 09:30:42AM +1100, Dave Chinner wrote:
> > > This is reading and writing the same amount of file data at the
> > > application level, but once the data has been written and kicked out
> > > of the page cache it seems to require an awful lot more read IO to
> > > get it back to the application. i.e. this looks like mmap() is
> > > readahead thrashing severely, and eventually it livelocks with this
> > > sort of report:
> > >
> > > [175901.982484] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> > > [175901.985095] rcu: Tasks blocked on level-1 rcu_node (CPUs 0-15): P25728
> > > [175901.987996] (detected by 0, t=97399871 jiffies, g=15891025, q=1972622 ncpus=32)
> > > [175901.991698] task:test_write state:R running task stack:12784 pid:25728 ppid: 25696 flags:0x00004002
> > > [175901.995614] Call Trace:
> > > [175901.996090] <TASK>
> > > [175901.996594] ? __schedule+0x301/0xa30
> > > [175901.997411] ? sysvec_apic_timer_interrupt+0xb/0x90
> > > [175901.998513] ? sysvec_apic_timer_interrupt+0xb/0x90
> > > [175901.999578] ? asm_sysvec_apic_timer_interrupt+0x16/0x20
> > > [175902.000714] ? xas_start+0x53/0xc0
> > > [175902.001484] ? xas_load+0x24/0xa0
> > > [175902.002208] ? xas_load+0x5/0xa0
> > > [175902.002878] ? __filemap_get_folio+0x87/0x340
> > > [175902.003823] ? filemap_fault+0x139/0x8d0
> > > [175902.004693] ? __do_fault+0x31/0x1d0
> > > [175902.005372] ? __handle_mm_fault+0xda9/0x17d0
> > > [175902.006213] ? handle_mm_fault+0xd0/0x2a0
> > > [175902.006998] ? exc_page_fault+0x1d9/0x810
> > > [175902.007789] ? asm_exc_page_fault+0x22/0x30
> > > [175902.008613] </TASK>
> > >
> > > Given that filemap_fault on XFS is probably trying to map large
> > > folios, I do wonder if this is a result of some kind of race with
> > > teardown of a large folio...
> >
> > It doesn't matter whether we're trying to map a large folio; it
> > matters whether a large folio was previously created in the cache.
> > Through the magic of readahead, it may well have been. I suspect
> > it's not teardown of a large folio, but splitting. Removing a
> > page from the page cache stores to the pointer in the XArray
> > first (either NULL or a shadow entry), then decrements the refcount.
> >
> > We must be observing a frozen folio. There are a number of places
> > in the MM which freeze a folio, but the obvious one is splitting.
> > That looks like this:
> >
> > 	local_irq_disable();
> > 	if (mapping) {
> > 		xas_lock(&xas);
> > 	(...)
> > 	if (folio_ref_freeze(folio, 1 + extra_pins)) {
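For anyone following along, the removal ordering I described above
looks roughly like this -- a simplified sketch of what
__filemap_remove_folio() and its caller do, not the verbatim kernel
code:

	/*
	 * Sketch: the tree slot is overwritten (with NULL or a
	 * workingset shadow entry) while the folio still holds its
	 * page cache reference ...
	 */
	xas_lock_irq(&xas);
	xas_store(&xas, shadow);
	mapping->nrpages -= nr;
	xas_unlock_irq(&xas);

	/* ... and only afterwards is that reference dropped: */
	folio_put(folio);

So a concurrent lookup should see either the old folio (which it can
pin before the refcount drops) or the shadow entry; what it should
never see for long is a present folio whose refcount stays at zero.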
>
> But the lookup is not doing anything to prevent the split on the
> frozen page from making progress, right? It's not holding any folio
> references, and it's not holding the mapping tree lock, either. So
> how does the lookup in progress prevent the page split from making
> progress?
My thinking was that it keeps hammering the ->refcount field in
struct folio. That might prevent a thread on a different socket
from making forward progress. In contrast, spinlocks are designed
to be fair under contention, so by spinning on an actual lock, we'd
remove contention on the folio.
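The loop in question is roughly this (condensed from
mapping_get_entry(); details elided, so treat it as a sketch):

	rcu_read_lock();
repeat:
	xas_reset(&xas);
	folio = xas_load(&xas);
	if (xas_retry(&xas, folio))
		goto repeat;
	/*
	 * folio_try_get_rcu() attempts an atomic increment of
	 * ->refcount which fails if the folio is frozen (refcount
	 * zero).  On failure we retry immediately, so a long-lived
	 * frozen folio means re-walking the tree and hitting the
	 * refcount cacheline in a tight loop.
	 */
	if (!folio_try_get_rcu(folio))
		goto repeat;
	rcu_read_unlock();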
But I think the tests you've done refute that theory.  I'm all out of
ideas at the moment.  Either we have a frozen folio from somebody who
doesn't hold the lock, or we have someone who's left a frozen folio in
the page cache.  I'm leaning towards the latter explanation at the
moment, but I don't have a good suggestion for debugging it.
Perhaps a crude suggestion for debugging: call dump_page() under a
__ratelimit() wrapper so we aren't overwhelmed with output?
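Something like this in the lookup path (untested, and the
"frozen folio in lookup" reason string is just for illustration):

	if (!folio_try_get_rcu(folio)) {
		static DEFINE_RATELIMIT_STATE(rs,
				DEFAULT_RATELIMIT_INTERVAL,
				DEFAULT_RATELIMIT_BURST);

		/* Dump the offending folio, at most a burst at a time. */
		if (__ratelimit(&rs))
			dump_page(&folio->page, "frozen folio in lookup");
		goto repeat;
	}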
> I would have thought:
>
> 	if (!folio_try_get_rcu(folio)) {
> 		rcu_read_unlock();
> 		cond_resched();
> 		rcu_read_lock();
> 		goto repeat;
> 	}
>
> Would be the right way to yield the CPU to avoid
> priority-inversion-related livelocks here...
I'm not sure we're allowed to schedule here. We might be under another
spinlock?
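If sleeping is off the table, perhaps a non-sleeping backoff would
still help -- an untested sketch, and whether it actually breaks the
livelock is exactly what we'd have to measure:

	if (!folio_try_get_rcu(folio)) {
		/*
		 * Leave the RCU read-side critical section so a
		 * pending grace period can make progress, and let
		 * the refcount cacheline cool off without
		 * scheduling (we may be in atomic context).
		 */
		rcu_read_unlock();
		cpu_relax();
		rcu_read_lock();
		/* xas_reset() at the repeat label revalidates the walk. */
		goto repeat;
	}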