From: Dave Chinner <david@fromorbit.com>
To: Alex Lyakas <alex@zadara.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: RCU stall in xfs_reclaim_inodes_ag
Date: Tue, 17 Nov 2020 08:30:05 +1100
Message-ID: <20201116213005.GM7391@dread.disaster.area>
In-Reply-To: <5582F682900B483C89460123ABE79292@alyakaslap>

On Mon, Nov 16, 2020 at 07:45:46PM +0200, Alex Lyakas wrote:
> Greetings XFS community,
> 
> We had an RCU stall [1]. According to the code, it happened in
> radix_tree_gang_lookup_tag():
> 
> rcu_read_lock();
> nr_found = radix_tree_gang_lookup_tag(
>        &pag->pag_ici_root,
>        (void **)batch, first_index,
>        XFS_LOOKUP_BATCH,
>        XFS_ICI_RECLAIM_TAG);
> 
> 
> This XFS filesystem has over 100M files. So perhaps looping inside the
> radix tree took too long, and it was happening inside an RCU read-side
> critical section. This is one of the possible causes of an RCU stall.

Doubt it. According to the trace it was stalled for 60s, and a
radix tree walk of 100M entries only takes a second or two.

Further, unless you are using inode32, the inodes will be spread
across multiple radix trees and that makes the radix trees much
smaller and even less likely to take this long to run a traversal.
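
To put some concrete shape on that: the AG number is encoded in the
high bits of the inode number, and each AG has its own in-core inode
radix tree (pag_ici_root). A rough sketch of the split - illustrative
only, the wrapper function below is not real kernel code, though the
macros, the perag interfaces and the tree it touches are:

	static struct xfs_inode *
	ici_lookup_sketch(
		struct xfs_mount	*mp,
		xfs_ino_t		ino)
	{
		/* inode number -> AG number + inode number within the AG */
		xfs_agnumber_t		agno = XFS_INO_TO_AGNO(mp, ino);
		xfs_agino_t		agino = XFS_INO_TO_AGINO(mp, ino);
		struct xfs_perag	*pag = xfs_perag_get(mp, agno);
		struct xfs_inode	*ip;

		/* each AG has its own radix tree, indexed by agino */
		rcu_read_lock();
		ip = radix_tree_lookup(&pag->pag_ici_root, agino);
		rcu_read_unlock();

		xfs_perag_put(pag);
		return ip;
	}

So a filesystem with, say, 32 AGs spreads those 100M inodes across 32
independent trees rather than one huge one.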

This could be made a little more efficient by adding a "last index"
parameter to tell the search where to stop (i.e. if the batch count
has not yet been reached), but in general that makes little
difference to the search because the radix tree walk finds the next
inodes in a few pointer chases...
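
For reference, the reclaim walk is already batched, so each RCU
read-side critical section only covers a single gang lookup of
XFS_LOOKUP_BATCH (32) inodes. Heavily simplified, the loop in
xfs_reclaim_inodes_ag() is shaped like this (the per-inode grabbing,
locking and error handling is omitted):

	do {
		struct xfs_inode *batch[XFS_LOOKUP_BATCH];
		int		i, nr_found;

		rcu_read_lock();
		nr_found = radix_tree_gang_lookup_tag(
				&pag->pag_ici_root,
				(void **)batch, first_index,
				XFS_LOOKUP_BATCH,
				XFS_ICI_RECLAIM_TAG);
		if (!nr_found) {
			rcu_read_unlock();
			break;
		}

		/*
		 * Record where the next lookup should start before we
		 * drop the RCU read lock, so each critical section only
		 * ever spans one batch.
		 */
		for (i = 0; i < nr_found; i++)
			first_index = XFS_INO_TO_AGINO(mp,
						batch[i]->i_ino) + 1;
		rcu_read_unlock();

		/* reclaim the batch outside the RCU critical section */
		for (i = 0; i < nr_found; i++)
			xfs_reclaim_inode(batch[i], pag, flags);

		cond_resched();
	} while (nr_found);

i.e. a stall in that lookup means a single 32-inode tag search spun
for a minute, which points more at a damaged tree or stuck RCU state
on that CPU than at the size of the filesystem.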

> This happened in kernel 4.14.99, but looking at the latest mainline code,
> the code is still the same.

These inode radix trees have been used in XFS since 2008, and this
is the first time anyone has reported a stall like this, so I'm
doubtful that there is actually a general bug. My suspicion for such
a rare occurrence would be memory corruption of some kind or a
leaked atomic/rcu state in some other code on that CPU....

> Can anyone please advise how to address that? It is not possible to put
> cond_resched() inside the radix tree code, because it can be called under
> spinlocks, and perhaps in other contexts where sleeping is not allowed.

I don't think there is a solution to this problem - it just
shouldn't happen when everything is operating normally, as it's
just a tag search on an indexed tree.

Hence even if there were a hack to suppress the stall warnings, it
wouldn't fix whatever problem is leading to the RCU stall. The
system would then just spin burning CPU, and eventually something
else would fail.

IOWs, unless you can reproduce this stall and find out what is wrong
in the radix tree that is leading to it looping forever, there's
likely nothing we can do to avoid this.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
