public inbox for linux-kernel@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: "Darrick J. Wong" <djwong@kernel.org>
Cc: tglx@linutronix.de, peterz@infradead.org,
	xfs <linux-xfs@vger.kernel.org>,
	linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH] xfs: fix per-cpu CIL structure aggregation racing with dying cpus
Date: Mon, 21 Aug 2023 15:17:52 +1000	[thread overview]
Message-ID: <ZOLzgBOuyWHapOyZ@dread.disaster.area> (raw)
In-Reply-To: <20230804223854.GL11352@frogsfrogsfrogs>

On Fri, Aug 04, 2023 at 03:38:54PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@kernel.org>
> 
> In commit 7c8ade2121200 ("xfs: implement percpu cil space used
> calculation"), the XFS committed (log) item list code was converted to
> use per-cpu lists and space tracking to reduce cpu contention when
> multiple threads are modifying different parts of the filesystem and
> hence end up contending on the log structures during transaction commit.
> Each CPU tracks its own commit items and space usage, and these do not
> have to be merged into the main CIL until either someone wants to push
> the CIL items, or we run over a soft threshold and switch to slower (but
> more accurate) accounting with atomics.
> 
> Unfortunately, the for_each_cpu iteration suffers from the same
> cpu-dying race that was identified in commit 8b57b11cca88f
> ("pcpcntrs: fix dying cpu summation race") -- CPUs are removed from
> cpu_online_mask before the CPUHP_XFS_DEAD callback gets called.  As a
> result, both CIL percpu structure aggregation functions fail to collect
> the items and accounted space usage at the correct point in time.
> 
> If we're lucky, the items that are collected from the online cpus exceed
> the space given to those cpus, and the log immediately shuts down in
> xlog_cil_insert_items due to the (apparent) log reservation overrun.
> This happens periodically with generic/650, which exercises cpu hotplug
> vs. the filesystem code.
> 
> Applying the same sort of fix from 8b57b11cca88f to the CIL code seems
> to make the generic/650 problem go away, but I've been told that tglx
> was not happy when he saw:
> 
> "...the only thing we actually need to care about is that
> percpu_counter_sum() iterates dying CPUs. That's trivial to do, and when
> there are no CPUs dying, it has no additional overhead except for a
> cpumask_or() operation."
> 
> I have no idea what the /correct/ solution is, but I've been holding on
> to this patch for 3 months.  In that time, 8b57b11cca88 hasn't been
> reverted and cpu_dying_mask hasn't been removed, so I'm sending this and
> we'll see what happens.
> 
> So, how /do/ we perform periodic aggregation of per-cpu data safely?
> Move the xlog_cil_pcp_dead call to the dying section, where at least the
> other cpus will all be stopped?  And have it dump its items into any
> online cpu's data structure?

I suspect that we have to stop using for_each_*cpu() and hotplug
notifiers altogether for this code.

That is, we simply create our own bitmap for nr_possible_cpus in the
CIL checkpoint context we allocate for each checkpoint (i.e. the
struct xfs_cil_ctx). When we store something on that CPU for that
CIL context, we set the corresponding bit for that CPU. Then when we
are aggregating the checkpoint, we simply walk all the cpus with the
"has items" bit set and grab everything.

This gets rid of the need for hotplug notifiers completely - we just
don't care if the CPU is online or offline when we sweep the CIL
context for items at push time - if the bit is set then there's
stuff to sweep...

-Dave.
-- 
Dave Chinner
david@fromorbit.com


Thread overview: 6+ messages
2023-08-04 22:38 [RFC PATCH] xfs: fix per-cpu CIL structure aggregation racing with dying cpus Darrick J. Wong
2023-08-21  5:17 ` Dave Chinner [this message]
2023-08-22 18:30   ` Darrick J. Wong
2023-08-22 23:28     ` Dave Chinner
2023-08-23  1:20       ` Darrick J. Wong
2023-08-23  1:44         ` Dave Chinner
