From: Mike Galbraith <bitbucket@online.de>
To: Steven Rostedt <rostedt@goodmis.org>
Cc: RT <linux-rt-users@vger.kernel.org>,
Thomas Gleixner <tglx@linutronix.de>
Subject: Re: 3.[68]-rt: CONFIG_PROVE_LOCKING + CONFIG_DEBUG_FORCE_WEAK_PER_CPU = boot time swap_lock deadlock
Date: Tue, 23 Apr 2013 05:08:41 +0200
Message-ID: <1366686521.4731.12.camel@marge.simpson.net>
In-Reply-To: <1366685096.9609.145.camel@gandalf.local.home>
On Mon, 2013-04-22 at 22:44 -0400, Steven Rostedt wrote:
> On Sun, 2013-04-14 at 13:07 +0200, Mike Galbraith wrote:
> > Greetings,
> >
> > Turn off CONFIG_DEBUG_FORCE_WEAK_PER_CPU and all is well; with it enabled,
> > I get a boot time deadlock on swap_lock when the box tries to load
> > initramfs, seemingly because with CONFIG_DEBUG_FORCE_WEAK_PER_CPU,
> > per-CPU local locks are not zeroed, so initializing only the spinlock
> > isn't enough. With lockdep enabled, I see warnings on owner and nestcnt,
> > followed by init being permanently stuck.
> >
> > Do the below and it'll boot and run, but lockdep will eventually gripe
> > about MAX_LOCKDEP_ENTRIES, MAX_STACK_TRACE_ENTRIES, or adding a
> > non-static key, and the box explodes violently shortly thereafter on a
> > softlockup or memory corruption... so the below wasn't exactly a great idea :)
> >
> > 3.4-rt boots and runs just fine with the same config. Turn off
> > CONFIG_DEBUG_FORCE_WEAK_PER_CPU and these kernels boot and run fine
> > with lockdep, though I do still need to double entries/bits for it to
> > not shut itself off. Anyway, it seems CONFIG_DEBUG_FORCE_WEAK_PER_CPU
> > became a very bad idea. It probably always was; no idea how that ended
> > up in my config.
> >
>
> When I built with CONFIG_DEBUG_FORCE_WEAK_PER_CPU, it had issues with the
> swap lock. Can you try this patch? What you showed looks different, but
> did that happen with the updates you made?

Yeah, swap_lock was the killer here. The data was from virgin source.
Thanks, I'll try this out ASAP.
> -- Steve
>
> diff --git a/mm/swap.c b/mm/swap.c
> index 63f42b8..fab8f97 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -42,7 +42,7 @@ static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
>  static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);
> 
>  static DEFINE_LOCAL_IRQ_LOCK(rotate_lock);
> -static DEFINE_LOCAL_IRQ_LOCK(swap_lock);
> +static DEFINE_LOCAL_IRQ_LOCK(swapvar_lock);
> 
>  /*
>   * This path almost never happens for VM activity - pages are normally
> @@ -407,13 +407,13 @@ static void activate_page_drain(int cpu)
>  void activate_page(struct page *page)
>  {
>  	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
> -		struct pagevec *pvec = &get_locked_var(swap_lock,
> +		struct pagevec *pvec = &get_locked_var(swapvar_lock,
>  						       activate_page_pvecs);
> 
>  		page_cache_get(page);
>  		if (!pagevec_add(pvec, page))
>  			pagevec_lru_move_fn(pvec, __activate_page, NULL);
> -		put_locked_var(swap_lock, activate_page_pvecs);
> +		put_locked_var(swapvar_lock, activate_page_pvecs);
>  	}
>  }
> 
> @@ -461,13 +461,13 @@ EXPORT_SYMBOL(mark_page_accessed);
>   */
>  void __lru_cache_add(struct page *page, enum lru_list lru)
>  {
> -	struct pagevec *pvec = &get_locked_var(swap_lock, lru_add_pvecs)[lru];
> +	struct pagevec *pvec = &get_locked_var(swapvar_lock, lru_add_pvecs)[lru];
> 
>  	page_cache_get(page);
>  	if (!pagevec_space(pvec))
>  		__pagevec_lru_add(pvec, lru);
>  	pagevec_add(pvec, page);
> -	put_locked_var(swap_lock, lru_add_pvecs);
> +	put_locked_var(swapvar_lock, lru_add_pvecs);
>  }
>  EXPORT_SYMBOL(__lru_cache_add);
> 
> @@ -632,19 +632,19 @@ void deactivate_page(struct page *page)
>  		return;
> 
>  	if (likely(get_page_unless_zero(page))) {
> -		struct pagevec *pvec = &get_locked_var(swap_lock,
> +		struct pagevec *pvec = &get_locked_var(swapvar_lock,
>  						       lru_deactivate_pvecs);
> 
>  		if (!pagevec_add(pvec, page))
>  			pagevec_lru_move_fn(pvec, lru_deactivate_fn, NULL);
> -		put_locked_var(swap_lock, lru_deactivate_pvecs);
> +		put_locked_var(swapvar_lock, lru_deactivate_pvecs);
>  	}
>  }
> 
>  void lru_add_drain(void)
>  {
> -	lru_add_drain_cpu(local_lock_cpu(swap_lock));
> -	local_unlock_cpu(swap_lock);
> +	lru_add_drain_cpu(local_lock_cpu(swapvar_lock));
> +	local_unlock_cpu(swapvar_lock);
>  }
> 
>  static void lru_add_drain_per_cpu(struct work_struct *dummy)
>