From: Nikolay Borisov <kernel@kyup.com>
To: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Christoph Lameter <cl@linux.com>, Linux MM <linux-mm@kvack.org>,
brouer@redhat.com,
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Subject: Re: Soft lockup in __slab_free (SLUB)
Date: Wed, 28 Sep 2016 10:15:01 +0300
Message-ID: <57EB6DF5.2010503@kyup.com>
In-Reply-To: <20160928053114.GC22706@js1304-P5Q-DELUXE>
On 09/28/2016 08:31 AM, Joonsoo Kim wrote:
> Hello,
>
> Ccing Paul, because it looks like RCU problem.
>
> On Mon, Sep 26, 2016 at 10:46:56AM +0300, Nikolay Borisov wrote:
>> Hello,
>>
>> On the 4.4.14 stable kernel I observed the following soft lockup; I also
>> checked that the code is the same in 4.8-rc, so the problem is present
>> there as well:
>>
>> [434575.862377] NMI watchdog: BUG: soft lockup - CPU#13 stuck for 23s! [swapper/13:0]
>> [434575.866352] CPU: 13 PID: 0 Comm: swapper/13 Tainted: P O 4.4.14-clouder5 #2
>> [434575.866643] Hardware name: Supermicro X9DRD-iF/LF/X9DRD-iF, BIOS 3.0b 12/05/2013
>> [434575.866932] task: ffff8803714aadc0 ti: ffff8803714c4000 task.ti: ffff8803714c4000
>> [434575.867221] RIP: 0010:[<ffffffff81613f4c>] [<ffffffff81613f4c>] _raw_spin_unlock_irqrestore+0x1c/0x30
>> [434575.867566] RSP: 0018:ffff880373ce3dc0 EFLAGS: 00000203
>> [434575.867736] RAX: ffff88066e0c9a40 RBX: 0000000000000203 RCX: 0000000000000000
>> [434575.868023] RDX: 0000000000000008 RSI: 0000000000000203 RDI: ffff88066e0c9a40
>> [434575.868311] RBP: ffff880373ce3dc8 R08: ffff8803e5c1d118 R09: ffff8803e5c1d538
>> [434575.868609] R10: 0000000000000000 R11: ffffea000f970600 R12: ffff88066e0c9a40
>> [434575.868895] R13: ffffea000f970600 R14: 000000000046cf3b R15: ffff88036f8e3200
>> [434575.869183] FS: 0000000000000000(0000) GS:ffff880373ce0000(0000) knlGS:0000000000000000
>> [434575.869472] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [434575.869643] CR2: ffffffffff600400 CR3: 0000000367201000 CR4: 00000000001406e0
>> [434575.869931] Stack:
>> [434575.870095] ffff88066e0c9a40 ffff880373ce3e78 ffffffff8117ea8a ffff880373ce3e08
>> [434575.870567] 000000000046bd03 0000000100170017 ffff8803e5c1d118 ffff8803e5c1d118
>> [434575.871037] 00ff000100000000 0000000000000203 0000000000000000 ffffffff8123d9ac
>> [434575.874253] Call Trace:
>> [434575.874418] <IRQ>
>> [434575.874473] [<ffffffff8117ea8a>] __slab_free+0xca/0x290
>> [434575.874806] [<ffffffff8123d9ac>] ? ext4_i_callback+0x1c/0x20
>> [434575.874978] [<ffffffff8117ee3a>] kmem_cache_free+0x1ea/0x200
>> [434575.875149] [<ffffffff8123d9ac>] ext4_i_callback+0x1c/0x20
>> [434575.875325] [<ffffffff810ad09b>] rcu_process_callbacks+0x21b/0x620
>> [434575.875506] [<ffffffff81057337>] __do_softirq+0x147/0x310
>> [434575.875680] [<ffffffff8105764f>] irq_exit+0x5f/0x70
>> [434575.875851] [<ffffffff81616a82>] smp_apic_timer_interrupt+0x42/0x50
>> [434575.876025] [<ffffffff816151e9>] apic_timer_interrupt+0x89/0x90
>> [434575.876197] <EOI>
>> [434575.876250] [<ffffffff81510601>] ? cpuidle_enter_state+0x141/0x2c0
>> [434575.876583] [<ffffffff815105f6>] ? cpuidle_enter_state+0x136/0x2c0
>> [434575.876755] [<ffffffff815107b7>] cpuidle_enter+0x17/0x20
>> [434575.876929] [<ffffffff810949fc>] cpu_startup_entry+0x2fc/0x360
>> [434575.877105] [<ffffffff810330e3>] start_secondary+0xf3/0x100
>>
>> The ip in __slab_free points to this piece of code (in mm/slub.c):
>>
>> 	if (unlikely(n)) {
>> 		spin_unlock_irqrestore(&n->list_lock, flags);
>> 		n = NULL;
>> 	}
>>
>> I think it's pure chance that spin_unlock_irqrestore() is what shows up in
>> this trace. Do you think a cond_resched() is needed in this unlikely if
>> clause? Apparently there are cases where this loop can take a considerable
>> amount of time.
>
> I think that __slab_free() doesn't take too long even if there is lock
> contention. And is cond_resched() valid in softirq context?
Now that I think of it, it's not valid, since cond_resched() might sleep and
softirq is atomic context. So my suggestion is actually invalid, too bad.
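
To make it concrete, what I had in mind was roughly the sketch below against
the snippet quoted above from mm/slub.c (not a tested patch, just an
illustration of the now-retracted idea):

	if (unlikely(n)) {
		spin_unlock_irqrestore(&n->list_lock, flags);
		n = NULL;
		/*
		 * Wrong: cond_resched() may sleep, but __slab_free() can be
		 * reached from softirq context (e.g. RCU callbacks doing
		 * kmem_cache_free()), i.e. atomic context where sleeping is
		 * not allowed.
		 */
		cond_resched();
	}

If anything, the throttling would have to happen higher up, e.g. in how many
RCU callbacks get run per softirq invocation, not inside the slab free path.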
>
> I think the problem could be caused by too many RCU callbacks being
> executed without scheduling. Paul?
I don't think it's an RCU problem per se; ext4_i_callback only ends up being
called from RCU context because of the way inodes are freed.
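
For context, this is roughly how that happens (paraphrasing fs/ext4/super.c
from memory, so treat it as a sketch rather than the exact code), and why
kmem_cache_free() shows up under rcu_process_callbacks() in the trace above:

	/* sketch of the ext4 inode freeing path */
	static void ext4_i_callback(struct rcu_head *head)
	{
		struct inode *inode = container_of(head, struct inode, i_rcu);

		kmem_cache_free(ext4_inode_cachep, EXT4_I(inode));
	}

	static void ext4_destroy_inode(struct inode *inode)
	{
		...
		call_rcu(&inode->i_rcu, ext4_i_callback);
	}

So RCU is only the delivery mechanism here; the actual work being done in the
callback is the slab free itself.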
On a slightly different note: on a different physical server, running
zfsonlinux, I experienced a very similar issue, but without being in an
RCU context. Here is what the stack trace looked like there:
Call Trace:
[<ffffffff8117ea8a>] __slab_free+0xca/0x290
[<ffffffff8117ee3a>] ? kmem_cache_free+0x1ea/0x200
[<ffffffff8117ee3a>] kmem_cache_free+0x1ea/0x200
[<ffffffffa0245c97>] spl_kmem_cache_free+0x147/0x1d0 [spl]
[<ffffffffa02cbecc>] dnode_destroy+0x1dc/0x230 [zfs]
[<ffffffffa02cbf64>] dnode_buf_pageout+0x44/0xc0 [zfs]
[<ffffffffa0248301>] taskq_thread+0x291/0x4e0 [spl]
[<ffffffff8107ccd0>] ? wake_up_q+0x70/0x70
[<ffffffffa0248070>] ? taskq_thread_spawn+0x50/0x50 [spl]
[<ffffffff810715bf>] kthread+0xef/0x110
[<ffffffff810714d0>] ? kthread_park+0x60/0x60
[<ffffffff8161483f>] ret_from_fork+0x3f/0x70
[<ffffffff810714d0>] ? kthread_park+0x60/0x60
It's hard to believe this is a coincidence. I've inspected the call paths
through taskq_thread/dnode_buf_pageout/dnode_destroy and there doesn't
seem to be a loop there apart from the one in __slab_free.
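
For reference, the loop I mean is the cmpxchg retry loop in __slab_free();
abridged from my reading of mm/slub.c in 4.4 (quoted from memory, so take the
exact details with a grain of salt):

	do {
		if (unlikely(n)) {
			spin_unlock_irqrestore(&n->list_lock, flags);
			n = NULL;
		}
		prior = page->freelist;
		counters = page->counters;
		set_freepointer(s, tail, prior);
		new.counters = counters;
		was_frozen = new.frozen;
		new.inuse -= cnt;
		if ((!new.inuse || !prior) && !was_frozen) {
			if (kmem_cache_has_cpu_partial(s) && !prior) {
				new.frozen = 1;
			} else {
				/* needs to be taken off a list */
				n = get_node(s, page_to_nid(page));
				spin_lock_irqsave(&n->list_lock, flags);
			}
		}
	} while (!cmpxchg_double_slab(s, page,
			prior, counters,
			head, new.counters,
			"__slab_free"));

Both traces land at the same offset (+0xca/0x290) in __slab_free, so it looks
like we are seeing the same retry loop in both cases.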
[SNIP]