From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 18 Sep 2017 19:26:58 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Steven Rostedt
Cc: Neeraj Upadhyay, josh@joshtriplett.org, mathieu.desnoyers@efficios.com,
	jiangshanlai@gmail.com, linux-kernel@vger.kernel.org,
	sramana@codeaurora.org, prsood@codeaurora.org, pkondeti@codeaurora.org,
	markivx@codeaurora.org, peterz@infradead.org, byungchul.park@lge.com
Subject: Re: Query regarding synchronize_sched_expedited and resched_cpu
Reply-To: paulmck@linux.vnet.ibm.com
Message-Id: <20170919022658.GZ3521@linux.vnet.ibm.com>
In-Reply-To: <20170918212301.417c8b36@gandalf.local.home>
References: <20170917010015.GW3521@linux.vnet.ibm.com>
 <20170918111105.15f687da@gandalf.local.home>
 <20170918160125.GL3521@linux.vnet.ibm.com>
 <20170918121213.312c82b0@gandalf.local.home>
 <20170918162412.GM3521@linux.vnet.ibm.com>
 <20170918122931.0e3341f3@gandalf.local.home>
 <20170918165527.GN3521@linux.vnet.ibm.com>
 <20170918235311.GA20177@linux.vnet.ibm.com>
 <20170918212301.417c8b36@gandalf.local.home>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Sep 18, 2017 at 09:23:01PM -0400, Steven Rostedt wrote:
> On Mon, 18 Sep 2017 16:53:11 -0700
> "Paul E. McKenney" wrote:
> 
> > On Mon, Sep 18, 2017 at 09:55:27AM -0700, Paul E. McKenney wrote:
> > > On Mon, Sep 18, 2017 at 12:29:31PM -0400, Steven Rostedt wrote:
> > > > On Mon, 18 Sep 2017 09:24:12 -0700
> > > > "Paul E. McKenney" wrote:
> > > > 
> > > > > As soon as I work through the backlog of lockdep complaints that
> > > > > appeared in the last merge window...  :-(
> > > > > 
> > > > > sparse_irq_lock, I am looking at you!!!  ;-)
> > > > 
> > > > I just hit one too, and decided to write a patch to show a chain of 3
> > > > when applicable.
> > > > 
> > > > For example:
> > > > 
> > > >  Chain exists of:
> > > >   cpu_hotplug_lock.rw_sem --> smpboot_threads_lock --> (complete)&self->parked
> > > > 
> > > >  Possible unsafe locking scenario by crosslock:
> > > > 
> > > >        CPU0                     CPU1                     CPU2
> > > >        ----                     ----                     ----
> > > >   lock(smpboot_threads_lock);
> > > >   lock((complete)&self->parked);
> > > >                            lock(cpu_hotplug_lock.rw_sem);
> > > >                            lock(smpboot_threads_lock);
> > > >                                                     lock(cpu_hotplug_lock.rw_sem);
> > > >                                                     unlock((complete)&self->parked);
> > > > 
> > > >    *** DEADLOCK ***
> > > > 
> > > > :-)
> > > 
> > > Nice!!!
> 
> Note, the above lockdep splat does discover a bug.

Fair enough, but I unfortunately have several other much more bizarre
bugs stacked up and so I am not volunteering to fix this one.

> > > My next step is reverting 12ac1d0f6c3e ("genirq: Make sparse_irq_lock
> > > protect what it should protect") to see if that helps.
> > 
> > No joy, but it is amazing how much nicer "git bisect" is when your
> > failure happens deterministically within 35 seconds.  ;-)
> > 
> > The bisection converged to the range starting with 7a46ec0e2f48
> > ("locking/refcounts, x86/asm: Implement fast refcount overflow
> > protection") and ending with 0c2364791343 ("Merge branch 'x86/asm'
> > into locking/core").  All of these failed with an unrelated build
> > error, but there was a fix that could be merged.  This flagged
> > d0541b0fa64b ("locking/lockdep: Make CONFIG_LOCKDEP_CROSSRELEASE part
> > of CONFIG_PROVE_LOCKING"), which unfortunately does not revert cleanly.
> > However, the effect of a reversion can be obtained by removing the
> > selects of LOCKDEP_CROSSRELEASE and LOCKDEP_COMPLETE from
> > PROVE_LOCKING, which allows recent commits to complete a short
> > rcutorture test successfully.
> 
> I don't think you want to remove those. It appears that lockdep now
> covers completions, and it is uncovering a lot of bugs.

Actually, I do, at least in the short term.  This splat is getting in
the way of my diagnostics for the other bugs.  Please note that I am
-not- arguing that mainline should change, at least not yet.

> > So, Byungchul, any enlightenment?  Please see lockdep splat below.
> 
> Did you discover the below by reverting lockdep patches? It doesn't
> really make sense. It looks to me to be about completions but not
> fully covering it.

No, the splat below is what I get from stock v4.14-rc1 on these
rcutorture scenarios: SRCU-P, TASKS01, TREE03, and TREE05.  If you
would like to try it yourself, TASKS01 requires only two CPUs and the
others require eight.

When I suppress LOCKDEP_CROSSRELEASE and LOCKDEP_COMPLETE, I don't see
anything that looks like that deadlock, but it is of course quite
possible that the deadlock is very low probability -- and I did short
30-minute runs.

							Thanx, Paul

> -- Steve
> 
> > 							Thanx, Paul
> > 
> > ------------------------------------------------------------------------
> > 
> > [   35.310179] ======================================================
> > [   35.310749] WARNING: possible circular locking dependency detected
> > [   35.310749] 4.13.0-rc4+ #1 Not tainted
> > [   35.310749] ------------------------------------------------------
> > [   35.310749] torture_onoff/766 is trying to acquire lock:
> > [   35.313943]  ((complete)&st->done){+.+.}, at: [] takedown_cpu+0x86/0xf0
> > [   35.313943] 
> > [   35.313943] but task is already holding lock:
> > [   35.313943]  (sparse_irq_lock){+.+.}, at: [] irq_lock_sparse+0x12/0x20
> > [   35.313943] 
> > [   35.313943] which lock already depends on the new lock.
> > [   35.313943] 
> > [   35.313943] 
> > [   35.313943] the existing dependency chain (in reverse order) is:
> > [   35.313943] 
> > [   35.313943] -> #1 (sparse_irq_lock){+.+.}:
> > [   35.313943]        __mutex_lock+0x65/0x960
> > [   35.313943]        mutex_lock_nested+0x16/0x20
> > [   35.313943]        irq_lock_sparse+0x12/0x20
> > [   35.313943]        irq_affinity_online_cpu+0x13/0xd0
> > [   35.313943]        cpuhp_invoke_callback+0xa7/0x8b0
> > [   35.313943] 
> > [   35.313943] -> #0 ((complete)&st->done){+.+.}:
> > [   35.313943]        check_prev_add+0x401/0x800
> > [   35.313943]        __lock_acquire+0x1100/0x11a0
> > [   35.313943]        lock_acquire+0x9e/0x1e0
> > [   35.313943]        wait_for_completion+0x36/0x130
> > [   35.313943]        takedown_cpu+0x86/0xf0
> > [   35.313943]        cpuhp_invoke_callback+0xa7/0x8b0
> > [   35.313943]        cpuhp_down_callbacks+0x3d/0x80
> > [   35.313943]        _cpu_down+0xbb/0xf0
> > [   35.313943]        do_cpu_down+0x39/0x50
> > [   35.313943]        cpu_down+0xb/0x10
> > [   35.313943]        torture_offline+0x75/0x140
> > [   35.313943]        torture_onoff+0x102/0x1e0
> > [   35.313943]        kthread+0x142/0x180
> > [   35.313943]        ret_from_fork+0x27/0x40
> > [   35.313943] 
> > [   35.313943] other info that might help us debug this:
> > [   35.313943] 
> > [   35.313943]  Possible unsafe locking scenario:
> > [   35.313943] 
> > [   35.313943]        CPU0                    CPU1
> > [   35.313943]        ----                    ----
> > [   35.313943]   lock(sparse_irq_lock);
> > [   35.313943]                              lock((complete)&st->done);
> > [   35.313943]                              lock(sparse_irq_lock);
> > [   35.313943]   lock((complete)&st->done);
> > [   35.313943] 
> > [   35.313943]  *** DEADLOCK ***
> > [   35.313943] 
> > [   35.313943] 3 locks held by torture_onoff/766:
> > [   35.313943]  #0:  (cpu_add_remove_lock){+.+.}, at: [] do_cpu_down+0x22/0x50
> > [   35.313943]  #1:  (cpu_hotplug_lock.rw_sem){++++}, at: [] percpu_down_write+0x21/0xf0
> > [   35.313943]  #2:  (sparse_irq_lock){+.+.}, at: [] irq_lock_sparse+0x12/0x20
> > [   35.313943] 
> > [   35.313943] stack backtrace:
> > [   35.313943] CPU: 7 PID: 766 Comm: torture_onoff Not tainted 4.13.0-rc4+ #1
> > [   35.313943] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
> > [   35.313943] Call Trace:
> > [   35.313943]  dump_stack+0x67/0x97
> > [   35.313943]  print_circular_bug+0x21d/0x330
> > [   35.313943]  ? add_lock_to_list.isra.31+0xc0/0xc0
> > [   35.313943]  check_prev_add+0x401/0x800
> > [   35.313943]  ? wake_up_q+0x70/0x70
> > [   35.313943]  __lock_acquire+0x1100/0x11a0
> > [   35.313943]  ? __lock_acquire+0x1100/0x11a0
> > [   35.313943]  ? add_lock_to_list.isra.31+0xc0/0xc0
> > [   35.313943]  lock_acquire+0x9e/0x1e0
> > [   35.313943]  ? takedown_cpu+0x86/0xf0
> > [   35.313943]  wait_for_completion+0x36/0x130
> > [   35.313943]  ? takedown_cpu+0x86/0xf0
> > [   35.313943]  ? stop_machine_cpuslocked+0xb9/0xd0
> > [   35.313943]  ? cpuhp_invoke_callback+0x8b0/0x8b0
> > [   35.313943]  ? cpuhp_complete_idle_dead+0x10/0x10
> > [   35.313943]  takedown_cpu+0x86/0xf0
> > [   35.313943]  cpuhp_invoke_callback+0xa7/0x8b0
> > [   35.313943]  cpuhp_down_callbacks+0x3d/0x80
> > [   35.313943]  _cpu_down+0xbb/0xf0
> > [   35.313943]  do_cpu_down+0x39/0x50
> > [   35.313943]  cpu_down+0xb/0x10
> > [   35.313943]  torture_offline+0x75/0x140
> > [   35.313943]  torture_onoff+0x102/0x1e0
> > [   35.313943]  kthread+0x142/0x180
> > [   35.313943]  ? torture_kthread_stopping+0x70/0x70
> > [   35.313943]  ? kthread_create_on_node+0x40/0x40
> > [   35.313943]  ret_from_fork+0x27/0x40
> 
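
For anyone reading along who has not hit this class of problem before,
the cycle in the splat boils down to one task holding a lock while it
waits for a completion that can only be signaled by someone who must
first take that same lock.  A minimal userspace sketch of that shape
(purely illustrative, not kernel code: a pthread mutex stands in for
sparse_irq_lock, and a condition variable stands in for the st->done
completion; all names below are hypothetical) might look like this:

/* deadlock_demo.c: build with "cc -pthread deadlock_demo.c -o deadlock_demo" */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Userspace stand-ins (hypothetical names) for the kernel objects above. */
static pthread_mutex_t sparse_irq_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t done_cv = PTHREAD_COND_INITIALIZER;
static bool done;

/* Analogue of takedown_cpu(): wait for the "completion" while holding the lock. */
static void *takedown_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&sparse_irq_lock);       /* irq_lock_sparse() */
        pthread_mutex_lock(&done_lock);
        while (!done)                               /* wait_for_completion(&st->done) */
                pthread_cond_wait(&done_cv, &done_lock);
        pthread_mutex_unlock(&done_lock);
        pthread_mutex_unlock(&sparse_irq_lock);
        return NULL;
}

/* Analogue of the hotplug callback: it needs the lock before it can "complete". */
static void *callback_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&sparse_irq_lock);       /* blocks: the other thread holds it */
        pthread_mutex_lock(&done_lock);
        done = true;                                /* complete(&st->done) -- never reached */
        pthread_cond_signal(&done_cv);
        pthread_mutex_unlock(&done_lock);
        pthread_mutex_unlock(&sparse_irq_lock);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        alarm(5);       /* the demo deadlocks by design; SIGALRM ends it after 5 seconds */
        pthread_create(&a, NULL, takedown_path, NULL);
        sleep(1);       /* let the first thread grab sparse_irq_lock before the second runs */
        pthread_create(&b, NULL, callback_path, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        puts("no deadlock this time");
        return 0;
}

Run it and the two threads stall against each other until the alarm
fires.  The point of the cross-release (LOCKDEP_CROSSRELEASE /
LOCKDEP_COMPLETE) machinery discussed above is that lockdep can report
this kind of lock-versus-completion cycle from the recorded dependency
chain, without the kernel having to actually deadlock first.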