From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sun, 2 Aug 2015 11:38:07 -0400
From: Dave Jones
To: Linux Kernel, Peter Zijlstra
Cc: "Paul E. McKenney", Josh Triplett
Subject: Re: unpinning an unpinned lock. (pidns/scheduler)
Message-ID: <20150802153807.GA1572@codemonkey.org.uk>
Mail-Followup-To: Dave Jones, Linux Kernel, Peter Zijlstra,
	"Paul E. McKenney", Josh Triplett
References: <20150731174353.GA25799@codemonkey.org.uk>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20150731174353.GA25799@codemonkey.org.uk>
User-Agent: Mutt/1.5.23 (2014-03-12)
X-Authenticated-User: davej@codemonkey.org.uk
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Jul 31, 2015 at 01:43:53PM -0400, Dave Jones wrote:
 > Just found a machine with this on 4.2-rc4
 >
 > WARNING: CPU: 0 PID: 11787 at kernel/locking/lockdep.c:3497 lock_unpin_lock+0x109/0x110()
 > unpinning an unpinned lock
 > CPU: 0 PID: 11787 Comm: kworker/0:1 Not tainted 4.2.0-rc4-think+ #5
 > Workqueue: events proc_cleanup_work
 >  0000000000000009 ffff8804f8983988 ffffffff9f7f5eed 0000000000000007
 >  ffff8804f89839d8 ffff8804f89839c8 ffffffff9f07b72a 00000000000000a8
 >  0000000000000070 ffff8805079d5c98 0000000000000092 0000000000000002
 > Call Trace:
 >  [] dump_stack+0x4f/0x7b
 >  [] warn_slowpath_common+0x8a/0xc0
 >  [] warn_slowpath_fmt+0x46/0x50
 >  [] lock_unpin_lock+0x109/0x110
 >  [] __schedule+0x39f/0xb30
 >  [] schedule+0x41/0x90
 >  [] schedule_timeout+0x33f/0x5b0
 >  [] ? put_lock_stats.isra.29+0xe/0x30
 >  [] ? mark_held_locks+0x75/0xa0
 >  [] ? _raw_spin_unlock_irq+0x30/0x60
 >  [] ? get_parent_ip+0x11/0x50
 >  [] wait_for_completion+0xec/0x120
 >  [] ? wake_up_q+0x70/0x70
 >  [] ? rcu_barrier+0x20/0x20
 >  [] wait_rcu_gp+0x68/0x90
 >  [] ? trace_raw_output_rcu_barrier+0x80/0x80
 >  [] ? wait_for_completion+0x38/0x120
 >  [] synchronize_rcu+0x3c/0xb0
 >  [] kern_unmount+0x2f/0x40
 >  [] pid_ns_release_proc+0x15/0x20
 >  [] proc_cleanup_work+0x15/0x20
 >  [] process_one_work+0x1f3/0x7a0
 >  [] ? process_one_work+0x162/0x7a0
 >  [] ? worker_thread+0xf9/0x470
 >  [] worker_thread+0x69/0x470
 >  [] ? preempt_count_sub+0xa3/0xf0
 >  [] ? process_one_work+0x7a0/0x7a0
 >  [] kthread+0x11f/0x140
 >  [] ? kthread_create_on_node+0x250/0x250
 >  [] ret_from_fork+0x3f/0x70
 >  [] ? kthread_create_on_node+0x250/0x250
 > ---[ end trace e75342db87128aeb ]---

I'm hitting this a few times a day now; I'll see if I can narrow down
a reproducer next week.

Adding the RCU cabal to Cc.

	Dave