Date: Mon, 29 Aug 2011 16:48:30 -0400
From: Dave Jones
To: Linux Kernel
Cc: tytso@mit.edu, adilger.kernel@dilger.ca
Subject: Re: ext4 lockdep trace (3.1.0rc3)
Message-ID: <20110829204830.GA18543@redhat.com>
In-Reply-To: <20110826214930.GA21818@redhat.com>

On Fri, Aug 26, 2011 at 05:49:30PM -0400, Dave Jones wrote:
 > just hit this while building a kernel. Laptop wedged for a few seconds
 > during the final link, and this was in the log when it unwedged.

I still see this in rc4, and can reproduce it reliably every time I build.
It only started happening in the last week. I don't see any ext4 or vfs
commits within a few days of that, so I'm not sure why it has only just
begun (I do daily builds, and the 26th was the first time I saw it appear).

Given the lack of obvious commits in that timeframe, I'm not sure a bisect
is going to be particularly fruitful. It might just be that my IO patterns
changed? (I did do some ccache changes around then.)

	Dave

> =================================
> [ INFO: inconsistent lock state ]
> 3.1.0-rc3+ #148
> ---------------------------------
> inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
> kswapd0/32 [HC0[0]:SC0[0]:HE1:SE1] takes:
>  (&sb->s_type->i_mutex_key#14){+.+.?.}, at: [] ext4_evict_inode+0x76/0x33c
> {RECLAIM_FS-ON-W} state was registered at:
>   [] mark_held_locks+0x6d/0x95
>   [] lockdep_trace_alloc+0x9f/0xc2
>   [] slab_pre_alloc_hook+0x1e/0x4f
>   [] kmem_cache_alloc+0x29/0x15a
>   [] __d_alloc+0x26/0x168
>   [] d_alloc+0x1f/0x62
>   [] d_alloc_and_lookup+0x2c/0x6b
>   [] walk_component+0x215/0x3e8
>   [] link_path_walk+0x189/0x43b
>   [] path_lookupat+0x5a/0x2af
>   [] do_path_lookup+0x28/0x97
>   [] user_path_at+0x59/0x96
>   [] vfs_fstatat+0x44/0x6e
>   [] vfs_stat+0x1b/0x1d
>   [] sys_newstat+0x1a/0x33
>   [] system_call_fastpath+0x16/0x1b
> irq event stamp: 671039
> hardirqs last enabled at (671039): [] __call_rcu+0x18c/0x19d
> hardirqs last disabled at (671038): [] __call_rcu+0x82/0x19d
> softirqs last enabled at (670754): [] __do_softirq+0x1fd/0x257
> softirqs last disabled at (670749): [] call_softirq+0x1c/0x30
>
> other info that might help us debug this:
>  Possible unsafe locking scenario:
>
>        CPU0
>        ----
>   lock(&sb->s_type->i_mutex_key);
>
>     lock(&sb->s_type->i_mutex_key);
>
>  *** DEADLOCK ***
>
> 2 locks held by kswapd0/32:
>  #0: (shrinker_rwsem){++++..}, at: [] shrink_slab+0x39/0x2ef
>  #1: (&type->s_umount_key#21){++++..}, at: [] grab_super_passive+0x57/0x7b
>
> stack backtrace:
> Pid: 32, comm: kswapd0 Tainted: G        W   3.1.0-rc3+ #148
> Call Trace:
>  [] ? up+0x39/0x3e
>  [] print_usage_bug+0x1e7/0x1f8
>  [] ? save_stack_trace+0x2c/0x49
>  [] ? print_irq_inversion_bug.part.19+0x1a0/0x1a0
>  [] mark_lock+0x106/0x220
>  [] __lock_acquire+0x394/0xcf7
>  [] ? save_stack_trace+0x2c/0x49
>  [] ? __bfs+0x137/0x1c7
>  [] ? ext4_evict_inode+0x76/0x33c
>  [] lock_acquire+0xf3/0x13e
>  [] ? ext4_evict_inode+0x76/0x33c
>  [] ? __mutex_lock_common+0x3d/0x44a
>  [] ? mutex_lock_nested+0x3b/0x40
>  [] ? ext4_evict_inode+0x76/0x33c
>  [] __mutex_lock_common+0x65/0x44a
>  [] ? ext4_evict_inode+0x76/0x33c
>  [] ? local_clock+0x35/0x4c
>  [] ? evict+0x8b/0x153
>  [] ? put_lock_stats+0xe/0x29
>  [] ? lock_release_holdtime.part.10+0x59/0x62
>  [] ? evict+0x8b/0x153
>  [] mutex_lock_nested+0x3b/0x40
>  [] ext4_evict_inode+0x76/0x33c
>  [] evict+0x99/0x153
>  [] dispose_list+0x32/0x43
>  [] prune_icache_sb+0x257/0x266
>  [] prune_super+0xda/0x145
>  [] shrink_slab+0x19e/0x2ef
>  [] balance_pgdat+0x2e7/0x57e
>  [] kswapd+0x339/0x392
>  [] ? __init_waitqueue_head+0x4b/0x4b
>  [] ? balance_pgdat+0x57e/0x57e
>  [] kthread+0xa8/0xb0
>  [] ? sub_preempt_count+0xa1/0xb4
>  [] kernel_thread_helper+0x4/0x10
>  [] ? retint_restore_args+0x13/0x13
>  [] ? __init_kthread_worker+0x5a/0x5a
>  [] ? gs_change+0x13/0x13
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/

---end quoted text---