Message-ID: <519B9EF5.5060809@gmail.com>
Date: Tue, 21 May 2013 12:21:09 -0400
From: "Michael L. Semon"
Subject: Lockdep message on 3.9.3 (already fixed?)...
List-Id: XFS Filesystem from SGI
To: xfs@oss.sgi.com

Hi!

I'm beginning to lose track of lockdep messages and feel as if a new message is sneaking in there. This lockdep report comes from kernel 3.9.3, which I was asked to use in order to gather DRM info. The PC was booted to a console and left to do a long 24 kB/s download and default distro cron duties (slocate and such), in the hope that console inactivity, console blanking, and monitor sleep would kick up a soft oops from DRM as it does on 3.10.0-rc. This may also apply to the devel kernels, but the PC needs to be left alone for me to verify this.
I've read Dave Chinner's position on stable kernels, but I don't know whether it applies to the first stable kernel out of mainline, especially because that kernel has the lifespan of a housefly nowadays. As always, thanks for reading!

Michael

=================================
[ INFO: inconsistent lock state ]
3.9.3 #1 Not tainted
---------------------------------
inconsistent {RECLAIM_FS-ON-R} -> {IN-RECLAIM_FS-W} usage.
kswapd0/18 [HC0[0]:SC0[0]:HE1:SE1] takes:
 (&(&ip->i_lock)->mr_lock){++++-?}, at: [] xfs_ilock+0xff/0x15e
{RECLAIM_FS-ON-R} state was registered at:
  [] mark_held_locks+0x80/0xcb
  [] lockdep_trace_alloc+0x59/0x9d
  [] __alloc_pages_nodemask+0x70/0x6f2
  [] __get_free_pages+0x1c/0x3d
  [] pte_alloc_one_kernel+0x14/0x16
  [] __pte_alloc_kernel+0x16/0x71
  [] vmap_page_range_noflush+0x12e/0x13c
  [] vm_map_ram+0x3b9/0x46c
  [] _xfs_buf_map_pages+0x5b/0xe7
  [] xfs_buf_get_map+0x67/0x13a
  [] xfs_buf_read_map+0x1f/0xc0
  [] xfs_buf_readahead_map+0x47/0x57
  [] xfs_da_reada_buf+0xaf/0xbd
  [] xfs_dir2_data_readahead+0x2f/0x36
  [] xfs_dir_open+0x7b/0x8e
  [] do_dentry_open.isra.16+0xf8/0x1d7
  [] finish_open+0x1b/0x27
  [] do_last+0x44d/0xc68
  [] path_openat+0xa4/0x3cb
  [] do_filp_open+0x2b/0x70
  [] do_sys_open+0xf5/0x1b5
  [] sys_openat+0x26/0x28
  [] syscall_call+0x7/0xb
irq event stamp: 266081
hardirqs last enabled at (266081): [] _raw_spin_unlock_irq+0x27/0x2b
hardirqs last disabled at (266080): [] _raw_spin_lock_irq+0x14/0x4b
softirqs last enabled at (264480): [] __do_softirq+0x125/0x1bc
softirqs last disabled at (264455): [] irq_exit+0x63/0x65

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&ip->i_lock)->mr_lock);
  <Interrupt>
  lock(&(&ip->i_lock)->mr_lock);

 *** DEADLOCK ***

3 locks held by kswapd0/18:
 #0:  (shrinker_rwsem){++++..}, at: [] shrink_slab+0x2f/0x29b
 #1:  (&type->s_umount_key#18){++++.+}, at: [] grab_super_passive+0x38/0x72
 #2:  (&pag->pag_ici_reclaim_lock){+.+...}, at: [] xfs_reclaim_inodes_ag+0xb4/0x37f

stack backtrace:
Pid: 18, comm: kswapd0 Not tainted 3.9.3 #1
Call Trace:
 [] print_usage_bug+0x1dc/0x1e6
 [] ? check_usage_backwards+0xea/0xea
 [] mark_lock+0x245/0x25c
 [] __lock_acquire+0x5da/0x1557
 [] ? finish_task_switch.constprop.80+0x3b/0xb9
 [] ? __schedule+0x2ae/0x5e5
 [] lock_acquire+0x7f/0xdc
 [] ? xfs_ilock+0xff/0x15e
 [] down_write_nested+0x41/0x61
 [] ? xfs_ilock+0xff/0x15e
 [] xfs_ilock+0xff/0x15e
 [] xfs_reclaim_inode+0xf4/0x30e
 [] xfs_reclaim_inodes_ag+0x26a/0x37f
 [] ? xfs_reclaim_inodes_ag+0xdd/0x37f
 [] ? trace_hardirqs_on_caller+0xe8/0x160
 [] ? trace_hardirqs_on+0xb/0xd
 [] ? try_to_wake_up+0xe1/0x122
 [] ? wake_up_process+0x1f/0x33
 [] ? xfs_ail_push+0x68/0x6f
 [] ? xfs_ail_push_all+0x53/0x6a
 [] xfs_reclaim_inodes_nr+0x2d/0x33
 [] xfs_fs_free_cached_objects+0x13/0x15
 [] prune_super+0xd1/0x15c
 [] shrink_slab+0x143/0x29b
 [] kswapd+0x54b/0x794
 [] ? try_to_free_pages+0x61f/0x61f
 [] kthread+0x9e/0xa0
 [] ret_from_kernel_thread+0x1b/0x28
 [] ? __kthread_parkme+0x5b/0x5b

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
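[Editor's note: a toy model of the rule lockdep is enforcing above, not the real lockdep algorithm and not from the original message. The lock names and states are taken from the report; the checker itself is a simplified illustration: a lock held across a reclaim-capable allocation (RECLAIM_FS-ON, the xfs_dir_open path) must never also be taken from inside reclaim (IN-RECLAIM_FS, the kswapd path), because reclaim could then block on a holder that is itself waiting for reclaim.]

```python
# Minimal sketch of lockdep's RECLAIM_FS consistency check (illustrative only).
lock_usage = {}  # lock name -> set of usage states observed so far


def record(lock, state):
    """Record a usage state for `lock`; flag the conflicting combination."""
    states = lock_usage.setdefault(lock, set())
    states.add(state)
    # The pair seen in the report: allocation under the lock, and the same
    # lock acquired from within memory reclaim.
    if {"RECLAIM_FS-ON", "IN-RECLAIM_FS"} <= states:
        return "inconsistent {RECLAIM_FS-ON} -> {IN-RECLAIM_FS} usage"
    return "ok"


# Open path: mr_lock held while vm_map_ram() allocates (may enter reclaim).
print(record("mr_lock", "RECLAIM_FS-ON"))   # ok so far
# kswapd path: xfs_reclaim_inode() takes mr_lock from inside reclaim.
print(record("mr_lock", "IN-RECLAIM_FS"))   # now flagged
```

Either order of the two events produces the same report; lockdep only needs to have seen both usages of the lock class at some point, not an actual deadlock.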