Date: Mon, 4 Dec 2006 16:09:02 +0800
From: Fengguang Wu
To: Ingo Molnar
Cc: linux-kernel@vger.kernel.org, akpm@osdl.org
Subject: drop_pagecache: Possible circular locking dependency
Message-ID: <20061204080902.GA5725@mail.ustc.edu.cn>
User-Agent: Mutt/1.5.13 (2006-08-11)

Hi Ingo,

I got the following message while running some benchmarks. I guess we
should not be holding inode_lock when calling invalidate_inode_pages().
Any ideas?

Fengguang Wu

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.19-rc6-mm2 #3
-------------------------------------------------------
rabench.sh/7467 is trying to acquire lock:
 (&journal->j_list_lock){--..}, at: [] journal_try_to_free_buffers+0xdc/0x1c0

but task is already holding lock:
 (inode_lock){--..}, at: [] drop_pagecache+0x67/0x120

which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:

-> #1 (inode_lock){--..}:
       [] add_lock_to_list+0x8f/0xd0
       [] __lock_acquire+0xb29/0xd40
       [] lock_acquire+0x93/0xc0
       [] _spin_lock+0x25/0x40
       [] __mark_inode_dirty+0x101/0x190
       [] __set_page_dirty_nobuffers+0x150/0x170
       [] mark_buffer_dirty+0x1e/0x30
       [] __journal_temp_unlink_buffer+0x1a8/0x1c0
       [] __journal_unfile_buffer+0x11/0x20
       [] __journal_refile_buffer+0x87/0x120
       [] journal_commit_transaction+0x1005/0x1300
       [] kjournald+0xd5/0x230
       [] kthread+0xda/0x110
       [] child_rip+0xa/0x12
       [] 0xffffffffffffffff

-> #0 (&journal->j_list_lock){--..}:
       [] print_circular_bug_tail+0x55/0xb0
       [] __lock_acquire+0xa0e/0xd40
       [] lock_acquire+0x93/0xc0
       [] _spin_lock+0x25/0x40
       [] journal_try_to_free_buffers+0xdc/0x1c0
       [] ext3_releasepage+0xa9/0xd0
       [] try_to_release_page+0x5a/0x80
       [] invalidate_mapping_pages+0xa8/0x150
       [] invalidate_inode_pages+0x12/0x20
       [] drop_pagecache+0xa5/0x120
       [] drop_caches_sysctl_handler+0x22/0x80
       [] do_rw_proc+0xe7/0x150
       [] proc_writesys+0x1a/0x20
       [] vfs_write+0xf4/0x1b0
       [] sys_write+0x50/0xa0
       [] system_call+0x7e/0x83
       [<00002af654962422>] 0x2af654962422
       [] 0xffffffffffffffff

other info that might help us debug this:

2 locks held by rabench.sh/7467:
 #0:  (&type->s_umount_key#16){----}, at: [] drop_pagecache+0x53/0x120
 #1:  (inode_lock){--..}, at: [] drop_pagecache+0x67/0x120

stack backtrace:

Call Trace:
 [] dump_trace+0xb3/0x460
 [] show_trace+0x43/0x70
 [] dump_stack+0x15/0x20
 [] print_circular_bug_tail+0x96/0xb0
 [] __lock_acquire+0xa0e/0xd40
 [] lock_acquire+0x93/0xc0
 [] _spin_lock+0x25/0x40
 [] journal_try_to_free_buffers+0xdc/0x1c0
 [] ext3_releasepage+0xa9/0xd0
 [] try_to_release_page+0x5a/0x80
 [] invalidate_mapping_pages+0xa8/0x150
 [] invalidate_inode_pages+0x12/0x20
 [] drop_pagecache+0xa5/0x120
 [] drop_caches_sysctl_handler+0x22/0x80
 [] do_rw_proc+0xe7/0x150
 [] proc_writesys+0x1a/0x20
 [] vfs_write+0xf4/0x1b0
 [] sys_write+0x50/0xa0
 [] system_call+0x7e/0x83
 [<00002af654962422>]
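For readers following along: the two chains above take the same pair of
spinlocks in opposite orders, which is exactly the inversion lockdep is
flagging. A rough sketch of the two paths (function names taken from the
trace; the bodies are illustrative pseudocode, not the actual 2.6.19-rc6-mm2
source):

```c
/* Path #1 -- kjournald commit:  j_list_lock  ->  inode_lock
 * journal_commit_transaction() holds j_list_lock while refiling buffers;
 * __journal_refile_buffer() ends up in __mark_inode_dirty(), which takes
 * inode_lock.
 */
spin_lock(&journal->j_list_lock);
__journal_refile_buffer(jh);    /* ... -> mark_buffer_dirty()
                                   ... -> __mark_inode_dirty()
                                   ... -> spin_lock(&inode_lock) */

/* Path #0 -- drop_pagecache via /proc/sys/vm/drop_caches:
 *   inode_lock  ->  j_list_lock
 * drop_pagecache() walks the inode list under inode_lock and invalidates
 * each mapping; for ext3 pages, try_to_release_page() reaches
 * journal_try_to_free_buffers(), which takes j_list_lock -- closing the
 * cycle.
 */
spin_lock(&inode_lock);
list_for_each_entry(inode, &sb->s_inodes, i_sb_list)
        invalidate_inode_pages(inode->i_mapping);
                                /* ... -> ext3_releasepage()
                                   ... -> journal_try_to_free_buffers()
                                   ... -> spin_lock(&journal->j_list_lock) */
```

As the report suggests, the likely fix direction is for drop_pagecache() to
drop inode_lock (pinning the inode first) before calling
invalidate_inode_pages(), so the invalidation path never nests inside it.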