From mboxrd@z Thu Jan  1 00:00:00 1970
From: Marcelo Tosatti
Subject: mmu_notifiers: turn off lockdep around mm_take_all_locks
Date: Tue, 7 Jul 2009 15:06:30 -0300
Message-ID: <20090707180630.GA8008@amt.cnet>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kvm
To: Andrea Arcangeli, Peter Zijlstra
Return-path:
Received: from mx2.redhat.com ([66.187.237.31]:36788 "EHLO mx2.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751740AbZGGSGt (ORCPT ); Tue, 7 Jul 2009 14:06:49 -0400
Content-Disposition: inline
Sender: kvm-owner@vger.kernel.org
List-ID:

Starting a KVM guest on a host kernel with CONFIG_LOCKDEP=y triggers the
following warning:

BUG: MAX_LOCK_DEPTH too low!
turning off the locking correctness validator.
Pid: 4624, comm: qemu-system-x86 Not tainted 2.6.31-rc2-03981-g3abaf21 #32
Call Trace:
 [] __lock_acquire+0x1559/0x15fc
 [] ? mm_take_all_locks+0x99/0x109
 [] ? mm_take_all_locks+0x99/0x109
 [] lock_acquire+0xee/0x112
 [] ? mm_take_all_locks+0xd6/0x109
 [] ? _spin_lock_nest_lock+0x20/0x50
 [] _spin_lock_nest_lock+0x41/0x50
 [] ? mm_take_all_locks+0xd6/0x109
 [] mm_take_all_locks+0xd6/0x109
 [] do_mmu_notifier_register+0xd4/0x199
 [] mmu_notifier_register+0x13/0x15
 [] kvm_dev_ioctl+0x13f/0x30e [kvm]
 [] vfs_ioctl+0x2f/0x7d
 [] do_vfs_ioctl+0x4af/0x4ec
 [] ? error_exit+0x94/0xb0
 [] ? trace_hardirqs_off_thunk+0x3a/0x3c
 [] ? retint_swapgs+0xe/0x13
 [] sys_ioctl+0x47/0x6a
 [] ? __up_read+0x1a/0x85
 [] system_call_fastpath+0x16/0x1b

This happens because mm_take_all_locks takes a gazillion locks. Is there
any way around this other than completely shutting down lockdep?
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 5f4ef02..0c43cae 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -148,6 +148,8 @@ static int do_mmu_notifier_register(struct mmu_notifier *mn,
 	struct mmu_notifier_mm *mmu_notifier_mm;
 	int ret;
 
+	lockdep_off();
+
 	BUG_ON(atomic_read(&mm->mm_users) <= 0);
 
 	ret = -ENOMEM;
@@ -189,6 +191,7 @@ out_cleanup:
 		kfree(mmu_notifier_mm);
 out:
 	BUG_ON(atomic_read(&mm->mm_users) <= 0);
+	lockdep_on();
 	return ret;
 }
 