From mboxrd@z Thu Jan 1 00:00:00 1970
From: Peter Zijlstra
Subject: Re: mmu_notifiers: turn off lockdep around mm_take_all_locks
Date: Tue, 07 Jul 2009 20:15:05 +0200
Message-ID: <1246990505.5197.2.camel@laptop>
References: <20090707180630.GA8008@amt.cnet>
In-Reply-To: <20090707180630.GA8008@amt.cnet>
Mime-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 7bit
To: Marcelo Tosatti
Cc: Andrea Arcangeli, Peter Zijlstra, kvm, Avi Kivity
Sender: kvm-owner@vger.kernel.org

On Tue, 2009-07-07 at 15:06 -0300, Marcelo Tosatti wrote:
> KVM guests with CONFIG_LOCKDEP=y trigger the following warning:
>
> BUG: MAX_LOCK_DEPTH too low!
> turning off the locking correctness validator.
> Pid: 4624, comm: qemu-system-x86 Not tainted 2.6.31-rc2-03981-g3abaf21 #32
> Call Trace:
>  [] __lock_acquire+0x1559/0x15fc
>  [] ? mm_take_all_locks+0x99/0x109
>  [] ? mm_take_all_locks+0x99/0x109
>  [] lock_acquire+0xee/0x112
>  [] ? mm_take_all_locks+0xd6/0x109
>  [] ? _spin_lock_nest_lock+0x20/0x50
>  [] _spin_lock_nest_lock+0x41/0x50
>  [] ? mm_take_all_locks+0xd6/0x109
>  [] mm_take_all_locks+0xd6/0x109
>  [] do_mmu_notifier_register+0xd4/0x199
>  [] mmu_notifier_register+0x13/0x15
>  [] kvm_dev_ioctl+0x13f/0x30e [kvm]
>  [] vfs_ioctl+0x2f/0x7d
>  [] do_vfs_ioctl+0x4af/0x4ec
>  [] ? error_exit+0x94/0xb0
>  [] ? trace_hardirqs_off_thunk+0x3a/0x3c
>  [] ? retint_swapgs+0xe/0x13
>  [] sys_ioctl+0x47/0x6a
>  [] ? __up_read+0x1a/0x85
>  [] system_call_fastpath+0x16/0x1b
>
> This happens because mm_take_all_locks takes a gazillion locks.
>
> Is there any way around this other than completely shutting down lockdep?

When we created this, the promise was that kvm would only do this on a
fresh mm with only a few vmas; has that changed?