From mboxrd@z Thu Jan  1 00:00:00 1970
From: Avi Kivity
Subject: Re: mmu_notifiers: turn off lockdep around mm_take_all_locks
Date: Tue, 07 Jul 2009 21:18:36 +0300
Message-ID: <4A53917C.6080208@redhat.com>
References: <20090707180630.GA8008@amt.cnet> <1246990505.5197.2.camel@laptop>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Marcelo Tosatti, Andrea Arcangeli, Peter Zijlstra, kvm
To: Peter Zijlstra
Return-path:
Received: from mx2.redhat.com ([66.187.237.31]:44610 "EHLO mx2.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752558AbZGGSSf (ORCPT );
	Tue, 7 Jul 2009 14:18:35 -0400
In-Reply-To: <1246990505.5197.2.camel@laptop>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 07/07/2009 09:15 PM, Peter Zijlstra wrote:
> On Tue, 2009-07-07 at 15:06 -0300, Marcelo Tosatti wrote:
>
>> KVM guests with CONFIG_LOCKDEP=y trigger the following warning:
>>
>> BUG: MAX_LOCK_DEPTH too low!
>> turning off the locking correctness validator.
>> Pid: 4624, comm: qemu-system-x86 Not tainted 2.6.31-rc2-03981-g3abaf21 #32
>> Call Trace:
>> [] __lock_acquire+0x1559/0x15fc
>> [] ? mm_take_all_locks+0x99/0x109
>> [] ? mm_take_all_locks+0x99/0x109
>> [] lock_acquire+0xee/0x112
>> [] ? mm_take_all_locks+0xd6/0x109
>> [] ? _spin_lock_nest_lock+0x20/0x50
>> [] _spin_lock_nest_lock+0x41/0x50
>> [] ? mm_take_all_locks+0xd6/0x109
>> [] mm_take_all_locks+0xd6/0x109
>> [] do_mmu_notifier_register+0xd4/0x199
>> [] mmu_notifier_register+0x13/0x15
>> [] kvm_dev_ioctl+0x13f/0x30e [kvm]
>> [] vfs_ioctl+0x2f/0x7d
>> [] do_vfs_ioctl+0x4af/0x4ec
>> [] ? error_exit+0x94/0xb0
>> [] ? trace_hardirqs_off_thunk+0x3a/0x3c
>> [] ? retint_swapgs+0xe/0x13
>> [] sys_ioctl+0x47/0x6a
>> [] ? __up_read+0x1a/0x85
>> [] system_call_fastpath+0x16/0x1b
>>
>> This happens because mm_take_all_locks takes a gazillion locks.
>>
>> Is there any way around this other than completely shutting down lockdep?
>>
>
> When we created this the promise was that kvm would only do this on a
> fresh mm with only a few vmas, has that changed?

The number of vmas did increase, but not materially.  We do link with
more shared libraries though.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.