From mboxrd@z Thu Jan 1 00:00:00 1970
From: Peter Zijlstra
Subject: Re: mmu_notifiers: turn off lockdep around mm_take_all_locks
Date: Tue, 07 Jul 2009 21:04:02 +0200
Message-ID: <1246993442.5197.15.camel@laptop>
References: <20090707180630.GA8008@amt.cnet>
	 <1246990505.5197.2.camel@laptop>
	 <4A53917C.6080208@redhat.com>
	 <20090707183741.GA8393@amt.cnet>
Mime-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 7bit
Cc: Avi Kivity, Andrea Arcangeli, kvm, Linus Torvalds, Ingo Molnar
To: Marcelo Tosatti
Return-path: Received: from bombadil.infradead.org ([18.85.46.34]:56623
	 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK)
	 by vger.kernel.org with ESMTP id S1751793AbZGGTEb (ORCPT);
	 Tue, 7 Jul 2009 15:04:31 -0400
In-Reply-To: <20090707183741.GA8393@amt.cnet>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Tue, 2009-07-07 at 15:37 -0300, Marcelo Tosatti wrote:
> >>>
> >>> Is there any way around this other than completely shutting down lockdep?
> >>>
> >>
> >> When we created this the promise was that kvm would only do this on a
> >> fresh mm with only a few vmas, has that changed?
> >
> > The number of vmas did increase, but not materially. We do link with
> > more shared libraries though.
>
> Yeah, see attached /proc/pid/maps just before the ioctl that ends up in
> mmu_notifier_register.
>
> mm_take_all_locks: file_vma=79 anon_vma=40

Another issue: at about >=256 vmas we'll overflow the preempt count. So
disabling lockdep will only 'fix' this for a short while, until you've
bloated beyond that ;-)

Although you could possibly disable preemption and use
__raw_spin_lock(), which would also side-step the whole lockdep issue,
it feels like such a horrid hack.

Alternatively we would have to modify the rmap locking, but that would
incur overhead on the regular code paths, so that's probably not worth
the trade-off.

Linus, Ingo, any opinions?