From: Paolo Bonzini
Subject: Re: [PATCH 0/3] KVM: Make kvm_lock non-raw
Date: Sun, 22 Sep 2013 10:53:14 +0200
Message-ID: <523EAFFA.6060203@redhat.com>
In-Reply-To: <20130922074238.GG25202@redhat.com>
References: <1379340373-5135-1-git-send-email-pbonzini@redhat.com> <20130922074238.GG25202@redhat.com>
To: Gleb Natapov
Cc: linux-kernel@vger.kernel.org, Paul Gortmaker, kvm@vger.kernel.org, jan.kiszka@siemens.com

On 22/09/2013 09:42, Gleb Natapov wrote:
> On Mon, Sep 16, 2013 at 04:06:10PM +0200, Paolo Bonzini wrote:
>> Paul Gortmaker reported a BUG on preempt-rt kernels, caused by taking
>> the mmu_lock inside the raw kvm_lock in mmu_shrink_scan.  He provided
>> a patch that shrank the kvm_lock critical section so that the mmu_lock
>> critical section no longer nests inside it, but in the end there is no
>> reason for the vm_list to be protected by a raw spinlock.  Only the
>> manipulations of kvm_usage_count and the consequent
>> hardware_enable/disable operations are not preemptible.
>>
>> This small series thus splits kvm_lock into a "raw" part and a
>> "non-raw" part.
>>
>> Paul, could you please provide your Tested-by?
>>
> Reviewed-by: Gleb Natapov
>
> But why should it go to stable?

It fixes a regression introduced when kvm_lock was made raw.  Second,
it can take much longer for a patch to reach the -rt trees (sometimes
as much as a year), and this patch does nothing on non-rt trees, so
without putting it into stable it would get no actual coverage.

Paolo
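
[Editor's note: the following is a minimal, illustrative sketch of the split
described in the cover letter, not the actual patches.  It assumes a raw
spinlock (called kvm_count_lock here, a name chosen for the sketch) guarding
only kvm_usage_count and the hardware_enable/disable path, and an ordinary
spinlock guarding vm_list.  The helper names and the stub struct are
hypothetical stand-ins.]

	#include <linux/spinlock.h>
	#include <linux/list.h>

	/* Raw lock: only the truly non-preemptible work lives under it. */
	static DEFINE_RAW_SPINLOCK(kvm_count_lock);	/* name assumed for the sketch */
	static int kvm_usage_count;

	/*
	 * Ordinary spinlock: on -rt it becomes a sleeping lock, so other
	 * sleeping locks (mmu_lock on -rt) may nest inside it without a BUG.
	 */
	static DEFINE_SPINLOCK(kvm_lock);
	static LIST_HEAD(vm_list);

	struct kvm_stub {			/* stand-in for struct kvm */
		struct list_head vm_list;
	};

	static void hardware_enable_all_sketch(void)	/* hypothetical helper */
	{
		raw_spin_lock(&kvm_count_lock);
		if (!kvm_usage_count++) {
			/*
			 * on_each_cpu(hardware_enable, ...) in the real code;
			 * this is why the section must stay non-preemptible.
			 */
		}
		raw_spin_unlock(&kvm_count_lock);
	}

	static void register_vm_sketch(struct kvm_stub *kvm)	/* hypothetical helper */
	{
		spin_lock(&kvm_lock);			/* non-raw: mmu_lock may */
		list_add(&kvm->vm_list, &vm_list);	/* nest underneath       */
		spin_unlock(&kvm_lock);
	}

The point of the split: on PREEMPT_RT a raw_spinlock_t keeps spinning with
preemption disabled, while a plain spinlock_t is converted to a sleeping
rt-mutex.  Taking a sleeping lock such as mmu_lock inside a raw critical
section is what triggered the BUG Paul reported, so only the
kvm_usage_count/hardware_enable work needs to stay under a raw lock.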