From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xiao Guangrong
Subject: Re: [PATCH for 3.3] KVM: Fix write protection race during dirty logging
Date: Mon, 06 Feb 2012 11:53:26 +0800
Message-ID: <4F2F4EB6.4040105@linux.vnet.ibm.com>
References: <20120205204241.54f16069c7f10ac7524c55ec@gmail.com> <4F2F4B99.6050902@linux.vnet.ibm.com> <4F2F4D06.8020303@oss.ntt.co.jp>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Cc: Takuya Yoshikawa, avi@redhat.com, mtosatti@redhat.com, kvm@vger.kernel.org
To: Takuya Yoshikawa
In-Reply-To: <4F2F4D06.8020303@oss.ntt.co.jp>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 02/06/2012 11:46 AM, Takuya Yoshikawa wrote:
> (2012/02/06 12:40), Xiao Guangrong wrote:
>> On 02/05/2012 07:42 PM, Takuya Yoshikawa wrote:
>>
>>> From: Takuya Yoshikawa
>>>
>>> This patch fixes a race introduced by:
>>>
>>>   commit 95d4c16ce78cb6b7549a09159c409d52ddd18dae
>>>   KVM: Optimize dirty logging by rmap_write_protect()
>>>
>>> While write-protecting pages for dirty logging, other threads may also
>>> try to protect a page in mmu_sync_children() or kvm_mmu_get_page().
>>>
>>> In such a case, because get_dirty_log releases mmu_lock before flushing
>>> the TLBs, the following race can happen:
>>>
>>>   A (get_dirty_log)          B (another thread)
>>>
>>>   lock(mmu_lock)
>>>   clear pte.w
>>>   unlock(mmu_lock)
>>>                              lock(mmu_lock)
>>>                              pte.w is already cleared
>>>                              unlock(mmu_lock)
>>>                              skip TLB flush
>>>                              return
>>>   ...
>>>   TLB flush
>>>
>>> Though thread B assumes the page has already been protected when it
>>> returns, the remaining TLB entry will break that assumption.
>>>
>>> This patch fixes the problem by making get_dirty_log hold the mmu_lock
>>> until it flushes the TLBs.
>>>
>>
>>
>> I do not think this is a problem, since the dirty page is logged when
>> the writable spte is set, and at the end of get_dirty_log all TLBs are
>> always flushed.
>>
>
> The victim is not GET_DIRTY_LOG but thread B; it needs to ensure the page
> is protected before returning.
>

Ah, right!

Reviewed-by: Xiao Guangrong