From mboxrd@z Thu Jan 1 00:00:00 1970
From: Avi Kivity
Subject: Re: KVM: MMU: remove prefault from invlpg handler
Date: Sat, 05 Dec 2009 19:04:27 +0200
Message-ID: <4B1A929B.6050807@redhat.com>
References: <20091205143411.GA16237@amt.cnet> <4B1A9108.6090000@redhat.com>
In-Reply-To: <4B1A9108.6090000@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
To: Marcelo Tosatti
Cc: kvm
Sender: kvm-owner@vger.kernel.org
List-ID:

On 12/05/2009 06:57 PM, Avi Kivity wrote:
>
> Looks like we considered this, since kvm_read_guest_atomic() is only
> needed if inside the spinlock, but some other change moved the
> spin_unlock() upwards. Will investigate history.
>

No, the bug was there from day one (and survived a year):

+	spin_lock(&vcpu->kvm->mmu_lock);
 	walk_shadow(&walker.walker, vcpu, gva);
+	spin_unlock(&vcpu->kvm->mmu_lock);
+
+	if (walker.pte_gpa == -1)
+		return;
+	if (kvm_read_guest_atomic(vcpu->kvm, walker.pte_gpa, &gpte,
+				  sizeof(pt_element_t)))
+		return;
+	if (is_present_pte(gpte) && (gpte & PT_ACCESSED_MASK)) {
+		if (mmu_topup_memory_caches(vcpu))
+			return;
+		kvm_mmu_pte_write(vcpu, walker.pte_gpa, (const u8 *)&gpte,
+				  sizeof(pt_element_t), 0);
+	}

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.