From mboxrd@z Thu Jan  1 00:00:00 1970
From: David Hildenbrand <david@redhat.com>
Subject: [PATCH v1 02/13] KVM: x86: mmu: use for_each_shadow_entry_lockless()
Date: Fri, 4 Aug 2017 15:14:17 +0200
Message-ID: <20170804131428.15844-3-david@redhat.com>
References: <20170804131428.15844-1-david@redhat.com>
Cc: Paolo Bonzini, Radim Krčmář, david@redhat.com
To: kvm@vger.kernel.org
Return-path: 
Received: from mx1.redhat.com ([209.132.183.28]:57526 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752541AbdHDNOi (ORCPT );
	Fri, 4 Aug 2017 09:14:38 -0400
Received: from smtp.corp.redhat.com
	(int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16])
	(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx1.redhat.com (Postfix) with ESMTPS id E938F6868E
	for ; Fri, 4 Aug 2017 13:14:37 +0000 (UTC)
In-Reply-To: <20170804131428.15844-1-david@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID: 

Use the existing for_each_shadow_entry_lockless() iterator instead of
open-coding the lockless shadow-page-table walk. Certainly better to read.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/kvm/mmu.c | 21 ++++++++-------------
 1 file changed, 8 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9ed26cc..3769613 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3596,8 +3596,8 @@ static bool
 walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 {
 	struct kvm_shadow_walk_iterator iterator;
-	u64 sptes[PT64_ROOT_LEVEL], spte = 0ull;
-	int root, leaf;
+	u64 sptes[PT64_ROOT_LEVEL] = { 0 }, spte = 0ull;
+	int level;
 	bool reserved = false;
 
 	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
@@ -3605,14 +3605,8 @@ walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 
 	walk_shadow_page_lockless_begin(vcpu);
 
-	for (shadow_walk_init(&iterator, vcpu, addr),
-		 leaf = root = iterator.level;
-	     shadow_walk_okay(&iterator);
-	     __shadow_walk_next(&iterator, spte)) {
-		spte = mmu_spte_get_lockless(iterator.sptep);
-
-		sptes[leaf - 1] = spte;
-		leaf--;
+	for_each_shadow_entry_lockless(vcpu, addr, iterator, spte) {
+		sptes[iterator.level - 1] = spte;
 
 		if (!is_shadow_present_pte(spte))
 			break;
@@ -3626,10 +3620,11 @@ walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 	if (reserved) {
 		pr_err("%s: detect reserved bits on spte, addr 0x%llx, dump hierarchy:\n",
 		       __func__, addr);
-		while (root > leaf) {
+		for (level = PT64_ROOT_LEVEL; level > 0; level--) {
+			if (!sptes[level - 1])
+				continue;
 			pr_err("------ spte 0x%llx level %d.\n",
-			       sptes[root - 1], root);
-			root--;
+			       sptes[level - 1], level);
 		}
 	}
 exit:
-- 
2.9.4