From: Vitaly Kuznetsov
Subject: Re: [PATCH] x86/kvm/mmu: make mmu->prev_roots cache work for NPT case
Date: Fri, 22 Feb 2019 19:49:35 +0100
Message-ID: <87tvgvslog.fsf@vitty.brq.redhat.com>
References: <20190222164616.13859-1-vkuznets@redhat.com>
To: Paolo Bonzini, kvm@vger.kernel.org
Cc: Radim Krčmář, Junaid Shahid, linux-kernel@vger.kernel.org

Paolo Bonzini writes:

> On 22/02/19 17:46, Vitaly Kuznetsov wrote:
>> I noticed that fast_cr3_switch() always fails when we switch back from
>> L2 to L1, as it is not able to find a cached root. This is odd: the
>> host's CR3 usually stays the same, so we expect to always follow the
>> fast path. It turns out the problem is that the page role is always
>> mismatched, because kvm_mmu_get_page() filters out cr4_pae when
>> direct; the filtered value is stored in the page header and later
>> compared with new_role in cached_root_available(). As cr4_pae is
>> always set in long mode, the prev_roots cache is dysfunctional.
>
> Really cr4_pae means "are the PTEs 8 bytes". So I think your patch is
> correct, but on top we should set it to 1 (not zero!!) for
> kvm_calc_shadow_ept_root_page_role, init_kvm_nested_mmu and
> kvm_calc_tdp_mmu_root_page_role. Or maybe everything breaks with that
> change.

Yes, exactly. If we put '1' there, kvm_mmu_get_page() will again filter
it out and we won't be able to find the root in the prev_roots cache :-(

>> - Do not clear cr4_pae in kvm_mmu_get_page() and check direct on call
>>   sites (detect_write_misaligned(), get_written_sptes()).
>
> These only run with shadow page tables, by the way.

Yes, and that's why I think it may make sense to move the filtering
logic there; in all other cases cr4_pae will always be equal to
is_pae(). It seems I know too little about shadow paging and all these
corner cases :-(

-- 
Vitaly