From mboxrd@z Thu Jan  1 00:00:00 1970
From: Sean Christopherson
Subject: Re: [PATCH v2 RFC] x86/kvm/mmu: make mmu->prev_roots cache work for NPT case
Date: Thu, 7 Mar 2019 11:02:25 -0800
Message-ID: <20190307190225.GD4986@linux.intel.com>
References: <0c13c94e-3226-8bfd-2dc7-c75aad1c03a2@redhat.com> <20190223111552.27221-1-vkuznets@redhat.com> <87ftrylqwm.fsf@vitty.brq.redhat.com> <20190307160633.GA4986@linux.intel.com> <87a7i6ljr0.fsf@vitty.brq.redhat.com> <20190307185946.GC4986@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Paolo Bonzini, Radim Krčmář, kvm@vger.kernel.org, Junaid Shahid, linux-kernel@vger.kernel.org
To: Vitaly Kuznetsov
Return-path:
Content-Disposition: inline
In-Reply-To: <20190307185946.GC4986@linux.intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On Thu, Mar 07, 2019 at 10:59:46AM -0800, Sean Christopherson wrote:
> I think what we could do is repurpose role's nxe, cr0_wp, and
> sm{a,e}p_andnot_wp bits to uniquely identify a nested EPT/NPT entry.

Ignore the "NPT" comment, this would only apply to EPT.

> E.g. cr0_wp=1 and sm{a,e}p_andnot_wp=1 are an impossible combination.
> I'll throw together a patch to see what breaks.  In fact, I think we
> could revamp kvm_calc_shadow_ept_root_page_role() to completely ignore
> all legacy paging bits, i.e. handling changes in L2's configuration is
> L1's problem.