From mboxrd@z Thu Jan  1 00:00:00 1970
From: Xiao Guangrong
Subject: Re: [PATCH v2 12/13] KVM: x86: add SMM to the MMU role, support SMRAM address space
Date: Tue, 09 Jun 2015 12:01:20 +0800
Message-ID: <55766510.7010108@linux.intel.com>
References: <1432746314-50196-1-git-send-email-pbonzini@redhat.com>
 <1432746314-50196-13-git-send-email-pbonzini@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Cc: rkrcmar@redhat.com, bdas@redhat.com
To: Paolo Bonzini, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Return-path:
In-Reply-To: <1432746314-50196-13-git-send-email-pbonzini@redhat.com>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On 05/28/2015 01:05 AM, Paolo Bonzini wrote:
> This is now very simple to do. The only interesting part is a simple
> trick to find the right memslot in gfn_to_rmap, retrieving the address
> space from the spte role word. The same trick is used in the auditing
> code.
>
> The comment on top of union kvm_mmu_page_role has been stale forever,

Fortunately, we have documented these fields in mmu.txt; please do the
same for 'smm'. :)

> so remove it. Speaking of stale code, remove pad_for_nice_hex_output
> too: it was splitting the "access" bitfield across two bytes and thus
> had effectively turned into pad_for_ugly_hex_output.
>
> Signed-off-by: Paolo Bonzini
> ---
> 	v1->v2: new
>
>  arch/x86/include/asm/kvm_host.h | 26 +++++++++++++++-----------
>  arch/x86/kvm/mmu.c              | 15 ++++++++++++---
>  arch/x86/kvm/mmu_audit.c        | 10 +++++++---
>  arch/x86/kvm/x86.c              |  2 ++
>  4 files changed, 36 insertions(+), 17 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 5a5e13af6e03..47006683f2fe 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -184,23 +184,12 @@ struct kvm_mmu_memory_cache {
>  	void *objects[KVM_NR_MEM_OBJS];
>  };
>
> -/*
> - * kvm_mmu_page_role, below, is defined as:
> - *
> - * bits 0:3 - total guest paging levels (2-4, or zero for real mode)
> - * bits 4:7 - page table level for this shadow (1-4)
> - * bits 8:9 - page table quadrant for 2-level guests
> - * bit   16 - direct mapping of virtual to physical mapping at gfn
> - *            used for real mode and two-dimensional paging
> - * bits 17:19 - common access permissions for all ptes in this shadow page
> - */
>  union kvm_mmu_page_role {
>  	unsigned word;
>  	struct {
>  		unsigned level:4;
>  		unsigned cr4_pae:1;
>  		unsigned quadrant:2;
> -		unsigned pad_for_nice_hex_output:6;
>  		unsigned direct:1;
>  		unsigned access:3;
>  		unsigned invalid:1;
> @@ -208,6 +197,15 @@ union kvm_mmu_page_role {
>  		unsigned cr0_wp:1;
>  		unsigned smep_andnot_wp:1;
>  		unsigned smap_andnot_wp:1;
> +		unsigned :8;
> +
> +		/*
> +		 * This is left at the top of the word so that
> +		 * kvm_memslots_for_spte_role can extract it with a
> +		 * simple shift. While there is room, give it a whole
> +		 * byte so it is also faster to load it from memory.
> +		 */
> +		unsigned smm:8;

I doubt we really need this trick; smm is not the hottest field in this
struct anyway.

Otherwise looks good to me:

Reviewed-by: Xiao Guangrong
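
P.S. A stand-alone sketch of the shift trick, for anyone reading along:
this is my illustration, not a hunk from the patch, and it collapses the
16 low flag bits into a single placeholder field. With smm occupying the
whole top byte of the 32-bit role word, the address space index falls
out with a single shift and no mask (builds with gcc -std=gnu11):

#include <assert.h>
#include <stdio.h>

/*
 * Condensed stand-in for union kvm_mmu_page_role after this patch:
 * 16 flag bits (level, quadrant, access, ... collapsed here into one
 * placeholder field), 8 bits of anonymous padding, then smm in the
 * top byte.  Assumes the usual x86 LSB-first bitfield layout.
 */
union role {
	unsigned word;
	struct {
		unsigned flags:16;
		unsigned :8;
		unsigned smm:8;
	};
};

int main(void)
{
	union role r = { .word = 0 };

	r.flags = 0xffff;	/* noise in the low bits */
	r.smm = 1;

	/*
	 * Because smm is the whole top byte, the address space index
	 * is just the role word shifted right by 24 -- no masking.
	 */
	assert((r.word >> 24) == 1);
	printf("as_id = %u\n", r.word >> 24);
	return 0;
}

If smm sat in the middle of the word, extracting it would take a shift
plus a mask; giving it the whole top byte also lets the compiler use a
single byte load, which I take to be the "faster to load" point in the
comment.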