From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Chang S. Bae"
To: pbonzini@redhat.com, seanjc@google.com
Cc: kvm@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
    chao.gao@intel.com, chang.seok.bae@intel.com
Subject: [PATCH v4 09/21] KVM: VMX: Refactor instruction information decoding
Date: Tue, 12 May 2026 01:14:50 +0000
Message-ID: <20260512011502.53072-10-chang.seok.bae@intel.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260512011502.53072-1-chang.seok.bae@intel.com>
References: <20260512011502.53072-1-chang.seok.bae@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

KVM currently decodes the VMX instruction information field using a mix
of open-coded bit manipulations and ad hoc helpers. Convert all decoding
to use helpers, centralizing the decoding logic ahead of the transition
to a wider instruction information field.

No functional change intended.

Signed-off-by: Chang S. Bae
---
 arch/x86/kvm/vmx/nested.c | 58 +++++++++++++++++++--------------------
 arch/x86/kvm/vmx/vmx.c    | 11 ++++----
 arch/x86/kvm/vmx/vmx.h    | 48 +++++++++++++++++++++++++++++---
 3 files changed, 78 insertions(+), 39 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 06c1d83a8082..bf2fe6a034aa 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -5229,7 +5229,7 @@ static void nested_vmx_triple_fault(struct kvm_vcpu *vcpu)
  * #UD, #GP, or #SS.
  */
 int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
-			u64 vmx_instruction_info, bool wr, int len, gva_t *ret)
+			u64 instr_info, bool wr, int len, gva_t *ret)
 {
 	gva_t off;
 	bool exn;
@@ -5237,20 +5237,20 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
 	/*
 	 * According to Vol. 3B, "Information for VM Exits Due to Instruction
-	 * Execution", on an exit, vmx_instruction_info holds most of the
-	 * addressing components of the operand. Only the displacement part
-	 * is put in exit_qualification (see 3B, "Basic VM-Exit Information").
+	 * Execution", on an exit, instr_info holds most of the addressing
+	 * components of the operand. Only the displacement part is put in
+	 * exit_qualification (see 3B, "Basic VM-Exit Information").
 	 * For how an actual address is calculated from all these components,
 	 * refer to Vol. 1, "Operand Addressing".
 	 */
-	int scaling = vmx_instruction_info & 3;
-	int addr_size = (vmx_instruction_info >> 7) & 7;
-	bool is_reg = vmx_instruction_info & (1u << 10);
-	int seg_reg = (vmx_instruction_info >> 15) & 7;
-	int index_reg = (vmx_instruction_info >> 18) & 0xf;
-	bool index_is_valid = !(vmx_instruction_info & (1u << 22));
-	int base_reg = (vmx_instruction_info >> 23) & 0xf;
-	bool base_is_valid = !(vmx_instruction_info & (1u << 27));
+	int scaling = vmx_get_instr_info_scaling(instr_info);
+	int addr_size = vmx_get_instr_info_addr_size(instr_info);
+	bool is_reg = vmx_get_instr_info_is_reg(instr_info);
+	int seg_reg = vmx_get_instr_info_seg_reg(instr_info);
+	int index_reg = vmx_get_instr_info_index_reg(instr_info);
+	bool index_is_valid = vmx_get_instr_info_index_is_valid(instr_info);
+	int base_reg = vmx_get_instr_info_base_reg(instr_info);
+	bool base_is_valid = vmx_get_instr_info_base_is_valid(instr_info);
 
 	if (is_reg) {
 		kvm_queue_exception(vcpu, UD_VECTOR);
@@ -5659,7 +5659,7 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
 		return 1;
 
 	/* Decode instruction info and find the field to read */
-	field = kvm_register_read(vcpu, (((instr_info) >> 28) & 0xf));
+	field = kvm_register_read(vcpu, vmx_get_instr_info_reg2(instr_info));
 
 	if (!nested_vmx_is_evmptr12_valid(vmx)) {
 		/*
@@ -5707,8 +5707,8 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
 	 * Note that the number of bits actually copied is 32 or 64 depending
 	 * on the guest's mode (32 or 64 bit), not on the given field's length.
 	 */
-	if (instr_info & BIT(10)) {
-		kvm_register_write(vcpu, (((instr_info) >> 3) & 0xf), value);
+	if (vmx_get_instr_info_is_reg(instr_info)) {
+		kvm_register_write(vcpu, vmx_get_instr_info_reg(instr_info), value);
 	} else {
 		len = is_64_bit_mode(vcpu) ? 8 : 4;
 		if (get_vmx_mem_address(vcpu, exit_qualification,
@@ -5781,8 +5781,8 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
 	    get_vmcs12(vcpu)->vmcs_link_pointer == INVALID_GPA))
 		return nested_vmx_failInvalid(vcpu);
 
-	if (instr_info & BIT(10))
-		value = kvm_register_read(vcpu, (((instr_info) >> 3) & 0xf));
+	if (vmx_get_instr_info_is_reg(instr_info))
+		value = kvm_register_read(vcpu, vmx_get_instr_info_reg(instr_info));
 	else {
 		len = is_64_bit_mode(vcpu) ? 8 : 4;
 		if (get_vmx_mem_address(vcpu, exit_qualification,
@@ -5793,7 +5793,7 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
 			return kvm_handle_memory_failure(vcpu, r, &e);
 	}
 
-	field = kvm_register_read(vcpu, (((instr_info) >> 28) & 0xf));
+	field = kvm_register_read(vcpu, vmx_get_instr_info_reg2(instr_info));
 
 	offset = get_vmcs12_field_offset(field);
 	if (offset < 0)
@@ -5969,8 +5969,8 @@ static int handle_vmptrst(struct kvm_vcpu *vcpu)
 static int handle_invept(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	u64 vmx_instruction_info, types;
 	unsigned long type, roots_to_free;
+	u64 instr_info, types;
 	struct kvm_mmu *mmu;
 	gva_t gva;
 	struct x86_exception e;
@@ -5989,8 +5989,8 @@ static int handle_invept(struct kvm_vcpu *vcpu)
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
 
-	vmx_instruction_info = vmx_get_instr_info();
-	gpr_index = vmx_get_instr_info_reg2(vmx_instruction_info);
+	instr_info = vmx_get_instr_info();
+	gpr_index = vmx_get_instr_info_reg2(instr_info);
 	type = kvm_register_read(vcpu, gpr_index);
 
 	types = (vmx->nested.msrs.ept_caps >> VMX_EPT_EXTENT_SHIFT) & 6;
@@ -6002,7 +6002,7 @@ static int handle_invept(struct kvm_vcpu *vcpu)
 	 * operand is read even if it isn't needed (e.g., for type==global)
 	 */
 	if (get_vmx_mem_address(vcpu, vmx_get_exit_qual(vcpu),
-			vmx_instruction_info, false, sizeof(operand), &gva))
+			instr_info, false, sizeof(operand), &gva))
 		return 1;
 	r = kvm_read_guest_virt(vcpu, gva, &operand, sizeof(operand), &e);
 	if (r != X86EMUL_CONTINUE)
@@ -6049,8 +6049,8 @@ static int handle_invept(struct kvm_vcpu *vcpu)
 static int handle_invvpid(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	u64 vmx_instruction_info;
 	unsigned long type, types;
+	u64 instr_info;
 	gva_t gva;
 	struct x86_exception e;
 	struct {
@@ -6070,8 +6070,8 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
 
-	vmx_instruction_info = vmx_get_instr_info();
-	gpr_index = vmx_get_instr_info_reg2(vmx_instruction_info);
+	instr_info = vmx_get_instr_info();
+	gpr_index = vmx_get_instr_info_reg2(instr_info);
 	type = kvm_register_read(vcpu, gpr_index);
 
 	types = (vmx->nested.msrs.vpid_caps &
@@ -6085,7 +6085,7 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
 	 * operand is read even if it isn't needed (e.g., for type==global)
 	 */
 	if (get_vmx_mem_address(vcpu, vmx_get_exit_qual(vcpu),
-			vmx_instruction_info, false, sizeof(operand), &gva))
+			instr_info, false, sizeof(operand), &gva))
 		return 1;
 	r = kvm_read_guest_virt(vcpu, gva, &operand, sizeof(operand), &e);
 	if (r != X86EMUL_CONTINUE)
@@ -6423,16 +6423,16 @@ static bool nested_vmx_exit_handled_encls(struct kvm_vcpu *vcpu,
 static bool nested_vmx_exit_handled_vmcs_access(struct kvm_vcpu *vcpu,
 					struct vmcs12 *vmcs12, gpa_t bitmap)
 {
-	u64 vmx_instruction_info;
 	unsigned long field;
+	u64 instr_info;
 	u8 b;
 
 	if (!nested_cpu_has_shadow_vmcs(vmcs12))
 		return true;
 
 	/* Decode instruction info and find the field to access */
-	vmx_instruction_info = vmx_get_instr_info();
-	field = kvm_register_read(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
+	instr_info = vmx_get_instr_info();
+	field = kvm_register_read(vcpu, vmx_get_instr_info_reg2(instr_info));
 
 	/* Out-of-range fields always cause a VM exit from L2 to L1 */
 	if (field >> 15)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6bf3b79c69f3..10724b7fd405 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6136,8 +6136,8 @@ static int handle_monitor_trap(struct kvm_vcpu *vcpu)
 
 static int handle_invpcid(struct kvm_vcpu *vcpu)
 {
-	u64 vmx_instruction_info;
 	unsigned long type;
+	u64 instr_info;
 	gva_t gva;
 	struct {
 		u64 pcid;
@@ -6150,16 +6150,15 @@ static int handle_invpcid(struct kvm_vcpu *vcpu)
 		return 1;
 	}
 
-	vmx_instruction_info = vmx_get_instr_info();
-	gpr_index = vmx_get_instr_info_reg2(vmx_instruction_info);
+	instr_info = vmx_get_instr_info();
+	gpr_index = vmx_get_instr_info_reg2(instr_info);
 	type = kvm_register_read(vcpu, gpr_index);
 
 	/* According to the Intel instruction reference, the memory operand
 	 * is read even if it isn't needed (e.g., for type==all)
 	 */
 	if (get_vmx_mem_address(vcpu, vmx_get_exit_qual(vcpu),
-			       vmx_instruction_info, false,
-			       sizeof(operand), &gva))
+			       instr_info, false, sizeof(operand), &gva))
 		return 1;
 
 	return kvm_handle_invpcid(vcpu, type, gva);
@@ -6301,7 +6300,7 @@ static int handle_notify(struct kvm_vcpu *vcpu)
 
 static int vmx_get_msr_imm_reg(struct kvm_vcpu *vcpu)
 {
-	return vmx_get_instr_info_reg(vmcs_read32(VMX_INSTRUCTION_INFO));
+	return vmx_get_instr_info_reg(vmx_get_instr_info());
 }
 
 static int handle_rdmsr_imm(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index aa4190620e82..345b10d28231 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -708,14 +708,54 @@ static inline u64 vmx_get_instr_info(void)
 	return vmcs_read32(VMX_INSTRUCTION_INFO);
 }
 
-static inline int vmx_get_instr_info_reg(u64 vmx_instr_info)
+static inline int vmx_get_instr_info_reg(u64 instr_info)
 {
-	return (vmx_instr_info >> 3) & 0xf;
+	return (instr_info >> 3) & 0xf;
 }
 
-static inline int vmx_get_instr_info_reg2(u64 vmx_instr_info)
+static inline int vmx_get_instr_info_reg2(u64 instr_info)
 {
-	return (vmx_instr_info >> 28) & 0xf;
+	return (instr_info >> 28) & 0xf;
+}
+
+static inline int vmx_get_instr_info_scaling(u64 instr_info)
+{
+	return instr_info & 3;
+}
+
+static inline int vmx_get_instr_info_addr_size(u64 instr_info)
+{
+	return (instr_info >> 7) & 7;
+}
+
+static inline bool vmx_get_instr_info_is_reg(u64 instr_info)
+{
+	return !!(instr_info & BIT(10));
+}
+
+static inline int vmx_get_instr_info_seg_reg(u64 instr_info)
+{
+	return (instr_info >> 15) & 7;
+}
+
+static inline int vmx_get_instr_info_index_reg(u64 instr_info)
+{
+	return (instr_info >> 18) & 0xf;
+}
+
+static inline bool vmx_get_instr_info_index_is_valid(u64 instr_info)
+{
+	return !(instr_info & BIT(22));
+}
+
+static inline int vmx_get_instr_info_base_reg(u64 instr_info)
+{
+	return (instr_info >> 23) & 0xf;
+}
+
+static inline bool vmx_get_instr_info_base_is_valid(u64 instr_info)
+{
+	return !(instr_info & BIT(27));
 }
 
 static inline bool vmx_can_use_ipiv(struct kvm_vcpu *vcpu)
-- 
2.51.0