From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, mcb30@ipxe.org, qemu-stable@nongnu.org
Subject: [PATCH 2/5] target/i386: check validity of VMCB addresses
Date: Fri, 22 Dec 2023 18:59:48 +0100
Message-ID: <20231222175951.172669-3-pbonzini@redhat.com>
In-Reply-To: <20231222175951.172669-1-pbonzini@redhat.com>

MSR_VM_HSAVE_PA bits 0-11 are reserved, as are the bits above the
maximum physical address width of the processor. Setting them to 1
causes a #GP (see "15.30.4 VM_HSAVE_PA MSR" in the AMD manual).

The same is true of VMCB addresses passed to VMRUN/VMLOAD/VMSAVE,
even though the manual is not clear on that.

Cc: qemu-stable@nongnu.org
Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 target/i386/tcg/sysemu/misc_helper.c |  3 +++
 target/i386/tcg/sysemu/svm_helper.c  | 27 +++++++++++++++++++++------
 2 files changed, 24 insertions(+), 6 deletions(-)
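
A note for reviewers (not meant for the commit message): the validity
test used at all four sites below reduces to a single mask covering the
page-offset bits 0-11 plus every bit at or above the physical address
width. A minimal standalone sketch of that computation follows;
phys_bits = 48 and the helper name vmcb_addr_is_invalid are assumptions
for illustration only, not QEMU code:

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: the same mask as the patch builds inline.
 * Assumes phys_bits < 64, as on real CPUs.
 */
static int vmcb_addr_is_invalid(uint64_t addr, unsigned phys_bits)
{
    /* Bits 0-11 are reserved, as is everything at or above phys_bits. */
    uint64_t reserved = 0xfff | (~0ULL << phys_bits);
    return (addr & reserved) != 0;
}

int main(void)
{
    printf("%d\n", vmcb_addr_is_invalid(0x123000, 48));   /* 0: 4K-aligned, in range */
    printf("%d\n", vmcb_addr_is_invalid(0x123456, 48));   /* 1: offset bits set -> #GP */
    printf("%d\n", vmcb_addr_is_invalid(1ULL << 52, 48)); /* 1: above phys_bits -> #GP */
    return 0;
}

In the patch, a set reserved bit raises EXCP0D_GPF directly for
VMRUN/VMLOAD/VMSAVE, and takes the error path for WRMSR.
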
diff --git a/target/i386/tcg/sysemu/misc_helper.c b/target/i386/tcg/sysemu/misc_helper.c
index e1528b7f80b..1901712ecef 100644
--- a/target/i386/tcg/sysemu/misc_helper.c
+++ b/target/i386/tcg/sysemu/misc_helper.c
@@ -201,6 +201,9 @@ void helper_wrmsr(CPUX86State *env)
         tlb_flush(cs);
         break;
     case MSR_VM_HSAVE_PA:
+        if (val & (0xfff | ((~0ULL) << env_archcpu(env)->phys_bits))) {
+            goto error;
+        }
         env->vm_hsave = val;
         break;
 #ifdef TARGET_X86_64
diff --git a/target/i386/tcg/sysemu/svm_helper.c b/target/i386/tcg/sysemu/svm_helper.c
index 32ff0dbb13c..5d6de2294fa 100644
--- a/target/i386/tcg/sysemu/svm_helper.c
+++ b/target/i386/tcg/sysemu/svm_helper.c
@@ -164,14 +164,19 @@ void helper_vmrun(CPUX86State *env, int aflag, int next_eip_addend)
     uint64_t new_cr3;
     uint64_t new_cr4;
 
-    cpu_svm_check_intercept_param(env, SVM_EXIT_VMRUN, 0, GETPC());
-
     if (aflag == 2) {
         addr = env->regs[R_EAX];
     } else {
         addr = (uint32_t)env->regs[R_EAX];
     }
 
+    /* Exceptions are checked before the intercept. */
+    if (addr & (0xfff | ((~0ULL) << env_archcpu(env)->phys_bits))) {
+        raise_exception_err_ra(env, EXCP0D_GPF, 0, GETPC());
+    }
+
+    cpu_svm_check_intercept_param(env, SVM_EXIT_VMRUN, 0, GETPC());
+
     qemu_log_mask(CPU_LOG_TB_IN_ASM, "vmrun! " TARGET_FMT_lx "\n", addr);
 
     env->vm_vmcb = addr;
@@ -463,14 +468,19 @@ void helper_vmload(CPUX86State *env, int aflag)
     int mmu_idx = MMU_PHYS_IDX;
     target_ulong addr;
 
-    cpu_svm_check_intercept_param(env, SVM_EXIT_VMLOAD, 0, GETPC());
-
     if (aflag == 2) {
         addr = env->regs[R_EAX];
     } else {
         addr = (uint32_t)env->regs[R_EAX];
     }
 
+    /* Exceptions are checked before the intercept. */
+    if (addr & (0xfff | ((~0ULL) << env_archcpu(env)->phys_bits))) {
+        raise_exception_err_ra(env, EXCP0D_GPF, 0, GETPC());
+    }
+
+    cpu_svm_check_intercept_param(env, SVM_EXIT_VMLOAD, 0, GETPC());
+
     if (virtual_vm_load_save_enabled(env, SVM_EXIT_VMLOAD, GETPC())) {
         mmu_idx = MMU_NESTED_IDX;
     }
@@ -519,14 +529,19 @@ void helper_vmsave(CPUX86State *env, int aflag)
     int mmu_idx = MMU_PHYS_IDX;
     target_ulong addr;
 
-    cpu_svm_check_intercept_param(env, SVM_EXIT_VMSAVE, 0, GETPC());
-
     if (aflag == 2) {
         addr = env->regs[R_EAX];
     } else {
         addr = (uint32_t)env->regs[R_EAX];
     }
 
+    /* Exceptions are checked before the intercept. */
+    if (addr & (0xfff | ((~0ULL) << env_archcpu(env)->phys_bits))) {
+        raise_exception_err_ra(env, EXCP0D_GPF, 0, GETPC());
+    }
+
+    cpu_svm_check_intercept_param(env, SVM_EXIT_VMSAVE, 0, GETPC());
+
     if (virtual_vm_load_save_enabled(env, SVM_EXIT_VMSAVE, GETPC())) {
         mmu_idx = MMU_NESTED_IDX;
     }
--
2.43.0