From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: Mohamed Mediouni <mohamed@unpredictable.fr>
Subject: [PULL 14/19] whpx: i386: fetch segments on-demand
Date: Wed, 25 Mar 2026 17:44:48 +0100 [thread overview]
Message-ID: <20260325164453.72127-15-pbonzini@redhat.com> (raw)
In-Reply-To: <20260325164453.72127-1-pbonzini@redhat.com>
From: Mohamed Mediouni <mohamed@unpredictable.fr>
Instead of saving and restoring segments around each exit, fetch them
on demand. Rely on the freshly fetched state, or, when available, on
the VM exit context, instead of loading from memory.
Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
Link: https://lore.kernel.org/r/20260324151323.74473-12-mohamed@unpredictable.fr
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
target/i386/whpx/whpx-all.c | 71 +++++++++++++++++++++++++++++++------
1 file changed, 60 insertions(+), 11 deletions(-)
diff --git a/target/i386/whpx/whpx-all.c b/target/i386/whpx/whpx-all.c
index 71b33a632ac..6f232786422 100644
--- a/target/i386/whpx/whpx-all.c
+++ b/target/i386/whpx/whpx-all.c
@@ -868,21 +868,70 @@ static int whpx_handle_portio(CPUState *cpu,
return 0;
}
+static void whpx_segment_to_x86_descriptor(CPUState *cpu, WHV_X64_SEGMENT_REGISTER* reg,
+ struct x86_segment_descriptor *desc)
+{
+ uint32_t limit;
+ desc->g = reg->Granularity;
+
+ /*
+ * Hyper-V can return reg->Granularity == 0
+ * with a higher limit than 0xfffff.
+ *
+ * Detect that case, set desc->g
+ * and shift the limit accordingly.
+ */
+ if (!desc->g && reg->Limit <= 0xfffff) {
+ limit = reg->Limit;
+ } else {
+ limit = (reg->Limit >> 12);
+ desc->g = 1;
+ }
+
+ x86_set_segment_limit(desc, limit);
+ x86_set_segment_base(desc, reg->Base);
+
+ desc->type = reg->SegmentType;
+ desc->s = reg->NonSystemSegment;
+ desc->dpl = reg->DescriptorPrivilegeLevel;
+ desc->p = reg->Present;
+ desc->avl = reg->Available;
+ desc->l = reg->Long;
+ desc->db = reg->Default;
+}
+
+static void whpx_read_segment_descriptor(CPUState *cpu, WHV_X64_SEGMENT_REGISTER* reg,
+ X86Seg seg)
+{
+ AccelCPUState *vcpu = cpu->accel;
+ WHV_REGISTER_NAME reg_name = WHvX64RegisterEs + seg;
+ WHV_REGISTER_VALUE val;
+
+ if (seg == R_CS) {
+ *reg = vcpu->exit_ctx.VpContext.Cs;
+ return;
+ }
+ if (vcpu->exit_ctx.ExitReason == WHvRunVpExitReasonX64IoPortAccess) {
+ if (seg == R_DS) {
+ *reg = vcpu->exit_ctx.IoPortAccess.Ds;
+ return;
+ } else if (seg == R_ES) {
+ *reg = vcpu->exit_ctx.IoPortAccess.Es;
+ return;
+ }
+ }
+
+ whpx_get_reg(cpu, reg_name, &val);
+ *reg = val.Segment;
+}
+
static void read_segment_descriptor(CPUState *cpu,
struct x86_segment_descriptor *desc,
enum X86Seg seg_idx)
{
- bool ret;
- X86CPU *x86_cpu = X86_CPU(cpu);
- CPUX86State *env = &x86_cpu->env;
- SegmentCache *seg = &env->segs[seg_idx];
- x86_segment_selector sel = { .sel = seg->selector & 0xFFFF };
-
- ret = x86_read_segment_descriptor(cpu, desc, sel);
- if (ret == false) {
- error_report("failed to read segment descriptor");
- abort();
- }
+ WHV_X64_SEGMENT_REGISTER reg;
+ whpx_read_segment_descriptor(cpu, &reg, seg_idx);
+ whpx_segment_to_x86_descriptor(cpu, &reg, desc);
}
static bool is_protected_mode(CPUState *cpu)
--
2.53.0