qemu-devel.nongnu.org archive mirror
* [PATCH 0/2] i386/tcg: Protect SMM against malicious kernel via IPI & DR
@ 2025-09-25 10:30 YiFei Zhu
  2025-09-25 10:30 ` [PATCH 1/2] i386/cpu: Prevent delivering SIPI during SMM in TCG mode YiFei Zhu
  2025-09-25 10:30 ` [PATCH 2/2] i386/tcg/smm_helper: Properly apply DR values on SMM entry / exit YiFei Zhu
  0 siblings, 2 replies; 6+ messages in thread
From: YiFei Zhu @ 2025-09-25 10:30 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, Zhao Liu, Richard Henderson, Eduardo Habkost,
	qemu-stable, unvariant.winter, YiFei Zhu, YiFei Zhu

These two patches fix two separate TCG-only SMM vulnerabilities.
Neither is reproducible with KVM, so both fall under the
"Non-virtualization Use Case" [1].

I found the first patch's bug myself while developing SMM challenges
for CrewCTF. The second patch's bug was found by unvariant, a
participant in that CTF.

[1] https://www.qemu.org/docs/master/system/security.html#non-virtualization-use-case

YiFei Zhu (2):
  i386/cpu: Prevent delivering SIPI during SMM in TCG mode
  i386/tcg/smm_helper: Properly apply DR values on SMM entry / exit

 target/i386/cpu.c                   |  3 ++-
 target/i386/tcg/system/smm_helper.c | 10 +++++-----
 2 files changed, 7 insertions(+), 6 deletions(-)

-- 
2.51.0.536.g15c5d4f767-goog




* [PATCH 1/2] i386/cpu: Prevent delivering SIPI during SMM in TCG mode
  2025-09-25 10:30 [PATCH 0/2] i386/tcg: Protect SMM against malicious kernel via IPI & DR YiFei Zhu
@ 2025-09-25 10:30 ` YiFei Zhu
  2025-10-11  7:19   ` Paolo Bonzini
  2025-09-25 10:30 ` [PATCH 2/2] i386/tcg/smm_helper: Properly apply DR values on SMM entry / exit YiFei Zhu
  1 sibling, 1 reply; 6+ messages in thread
From: YiFei Zhu @ 2025-09-25 10:30 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, Zhao Liu, Richard Henderson, Eduardo Habkost,
	qemu-stable, unvariant.winter, YiFei Zhu, YiFei Zhu

A malicious kernel may control the instruction pointer in SMM in a
multi-processor VM by sending a sequence of IPIs via APIC:

CPU0			CPU1
IPI(CPU1, MODE_INIT)
			x86_cpu_exec_reset()
			apic_init_reset()
			s->wait_for_sipi = true
IPI(CPU1, MODE_SMI)
			do_smm_enter()
			env->hflags |= HF_SMM_MASK;
IPI(CPU1, MODE_STARTUP, vector)
			do_cpu_sipi()
			apic_sipi()
			/* s->wait_for_sipi check passes */
			cpu_x86_load_seg_cache_sipi(vector)

A different sequence, SMI INIT SIPI, is also buggy in TCG because
INIT is not blocked or latched during SMM. However, it does not
allow instruction-pointer control in the same way, because
x86_cpu_exec_reset() clears env->hflags and thereby exits SMM.

Fixes: a9bad65d2c1f ("target-i386: wake up processors that receive an SMI")
Signed-off-by: YiFei Zhu <zhuyifei@google.com>
---
 target/i386/cpu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 6d85149e6e..697cc4e63b 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -9762,7 +9762,8 @@ int x86_cpu_pending_interrupt(CPUState *cs, int interrupt_request)
     if (interrupt_request & CPU_INTERRUPT_POLL) {
         return CPU_INTERRUPT_POLL;
     }
-    if (interrupt_request & CPU_INTERRUPT_SIPI) {
+    if ((interrupt_request & CPU_INTERRUPT_SIPI) &&
+        !(env->hflags & HF_SMM_MASK)) {
         return CPU_INTERRUPT_SIPI;
     }
 
-- 
2.51.0.536.g15c5d4f767-goog




* [PATCH 2/2] i386/tcg/smm_helper: Properly apply DR values on SMM entry / exit
  2025-09-25 10:30 [PATCH 0/2] i386/tcg: Protect SMM against malicious kernel via IPI & DR YiFei Zhu
  2025-09-25 10:30 ` [PATCH 1/2] i386/cpu: Prevent delivering SIPI during SMM in TCG mode YiFei Zhu
@ 2025-09-25 10:30 ` YiFei Zhu
  2025-10-11  7:22   ` Paolo Bonzini
  1 sibling, 1 reply; 6+ messages in thread
From: YiFei Zhu @ 2025-09-25 10:30 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, Zhao Liu, Richard Henderson, Eduardo Habkost,
	qemu-stable, unvariant.winter, YiFei Zhu, YiFei Zhu

do_smm_enter() and helper_rsm() set env->dr[], but do not sync the
values via cpu_x86_update_dr7(). A malicious kernel may control the
instruction pointer in SMM by setting a breakpoint on the SMI
entry point: after do_smm_enter(), cpu->breakpoints still contains
the stale breakpoint, and because the IDT is not reloaded on SMI
entry, the debug exception handler controlled by the malicious
kernel is invoked.

Fixes: 01df040b5247 ("x86: Debug register emulation (Jan Kiszka)")
Reported-by: unvariant.winter@gmail.com
Signed-off-by: YiFei Zhu <zhuyifei@google.com>
---
 target/i386/tcg/system/smm_helper.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/target/i386/tcg/system/smm_helper.c b/target/i386/tcg/system/smm_helper.c
index 251eb7856c..fb028a8272 100644
--- a/target/i386/tcg/system/smm_helper.c
+++ b/target/i386/tcg/system/smm_helper.c
@@ -168,7 +168,7 @@ void do_smm_enter(X86CPU *cpu)
                        env->cr[0] & ~(CR0_PE_MASK | CR0_EM_MASK | CR0_TS_MASK |
                                       CR0_PG_MASK));
     cpu_x86_update_cr4(env, 0);
-    env->dr[7] = 0x00000400;
+    helper_set_dr(env, 7, 0x00000400);
 
     cpu_x86_load_seg_cache(env, R_CS, (env->smbase >> 4) & 0xffff, env->smbase,
                            0xffffffff,
@@ -233,8 +233,8 @@ void helper_rsm(CPUX86State *env)
     env->eip = x86_ldq_phys(cs, sm_state + 0x7f78);
     cpu_load_eflags(env, x86_ldl_phys(cs, sm_state + 0x7f70),
                     ~(CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C | DF_MASK));
-    env->dr[6] = x86_ldl_phys(cs, sm_state + 0x7f68);
-    env->dr[7] = x86_ldl_phys(cs, sm_state + 0x7f60);
+    helper_set_dr(env, 6, x86_ldl_phys(cs, sm_state + 0x7f68));
+    helper_set_dr(env, 7, x86_ldl_phys(cs, sm_state + 0x7f60));
 
     cpu_x86_update_cr4(env, x86_ldl_phys(cs, sm_state + 0x7f48));
     cpu_x86_update_cr3(env, x86_ldq_phys(cs, sm_state + 0x7f50));
@@ -268,8 +268,8 @@ void helper_rsm(CPUX86State *env)
     env->regs[R_EDX] = x86_ldl_phys(cs, sm_state + 0x7fd8);
     env->regs[R_ECX] = x86_ldl_phys(cs, sm_state + 0x7fd4);
     env->regs[R_EAX] = x86_ldl_phys(cs, sm_state + 0x7fd0);
-    env->dr[6] = x86_ldl_phys(cs, sm_state + 0x7fcc);
-    env->dr[7] = x86_ldl_phys(cs, sm_state + 0x7fc8);
+    helper_set_dr(env, 6, x86_ldl_phys(cs, sm_state + 0x7fcc));
+    helper_set_dr(env, 7, x86_ldl_phys(cs, sm_state + 0x7fc8));
 
     env->tr.selector = x86_ldl_phys(cs, sm_state + 0x7fc4) & 0xffff;
     env->tr.base = x86_ldl_phys(cs, sm_state + 0x7f64);
-- 
2.51.0.536.g15c5d4f767-goog




* Re: [PATCH 1/2] i386/cpu: Prevent delivering SIPI during SMM in TCG mode
  2025-09-25 10:30 ` [PATCH 1/2] i386/cpu: Prevent delivering SIPI during SMM in TCG mode YiFei Zhu
@ 2025-10-11  7:19   ` Paolo Bonzini
  2025-10-11  7:48     ` YiFei Zhu
  0 siblings, 1 reply; 6+ messages in thread
From: Paolo Bonzini @ 2025-10-11  7:19 UTC (permalink / raw)
  To: YiFei Zhu, qemu-devel
  Cc: Zhao Liu, Richard Henderson, Eduardo Habkost, qemu-stable,
	unvariant.winter, YiFei Zhu

On 9/25/25 12:30, YiFei Zhu wrote:
> A malicious kernel may control the instruction pointer in SMM in a
> multi-processor VM by sending a sequence of IPIs via APIC:
> 
> CPU0			CPU1
> IPI(CPU1, MODE_INIT)
> 			x86_cpu_exec_reset()
> 			apic_init_reset()
> 			s->wait_for_sipi = true
> IPI(CPU1, MODE_SMI)
> 			do_smm_enter()
> 			env->hflags |= HF_SMM_MASK;
> IPI(CPU1, MODE_STARTUP, vector)
> 			do_cpu_sipi()
> 			apic_sipi()
> 			/* s->wait_for_sipi check passes */
> 			cpu_x86_load_seg_cache_sipi(vector)
> 
> A different sequence, SMI INIT SIPI, is also buggy in TCG because
> INIT is not blocked or latched during SMM. However, it is not
> vulnerable to an instruction pointer control in the same way because
> x86_cpu_exec_reset clears env->hflags, exiting SMM.

Thanks for the reports!  For this bug, I prefer to have the CPU eat the 
SIPI instead of latching it:

diff --git a/hw/intc/apic.c b/hw/intc/apic.c
index 6d7859640c2..c7680338563 100644
--- a/hw/intc/apic.c
+++ b/hw/intc/apic.c
@@ -646,8 +646,6 @@ void apic_sipi(DeviceState *dev)
  {
      APICCommonState *s = APIC(dev);

-    cpu_reset_interrupt(CPU(s->cpu), CPU_INTERRUPT_SIPI);
-
      if (!s->wait_for_sipi)
          return;
      cpu_x86_load_seg_cache_sipi(s->cpu, s->sipi_vector);
diff --git a/target/i386/helper.c b/target/i386/helper.c
index 651041ccfa6..a96834c4457 100644
--- a/target/i386/helper.c
+++ b/target/i386/helper.c
@@ -621,6 +621,9 @@ void do_cpu_init(X86CPU *cpu)

  void do_cpu_sipi(X86CPU *cpu)
  {
+    if (cpu->env.hflags & HF_SMM_MASK) {
+        return;
+    }
      apic_sipi(cpu->apic_state);
  }

diff --git a/target/i386/tcg/system/seg_helper.c b/target/i386/tcg/system/seg_helper.c
index 38072e51d72..8c7856be81e 100644
--- a/target/i386/tcg/system/seg_helper.c
+++ b/target/i386/tcg/system/seg_helper.c
@@ -182,6 +182,7 @@ bool x86_cpu_exec_interrupt(
          apic_poll_irq(cpu->apic_state);
          break;
      case CPU_INTERRUPT_SIPI:
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_SIPI);
          do_cpu_sipi(cpu);
          break;
      case CPU_INTERRUPT_SMI:


Fixing INIT is harder, because it requires splitting CPU_INTERRUPT_INIT 
and CPU_INTERRUPT_RESET, but I'll take a look.

Paolo

> Fixes: a9bad65d2c1f ("target-i386: wake up processors that receive an SMI")
> Signed-off-by: YiFei Zhu <zhuyifei@google.com>
> ---
>   target/i386/cpu.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> index 6d85149e6e..697cc4e63b 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -9762,7 +9762,8 @@ int x86_cpu_pending_interrupt(CPUState *cs, int interrupt_request)
>       if (interrupt_request & CPU_INTERRUPT_POLL) {
>           return CPU_INTERRUPT_POLL;
>       }
> -    if (interrupt_request & CPU_INTERRUPT_SIPI) {
> +    if ((interrupt_request & CPU_INTERRUPT_SIPI) &&
> +        !(env->hflags & HF_SMM_MASK)) {
>           return CPU_INTERRUPT_SIPI;
>       }
>   





* Re: [PATCH 2/2] i386/tcg/smm_helper: Properly apply DR values on SMM entry / exit
  2025-09-25 10:30 ` [PATCH 2/2] i386/tcg/smm_helper: Properly apply DR values on SMM entry / exit YiFei Zhu
@ 2025-10-11  7:22   ` Paolo Bonzini
  0 siblings, 0 replies; 6+ messages in thread
From: Paolo Bonzini @ 2025-10-11  7:22 UTC (permalink / raw)
  To: YiFei Zhu, qemu-devel
  Cc: Zhao Liu, Richard Henderson, Eduardo Habkost, qemu-stable,
	unvariant.winter, YiFei Zhu

On 9/25/25 12:30, YiFei Zhu wrote:
> do_smm_enter and helper_rsm sets the env->dr, but does not sync the
> values with cpu_x86_update_dr7. A malicious kernel may control the
> instruction pointer in SMM by setting a breakpoint on the SMI
> entry point, and after do_smm_enter cpu->breakpoints contains the
> stale breakpoint; and because IDT is not reloaded upon SMI entry,
> the debug exception handler controlled by the malicious kernel
> is invoked.
> 
> Fixes: 01df040b5247 ("x86: Debug register emulation (Jan Kiszka)")
> Reported-by: unvariant.winter@gmail.com
> Signed-off-by: YiFei Zhu <zhuyifei@google.com>

Applied, thanks!

Paolo

> ---
>   target/i386/tcg/system/smm_helper.c | 10 +++++-----
>   1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/target/i386/tcg/system/smm_helper.c b/target/i386/tcg/system/smm_helper.c
> index 251eb7856c..fb028a8272 100644
> --- a/target/i386/tcg/system/smm_helper.c
> +++ b/target/i386/tcg/system/smm_helper.c
> @@ -168,7 +168,7 @@ void do_smm_enter(X86CPU *cpu)
>                          env->cr[0] & ~(CR0_PE_MASK | CR0_EM_MASK | CR0_TS_MASK |
>                                         CR0_PG_MASK));
>       cpu_x86_update_cr4(env, 0);
> -    env->dr[7] = 0x00000400;
> +    helper_set_dr(env, 7, 0x00000400);
>   
>       cpu_x86_load_seg_cache(env, R_CS, (env->smbase >> 4) & 0xffff, env->smbase,
>                              0xffffffff,
> @@ -233,8 +233,8 @@ void helper_rsm(CPUX86State *env)
>       env->eip = x86_ldq_phys(cs, sm_state + 0x7f78);
>       cpu_load_eflags(env, x86_ldl_phys(cs, sm_state + 0x7f70),
>                       ~(CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C | DF_MASK));
> -    env->dr[6] = x86_ldl_phys(cs, sm_state + 0x7f68);
> -    env->dr[7] = x86_ldl_phys(cs, sm_state + 0x7f60);
> +    helper_set_dr(env, 6, x86_ldl_phys(cs, sm_state + 0x7f68));
> +    helper_set_dr(env, 7, x86_ldl_phys(cs, sm_state + 0x7f60));
>   
>       cpu_x86_update_cr4(env, x86_ldl_phys(cs, sm_state + 0x7f48));
>       cpu_x86_update_cr3(env, x86_ldq_phys(cs, sm_state + 0x7f50));
> @@ -268,8 +268,8 @@ void helper_rsm(CPUX86State *env)
>       env->regs[R_EDX] = x86_ldl_phys(cs, sm_state + 0x7fd8);
>       env->regs[R_ECX] = x86_ldl_phys(cs, sm_state + 0x7fd4);
>       env->regs[R_EAX] = x86_ldl_phys(cs, sm_state + 0x7fd0);
> -    env->dr[6] = x86_ldl_phys(cs, sm_state + 0x7fcc);
> -    env->dr[7] = x86_ldl_phys(cs, sm_state + 0x7fc8);
> +    helper_set_dr(env, 6, x86_ldl_phys(cs, sm_state + 0x7fcc));
> +    helper_set_dr(env, 7, x86_ldl_phys(cs, sm_state + 0x7fc8));
>   
>       env->tr.selector = x86_ldl_phys(cs, sm_state + 0x7fc4) & 0xffff;
>       env->tr.base = x86_ldl_phys(cs, sm_state + 0x7f64);




* Re: [PATCH 1/2] i386/cpu: Prevent delivering SIPI during SMM in TCG mode
  2025-10-11  7:19   ` Paolo Bonzini
@ 2025-10-11  7:48     ` YiFei Zhu
  0 siblings, 0 replies; 6+ messages in thread
From: YiFei Zhu @ 2025-10-11  7:48 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: YiFei Zhu, qemu-devel, Zhao Liu, Richard Henderson,
	Eduardo Habkost, qemu-stable, unvariant.winter

On Sat, Oct 11, 2025 at 12:19 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 9/25/25 12:30, YiFei Zhu wrote:
> > A malicious kernel may control the instruction pointer in SMM in a
> > multi-processor VM by sending a sequence of IPIs via APIC:
> >
> > CPU0                  CPU1
> > IPI(CPU1, MODE_INIT)
> >                       x86_cpu_exec_reset()
> >                       apic_init_reset()
> >                       s->wait_for_sipi = true
> > IPI(CPU1, MODE_SMI)
> >                       do_smm_enter()
> >                       env->hflags |= HF_SMM_MASK;
> > IPI(CPU1, MODE_STARTUP, vector)
> >                       do_cpu_sipi()
> >                       apic_sipi()
> >                       /* s->wait_for_sipi check passes */
> >                       cpu_x86_load_seg_cache_sipi(vector)
> >
> > A different sequence, SMI INIT SIPI, is also buggy in TCG because
> > INIT is not blocked or latched during SMM. However, it is not
> > vulnerable to an instruction pointer control in the same way because
> > x86_cpu_exec_reset clears env->hflags, exiting SMM.
>
> Thanks for the reports!  For this bug, I prefer to have the CPU eat the
> SIPI instead of latching them:

SGTM

> diff --git a/hw/intc/apic.c b/hw/intc/apic.c
> index 6d7859640c2..c7680338563 100644
> --- a/hw/intc/apic.c
> +++ b/hw/intc/apic.c
> @@ -646,8 +646,6 @@ void apic_sipi(DeviceState *dev)
>   {
>       APICCommonState *s = APIC(dev);
>
> -    cpu_reset_interrupt(CPU(s->cpu), CPU_INTERRUPT_SIPI);
> -
>       if (!s->wait_for_sipi)
>           return;
>       cpu_x86_load_seg_cache_sipi(s->cpu, s->sipi_vector);
> diff --git a/target/i386/helper.c b/target/i386/helper.c
> index 651041ccfa6..a96834c4457 100644
> --- a/target/i386/helper.c
> +++ b/target/i386/helper.c
> @@ -621,6 +621,9 @@ void do_cpu_init(X86CPU *cpu)
>
>   void do_cpu_sipi(X86CPU *cpu)
>   {
> +    if (cpu->env.hflags & HF_SMM_MASK) {
> +        return;
> +    }
>       apic_sipi(cpu->apic_state);
>   }
>
> diff --git a/target/i386/tcg/system/seg_helper.c b/target/i386/tcg/system/seg_helper.c
> index 38072e51d72..8c7856be81e 100644
> --- a/target/i386/tcg/system/seg_helper.c
> +++ b/target/i386/tcg/system/seg_helper.c
> @@ -182,6 +182,7 @@ bool x86_cpu_exec_interrupt(
>           apic_poll_irq(cpu->apic_state);
>           break;
>       case CPU_INTERRUPT_SIPI:
> +        cpu_reset_interrupt(cs, CPU_INTERRUPT_SIPI);
>           do_cpu_sipi(cpu);
>           break;
>       case CPU_INTERRUPT_SMI:
>
>
> Fixing INIT is harder, because it requires splitting CPU_INTERRUPT_INIT
> and CPU_INTERRUPT_RESET, but I'll take a look.

Thanks! And yeah, that's the main reason I didn't attempt to fix
INIT. It seems a lot more complex to fix, for something that looks
much less exploitable to me.

YiFei Zhu

> Paolo
>
> > Fixes: a9bad65d2c1f ("target-i386: wake up processors that receive an SMI")
> > Signed-off-by: YiFei Zhu <zhuyifei@google.com>
> > ---
> >   target/i386/cpu.c | 3 ++-
> >   1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> > index 6d85149e6e..697cc4e63b 100644
> > --- a/target/i386/cpu.c
> > +++ b/target/i386/cpu.c
> > @@ -9762,7 +9762,8 @@ int x86_cpu_pending_interrupt(CPUState *cs, int interrupt_request)
> >       if (interrupt_request & CPU_INTERRUPT_POLL) {
> >           return CPU_INTERRUPT_POLL;
> >       }
> > -    if (interrupt_request & CPU_INTERRUPT_SIPI) {
> > +    if ((interrupt_request & CPU_INTERRUPT_SIPI) &&
> > +        !(env->hflags & HF_SMM_MASK)) {
> >           return CPU_INTERRUPT_SIPI;
> >       }
> >
>
>

