public inbox for virtualization@lists.linux-foundation.org
* [PATCH RESEND 2/2] x86/paravirt: Use XOR r32,r32 to clear register in pv_vcpu_is_preempted()
  2026-01-05  9:39 [PATCH RESEND 1/2] x86/paravirt: Remove trailing semicolons from alternative asm templates Uros Bizjak
@ 2026-01-05  9:39 ` Uros Bizjak
  2026-01-07  9:54   ` H. Peter Anvin
  2026-01-07 20:24   ` Alexey Makhalov
  0 siblings, 2 replies; 5+ messages in thread
From: Uros Bizjak @ 2026-01-05  9:39 UTC (permalink / raw)
  To: bcm-kernel-feedback-list, virtualization, x86, linux-kernel
  Cc: Uros Bizjak, Juergen Gross, Ajay Kaher, Alexey Makhalov,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	H. Peter Anvin

x86_64 zero-extends the result of 32-bit operations, so for 64-bit
operands XOR r32,r32 is functionally equivalent to XOR r64,r64, but
avoids a REX prefix byte when legacy registers are used.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Cc: Ajay Kaher <ajay.kaher@broadcom.com>
Cc: Alexey Makhalov <alexey.makhalov@broadcom.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
 arch/x86/include/asm/paravirt.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 4f6ec60b4cb3..59aec695ae5f 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -577,7 +577,7 @@ static __always_inline void pv_kick(int cpu)
 static __always_inline bool pv_vcpu_is_preempted(long cpu)
 {
 	return PVOP_ALT_CALLEE1(bool, lock.vcpu_is_preempted, cpu,
-				"xor %%" _ASM_AX ", %%" _ASM_AX,
+				"xor %%eax, %%eax",
 				ALT_NOT(X86_FEATURE_VCPUPREEMPT));
 }
 
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH RESEND 2/2] x86/paravirt: Use XOR r32,r32 to clear register in pv_vcpu_is_preempted()
  2026-01-05  9:39 ` [PATCH RESEND 2/2] x86/paravirt: Use XOR r32,r32 to clear register in pv_vcpu_is_preempted() Uros Bizjak
@ 2026-01-07  9:54   ` H. Peter Anvin
  2026-01-07 20:24   ` Alexey Makhalov
  1 sibling, 0 replies; 5+ messages in thread
From: H. Peter Anvin @ 2026-01-07  9:54 UTC (permalink / raw)
  To: Uros Bizjak, bcm-kernel-feedback-list, virtualization, x86,
	linux-kernel
  Cc: Juergen Gross, Ajay Kaher, Alexey Makhalov, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen

On January 5, 2026 1:39:07 AM PST, Uros Bizjak <ubizjak@gmail.com> wrote:
>x86_64 zero extends 32bit operations, so for 64bit operands,
>XOR r32,r32 is functionally equal to XOR r64,r64, but avoids
>a REX prefix byte when legacy registers are used.
>
>Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
>Reviewed-by: Juergen Gross <jgross@suse.com>
>Cc: Ajay Kaher <ajay.kaher@broadcom.com>
>Cc: Alexey Makhalov <alexey.makhalov@broadcom.com>
>Cc: Thomas Gleixner <tglx@linutronix.de>
>Cc: Ingo Molnar <mingo@kernel.org>
>Cc: Borislav Petkov <bp@alien8.de>
>Cc: Dave Hansen <dave.hansen@linux.intel.com>
>Cc: "H. Peter Anvin" <hpa@zytor.com>
>---
> arch/x86/include/asm/paravirt.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
>index 4f6ec60b4cb3..59aec695ae5f 100644
>--- a/arch/x86/include/asm/paravirt.h
>+++ b/arch/x86/include/asm/paravirt.h
>@@ -577,7 +577,7 @@ static __always_inline void pv_kick(int cpu)
> static __always_inline bool pv_vcpu_is_preempted(long cpu)
> {
> 	return PVOP_ALT_CALLEE1(bool, lock.vcpu_is_preempted, cpu,
>-				"xor %%" _ASM_AX ", %%" _ASM_AX,
>+				"xor %%eax, %%eax",
> 				ALT_NOT(X86_FEATURE_VCPUPREEMPT));
> }
> 

Acked-by: H. Peter Anvin (Intel) <hpa@zytor.com>

* Re: [PATCH RESEND 2/2] x86/paravirt: Use XOR r32,r32 to clear register in pv_vcpu_is_preempted()
  2026-01-05  9:39 ` [PATCH RESEND 2/2] x86/paravirt: Use XOR r32,r32 to clear register in pv_vcpu_is_preempted() Uros Bizjak
  2026-01-07  9:54   ` H. Peter Anvin
@ 2026-01-07 20:24   ` Alexey Makhalov
  1 sibling, 0 replies; 5+ messages in thread
From: Alexey Makhalov @ 2026-01-07 20:24 UTC (permalink / raw)
  To: Uros Bizjak, bcm-kernel-feedback-list, virtualization, x86,
	linux-kernel
  Cc: Juergen Gross, Ajay Kaher, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin



On 1/5/26 1:39 AM, Uros Bizjak wrote:
> x86_64 zero extends 32bit operations, so for 64bit operands,
> XOR r32,r32 is functionally equal to XOR r64,r64, but avoids
> a REX prefix byte when legacy registers are used.
> 
> Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>
> Cc: Ajay Kaher <ajay.kaher@broadcom.com>
> Cc: Alexey Makhalov <alexey.makhalov@broadcom.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> ---
>   arch/x86/include/asm/paravirt.h | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index 4f6ec60b4cb3..59aec695ae5f 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -577,7 +577,7 @@ static __always_inline void pv_kick(int cpu)
>   static __always_inline bool pv_vcpu_is_preempted(long cpu)
>   {
>   	return PVOP_ALT_CALLEE1(bool, lock.vcpu_is_preempted, cpu,
> -				"xor %%" _ASM_AX ", %%" _ASM_AX,
> +				"xor %%eax, %%eax",
>   				ALT_NOT(X86_FEATURE_VCPUPREEMPT));
>   }
>   

Acked-by: Alexey Makhalov <alexey.makhalov@broadcom.com>

* [PATCH RESEND 1/2]  x86/paravirt: Remove trailing semicolons from alternative asm templates
@ 2026-01-14 21:18 Uros Bizjak
  2026-01-14 21:18 ` [PATCH RESEND 2/2] x86/paravirt: Use XOR r32,r32 to clear register in pv_vcpu_is_preempted() Uros Bizjak
  0 siblings, 1 reply; 5+ messages in thread
From: Uros Bizjak @ 2026-01-14 21:18 UTC (permalink / raw)
  To: bcm-kernel-feedback-list, virtualization, x86, linux-kernel
  Cc: Uros Bizjak, Juergen Gross, Alexey Makhalov, Ajay Kaher,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	H. Peter Anvin

GCC inline asm treats semicolons as instruction separators, so a
semicolon after the last instruction is not required.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Acked-by: Alexey Makhalov <alexey.makhalov@broadcom.com>
Cc: Ajay Kaher <ajay.kaher@broadcom.com>
Cc: Thomas Gleixner <tglx@kernel.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
 arch/x86/include/asm/paravirt-spinlock.h |  4 ++--
 arch/x86/include/asm/paravirt.h          | 16 ++++++++--------
 arch/x86/include/asm/paravirt_types.h    |  2 +-
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/paravirt-spinlock.h b/arch/x86/include/asm/paravirt-spinlock.h
index a5011ef3a6cc..458b888aba84 100644
--- a/arch/x86/include/asm/paravirt-spinlock.h
+++ b/arch/x86/include/asm/paravirt-spinlock.h
@@ -38,14 +38,14 @@ static __always_inline void pv_queued_spin_lock_slowpath(struct qspinlock *lock,
 static __always_inline void pv_queued_spin_unlock(struct qspinlock *lock)
 {
 	PVOP_ALT_VCALLEE1(pv_ops_lock, queued_spin_unlock, lock,
-			  "movb $0, (%%" _ASM_ARG1 ");",
+			  "movb $0, (%%" _ASM_ARG1 ")",
 			  ALT_NOT(X86_FEATURE_PVUNLOCK));
 }
 
 static __always_inline bool pv_vcpu_is_preempted(long cpu)
 {
 	return PVOP_ALT_CALLEE1(bool, pv_ops_lock, vcpu_is_preempted, cpu,
-				"xor %%" _ASM_AX ", %%" _ASM_AX ";",
+				"xor %%" _ASM_AX ", %%" _ASM_AX,
 				ALT_NOT(X86_FEATURE_VCPUPREEMPT));
 }
 
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index b21072af731d..3d0b92a8a557 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -117,7 +117,7 @@ static inline void write_cr0(unsigned long x)
 static __always_inline unsigned long read_cr2(void)
 {
 	return PVOP_ALT_CALLEE0(unsigned long, pv_ops, mmu.read_cr2,
-				"mov %%cr2, %%rax;", ALT_NOT_XEN);
+				"mov %%cr2, %%rax", ALT_NOT_XEN);
 }
 
 static __always_inline void write_cr2(unsigned long x)
@@ -128,7 +128,7 @@ static __always_inline void write_cr2(unsigned long x)
 static inline unsigned long __read_cr3(void)
 {
 	return PVOP_ALT_CALL0(unsigned long, pv_ops, mmu.read_cr3,
-			      "mov %%cr3, %%rax;", ALT_NOT_XEN);
+			      "mov %%cr3, %%rax", ALT_NOT_XEN);
 }
 
 static inline void write_cr3(unsigned long x)
@@ -516,18 +516,18 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 static __always_inline unsigned long arch_local_save_flags(void)
 {
-	return PVOP_ALT_CALLEE0(unsigned long, pv_ops, irq.save_fl, "pushf; pop %%rax;",
+	return PVOP_ALT_CALLEE0(unsigned long, pv_ops, irq.save_fl, "pushf; pop %%rax",
 				ALT_NOT_XEN);
 }
 
 static __always_inline void arch_local_irq_disable(void)
 {
-	PVOP_ALT_VCALLEE0(pv_ops, irq.irq_disable, "cli;", ALT_NOT_XEN);
+	PVOP_ALT_VCALLEE0(pv_ops, irq.irq_disable, "cli", ALT_NOT_XEN);
 }
 
 static __always_inline void arch_local_irq_enable(void)
 {
-	PVOP_ALT_VCALLEE0(pv_ops, irq.irq_enable, "sti;", ALT_NOT_XEN);
+	PVOP_ALT_VCALLEE0(pv_ops, irq.irq_enable, "sti", ALT_NOT_XEN);
 }
 
 static __always_inline unsigned long arch_local_irq_save(void)
@@ -553,9 +553,9 @@ static __always_inline unsigned long arch_local_irq_save(void)
 	call PARA_INDIRECT(pv_ops+PV_IRQ_save_fl);
 .endm
 
-#define SAVE_FLAGS ALTERNATIVE_2 "PARA_IRQ_save_fl;",			\
-				 "ALT_CALL_INSTR;", ALT_CALL_ALWAYS,	\
-				 "pushf; pop %rax;", ALT_NOT_XEN
+#define SAVE_FLAGS ALTERNATIVE_2 "PARA_IRQ_save_fl",			\
+				 "ALT_CALL_INSTR", ALT_CALL_ALWAYS,	\
+				 "pushf; pop %rax", ALT_NOT_XEN
 #endif
 #endif /* CONFIG_PARAVIRT_XXL */
 #endif	/* CONFIG_X86_64 */
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 7ccd41628d36..9bcf6bce88f6 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -210,7 +210,7 @@ extern struct paravirt_patch_template pv_ops;
  */
 #define PARAVIRT_CALL					\
 	ANNOTATE_RETPOLINE_SAFE "\n\t"			\
-	"call *%[paravirt_opptr];"
+	"call *%[paravirt_opptr]"
 
 /*
  * These macros are intended to wrap calls through one of the paravirt
-- 
2.52.0


* [PATCH RESEND 2/2]  x86/paravirt: Use XOR r32,r32 to clear register in pv_vcpu_is_preempted()
  2026-01-14 21:18 [PATCH RESEND 1/2] x86/paravirt: Remove trailing semicolons from alternative asm templates Uros Bizjak
@ 2026-01-14 21:18 ` Uros Bizjak
  0 siblings, 0 replies; 5+ messages in thread
From: Uros Bizjak @ 2026-01-14 21:18 UTC (permalink / raw)
  To: bcm-kernel-feedback-list, virtualization, x86, linux-kernel
  Cc: Uros Bizjak, Juergen Gross, H. Peter Anvin, Alexey Makhalov,
	Ajay Kaher, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen

x86_64 zero-extends the result of 32-bit operations, so for 64-bit
operands XOR r32,r32 is functionally equivalent to XOR r64,r64, but
avoids a REX prefix byte when legacy registers are used.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Acked-by: "H. Peter Anvin" <hpa@zytor.com>
Acked-by: Alexey Makhalov <alexey.makhalov@broadcom.com>
Cc: Ajay Kaher <ajay.kaher@broadcom.com>
Cc: Thomas Gleixner <tglx@kernel.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
---
 arch/x86/include/asm/paravirt-spinlock.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/paravirt-spinlock.h b/arch/x86/include/asm/paravirt-spinlock.h
index 458b888aba84..7beffcb08ed6 100644
--- a/arch/x86/include/asm/paravirt-spinlock.h
+++ b/arch/x86/include/asm/paravirt-spinlock.h
@@ -45,7 +45,7 @@ static __always_inline void pv_queued_spin_unlock(struct qspinlock *lock)
 static __always_inline bool pv_vcpu_is_preempted(long cpu)
 {
 	return PVOP_ALT_CALLEE1(bool, pv_ops_lock, vcpu_is_preempted, cpu,
-				"xor %%" _ASM_AX ", %%" _ASM_AX,
+				"xor %%eax, %%eax",
 				ALT_NOT(X86_FEATURE_VCPUPREEMPT));
 }
 
-- 
2.52.0

