* [PATCH v4 0/7] x86: Rid .head.text of all abs references
@ 2024-12-05 11:28 Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 1/7] x86/sev: Avoid WARN()s and panic()s in early boot code Ard Biesheuvel
` (7 more replies)
0 siblings, 8 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2024-12-05 11:28 UTC (permalink / raw)
To: linux-kernel
Cc: x86, Ard Biesheuvel, Tom Lendacky, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, Andy Lutomirski, Arnd Bergmann,
Kees Cook, Brian Gerst, Kevin Loughlin
From: Ard Biesheuvel <ardb@kernel.org>
This series removes the last remaining absolute symbol references from
.head.text. Doing so is necessary because code in this section may be
called from a 1:1 mapping of memory, which deviates from the mapping
this code was linked and/or relocated to run at. This is not something
that the toolchains support: even PIC/PIE code is still assumed to
execute from the same mapping that it was relocated to run from by the
startup code or dynamic loader. This means we are basically on our own
here, and need to add measures to ensure the code works as expected
when executed via the 1:1 mapping.
Given that the startup code needs to create the kernel virtual mapping
in the page tables, early references to some kernel virtual addresses
are valid even if they cannot be dereferenced yet. To avoid having to
make this distinction at build time, patches #2 and #3 replace such
valid references with RIP-relative references with an offset applied.
Patch #1 removes some absolute references from .head.text that don't
need to be there in the first place.
Changes since v3:
- add patch to disable UBSAN in .head.text C code
- rebase onto v6.13-rc1
Changes since v2:
- drop Xen changes, which have been merged in the meantime
- update patch #1 with feedback from Tom
- reorganize the .text section and emit .head.text into a separate
output section for easier diagnostics
- update the 'relocs' tool to reject absolute ELF relocations in
.head.text
Changes since v1/RFC:
- rename va_offset to p2v_offset
- take PA of _text in C code directly
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Kevin Loughlin <kevinloughlin@google.com>
Ard Biesheuvel (7):
x86/sev: Avoid WARN()s and panic()s in early boot code
x86/boot/64: Determine VA/PA offset before entering C code
x86/boot/64: Avoid intentional absolute symbol references in
.head.text
x86/boot: Disable UBSAN in early boot code
x86/kernel: Move ENTRY_TEXT to the start of the image
x86/boot: Move .head.text into its own output section
x86/boot: Reject absolute references in .head.text
arch/x86/coco/sev/core.c | 15 +++-----
arch/x86/coco/sev/shared.c | 16 +++++----
arch/x86/include/asm/init.h | 2 +-
arch/x86/include/asm/setup.h | 2 +-
arch/x86/kernel/head64.c | 38 ++++++++++++--------
arch/x86/kernel/head_64.S | 12 +++++--
arch/x86/kernel/vmlinux.lds.S | 29 ++++++++-------
arch/x86/tools/relocs.c | 8 ++++-
8 files changed, 71 insertions(+), 51 deletions(-)
--
2.47.0.338.g60cca15819-goog
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v4 1/7] x86/sev: Avoid WARN()s and panic()s in early boot code
2024-12-05 11:28 [PATCH v4 0/7] x86: Rid .head.text of all abs references Ard Biesheuvel
@ 2024-12-05 11:28 ` Ard Biesheuvel
2024-12-05 12:28 ` [tip: x86/boot] " tip-bot2 for Ard Biesheuvel
2025-01-06 15:23 ` [PATCH v4 1/7] " Tom Lendacky
2024-12-05 11:28 ` [PATCH v4 2/7] x86/boot/64: Determine VA/PA offset before entering C code Ard Biesheuvel
` (6 subsequent siblings)
7 siblings, 2 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2024-12-05 11:28 UTC (permalink / raw)
To: linux-kernel
Cc: x86, Ard Biesheuvel, Tom Lendacky, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, Andy Lutomirski, Arnd Bergmann,
Kees Cook, Brian Gerst, Kevin Loughlin
From: Ard Biesheuvel <ardb@kernel.org>
Using WARN() or panic() while executing from the early 1:1 mapping is
unlikely to do anything useful: the string literals are passed using
their kernel virtual addresses, which are not even mapped yet. But even
if they were, calling into the printk() machinery from the early 1:1
mapped code is not going to get very far.
So drop the WARN()s entirely, and replace panic() with a deadloop.
Link: https://lore.kernel.org/all/6904c198-9047-14bb-858e-38b531589379@amd.com/T/#u
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/x86/coco/sev/core.c | 15 +++++----------
arch/x86/coco/sev/shared.c | 9 +++++----
2 files changed, 10 insertions(+), 14 deletions(-)
diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
index c5b0148b8c0a..499b41953e3c 100644
--- a/arch/x86/coco/sev/core.c
+++ b/arch/x86/coco/sev/core.c
@@ -777,15 +777,10 @@ early_set_pages_state(unsigned long vaddr, unsigned long paddr,
val = sev_es_rd_ghcb_msr();
- if (WARN(GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP,
- "Wrong PSC response code: 0x%x\n",
- (unsigned int)GHCB_RESP_CODE(val)))
+ if (GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP)
goto e_term;
- if (WARN(GHCB_MSR_PSC_RESP_VAL(val),
- "Failed to change page state to '%s' paddr 0x%lx error 0x%llx\n",
- op == SNP_PAGE_STATE_PRIVATE ? "private" : "shared",
- paddr, GHCB_MSR_PSC_RESP_VAL(val)))
+ if (GHCB_MSR_PSC_RESP_VAL(val))
goto e_term;
/* Page validation must be performed after changing to private */
@@ -821,7 +816,7 @@ void __head early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
early_set_pages_state(vaddr, paddr, npages, SNP_PAGE_STATE_PRIVATE);
}
-void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
+void __head early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
unsigned long npages)
{
/*
@@ -2361,8 +2356,8 @@ static __head void svsm_setup(struct cc_blob_sev_info *cc_info)
call.rax = SVSM_CORE_CALL(SVSM_CORE_REMAP_CA);
call.rcx = pa;
ret = svsm_perform_call_protocol(&call);
- if (ret)
- panic("Can't remap the SVSM CA, ret=%d, rax_out=0x%llx\n", ret, call.rax_out);
+ while (ret)
+ cpu_relax(); /* too early to panic */
RIP_REL_REF(boot_svsm_caa) = (struct svsm_ca *)pa;
RIP_REL_REF(boot_svsm_caa_pa) = pa;
diff --git a/arch/x86/coco/sev/shared.c b/arch/x86/coco/sev/shared.c
index 71de53194089..afb7ffc355fe 100644
--- a/arch/x86/coco/sev/shared.c
+++ b/arch/x86/coco/sev/shared.c
@@ -1243,7 +1243,7 @@ static void svsm_pval_terminate(struct svsm_pvalidate_call *pc, int ret, u64 svs
__pval_terminate(pfn, action, page_size, ret, svsm_ret);
}
-static void svsm_pval_4k_page(unsigned long paddr, bool validate)
+static void __head svsm_pval_4k_page(unsigned long paddr, bool validate)
{
struct svsm_pvalidate_call *pc;
struct svsm_call call = {};
@@ -1275,12 +1275,13 @@ static void svsm_pval_4k_page(unsigned long paddr, bool validate)
ret = svsm_perform_call_protocol(&call);
if (ret)
- svsm_pval_terminate(pc, ret, call.rax_out);
+ sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
native_local_irq_restore(flags);
}
-static void pvalidate_4k_page(unsigned long vaddr, unsigned long paddr, bool validate)
+static void __head pvalidate_4k_page(unsigned long vaddr, unsigned long paddr,
+ bool validate)
{
int ret;
@@ -1293,7 +1294,7 @@ static void pvalidate_4k_page(unsigned long vaddr, unsigned long paddr, bool val
} else {
ret = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
if (ret)
- __pval_terminate(PHYS_PFN(paddr), validate, RMP_PG_SIZE_4K, ret, 0);
+ sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
}
}
--
2.47.0.338.g60cca15819-goog
* [PATCH v4 2/7] x86/boot/64: Determine VA/PA offset before entering C code
2024-12-05 11:28 [PATCH v4 0/7] x86: Rid .head.text of all abs references Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 1/7] x86/sev: Avoid WARN()s and panic()s in early boot code Ard Biesheuvel
@ 2024-12-05 11:28 ` Ard Biesheuvel
2024-12-05 12:28 ` [tip: x86/boot] " tip-bot2 for Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 3/7] x86/boot/64: Avoid intentional absolute symbol references in .head.text Ard Biesheuvel
` (5 subsequent siblings)
7 siblings, 1 reply; 24+ messages in thread
From: Ard Biesheuvel @ 2024-12-05 11:28 UTC (permalink / raw)
To: linux-kernel
Cc: x86, Ard Biesheuvel, Tom Lendacky, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, Andy Lutomirski, Arnd Bergmann,
Kees Cook, Brian Gerst, Kevin Loughlin
From: Ard Biesheuvel <ardb@kernel.org>
Implicit absolute symbol references (e.g., taking the address of a
global variable) must be avoided in the C code that runs from the early
1:1 mapping of the kernel, given that this is a practice that violates
assumptions on the part of the toolchain. I.e., RIP-relative and
absolute references are expected to produce the same values, and so the
compiler is free to choose either. However, the code currently assumes
that RIP-relative references are never emitted here.
So the kernel virtual addresses of _text and _end need to be derived
using an explicit virtual-to-physical offset instead, rather than by
simply taking the addresses and assuming that the compiler will not
choose to use a RIP-relative reference in this particular case.
Currently, phys_base is already used to perform such calculations, but
it is derived from the kernel virtual address of _text, which is taken
using an implicit absolute symbol reference. So instead, derive this
VA-to-PA offset in asm code, and pass it to the C startup code.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/x86/include/asm/setup.h | 2 +-
arch/x86/kernel/head64.c | 8 +++++---
arch/x86/kernel/head_64.S | 12 +++++++++---
3 files changed, 15 insertions(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index 0667b2a88614..85f4fde3515c 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -49,7 +49,7 @@ extern unsigned long saved_video_mode;
extern void reserve_standard_io_resources(void);
extern void i386_reserve_resources(void);
-extern unsigned long __startup_64(unsigned long physaddr, struct boot_params *bp);
+extern unsigned long __startup_64(unsigned long p2v_offset, struct boot_params *bp);
extern void startup_64_setup_gdt_idt(void);
extern void early_setup_idt(void);
extern void __init do_early_exception(struct pt_regs *regs, int trapnr);
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 4b9d4557fc94..a7cd4053eeb3 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -138,12 +138,14 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdv
* doesn't have to generate PC-relative relocations when accessing globals from
* that function. Clang actually does not generate them, which leads to
* boot-time crashes. To work around this problem, every global pointer must
- * be accessed using RIP_REL_REF().
+ * be accessed using RIP_REL_REF(). Kernel virtual addresses can be determined
+ * by subtracting p2v_offset from the RIP-relative address.
*/
-unsigned long __head __startup_64(unsigned long physaddr,
+unsigned long __head __startup_64(unsigned long p2v_offset,
struct boot_params *bp)
{
pmd_t (*early_pgts)[PTRS_PER_PMD] = RIP_REL_REF(early_dynamic_pgts);
+ unsigned long physaddr = (unsigned long)&RIP_REL_REF(_text);
unsigned long pgtable_flags;
unsigned long load_delta;
pgdval_t *pgd;
@@ -163,7 +165,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
* Compute the delta between the address I am compiled to run at
* and the address I am actually running at.
*/
- load_delta = physaddr - (unsigned long)(_text - __START_KERNEL_map);
+ load_delta = __START_KERNEL_map + p2v_offset;
RIP_REL_REF(phys_base) = load_delta;
/* Is the address not 2M aligned? */
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 56163e2124cf..31345e0ba006 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -94,13 +94,19 @@ SYM_CODE_START_NOALIGN(startup_64)
/* Sanitize CPU configuration */
call verify_cpu
+ /*
+ * Derive the kernel's physical-to-virtual offset from the physical and
+ * virtual addresses of common_startup_64().
+ */
+ leaq common_startup_64(%rip), %rdi
+ subq .Lcommon_startup_64(%rip), %rdi
+
/*
* Perform pagetable fixups. Additionally, if SME is active, encrypt
* the kernel and retrieve the modifier (SME encryption mask if SME
* is active) to be added to the initial pgdir entry that will be
* programmed into CR3.
*/
- leaq _text(%rip), %rdi
movq %r15, %rsi
call __startup_64
@@ -128,11 +134,11 @@ SYM_CODE_START_NOALIGN(startup_64)
/* Branch to the common startup code at its kernel virtual address */
ANNOTATE_RETPOLINE_SAFE
- jmp *0f(%rip)
+ jmp *.Lcommon_startup_64(%rip)
SYM_CODE_END(startup_64)
__INITRODATA
-0: .quad common_startup_64
+SYM_DATA_LOCAL(.Lcommon_startup_64, .quad common_startup_64)
.text
SYM_CODE_START(secondary_startup_64)
--
2.47.0.338.g60cca15819-goog
* [PATCH v4 3/7] x86/boot/64: Avoid intentional absolute symbol references in .head.text
2024-12-05 11:28 [PATCH v4 0/7] x86: Rid .head.text of all abs references Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 1/7] x86/sev: Avoid WARN()s and panic()s in early boot code Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 2/7] x86/boot/64: Determine VA/PA offset before entering C code Ard Biesheuvel
@ 2024-12-05 11:28 ` Ard Biesheuvel
2024-12-05 12:28 ` [tip: x86/boot] " tip-bot2 for Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 4/7] x86/boot: Disable UBSAN in early boot code Ard Biesheuvel
` (4 subsequent siblings)
7 siblings, 1 reply; 24+ messages in thread
From: Ard Biesheuvel @ 2024-12-05 11:28 UTC (permalink / raw)
To: linux-kernel
Cc: x86, Ard Biesheuvel, Tom Lendacky, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, Andy Lutomirski, Arnd Bergmann,
Kees Cook, Brian Gerst, Kevin Loughlin
From: Ard Biesheuvel <ardb@kernel.org>
The code in .head.text executes from a 1:1 mapping and cannot generally
refer to global variables using their kernel virtual addresses. However,
there are some occurrences of such references that are valid: the kernel
virtual addresses of _text and _end are needed to populate the page
tables correctly, and some other section markers are used in a similar
way.
To avoid the need for making exceptions to the rule that .head.text must
not contain any absolute symbol references, derive these addresses from
the RIP-relative 1:1 mapped physical addresses, which can be safely
determined using RIP_REL_REF().
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/x86/kernel/head64.c | 30 ++++++++++++--------
1 file changed, 18 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index a7cd4053eeb3..54f9a8faf212 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -91,9 +91,11 @@ static inline bool check_la57_support(void)
return true;
}
-static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdval_t *pmd)
+static unsigned long __head sme_postprocess_startup(struct boot_params *bp,
+ pmdval_t *pmd,
+ unsigned long p2v_offset)
{
- unsigned long vaddr, vaddr_end;
+ unsigned long paddr, paddr_end;
int i;
/* Encrypt the kernel and related (if SME is active) */
@@ -106,10 +108,10 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdv
* attribute.
*/
if (sme_get_me_mask()) {
- vaddr = (unsigned long)__start_bss_decrypted;
- vaddr_end = (unsigned long)__end_bss_decrypted;
+ paddr = (unsigned long)&RIP_REL_REF(__start_bss_decrypted);
+ paddr_end = (unsigned long)&RIP_REL_REF(__end_bss_decrypted);
- for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
+ for (; paddr < paddr_end; paddr += PMD_SIZE) {
/*
* On SNP, transition the page to shared in the RMP table so that
* it is consistent with the page table attribute change.
@@ -118,11 +120,11 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdv
* mapping (kernel .text). PVALIDATE, by way of
* early_snp_set_memory_shared(), requires a valid virtual
* address but the kernel is currently running off of the identity
- * mapping so use __pa() to get a *currently* valid virtual address.
+ * mapping so use the PA to get a *currently* valid virtual address.
*/
- early_snp_set_memory_shared(__pa(vaddr), __pa(vaddr), PTRS_PER_PMD);
+ early_snp_set_memory_shared(paddr, paddr, PTRS_PER_PMD);
- i = pmd_index(vaddr);
+ i = pmd_index(paddr - p2v_offset);
pmd[i] -= sme_get_me_mask();
}
}
@@ -146,6 +148,7 @@ unsigned long __head __startup_64(unsigned long p2v_offset,
{
pmd_t (*early_pgts)[PTRS_PER_PMD] = RIP_REL_REF(early_dynamic_pgts);
unsigned long physaddr = (unsigned long)&RIP_REL_REF(_text);
+ unsigned long va_text, va_end;
unsigned long pgtable_flags;
unsigned long load_delta;
pgdval_t *pgd;
@@ -172,6 +175,9 @@ unsigned long __head __startup_64(unsigned long p2v_offset,
if (load_delta & ~PMD_MASK)
for (;;);
+ va_text = physaddr - p2v_offset;
+ va_end = (unsigned long)&RIP_REL_REF(_end) - p2v_offset;
+
/* Include the SME encryption mask in the fixup value */
load_delta += sme_get_me_mask();
@@ -232,7 +238,7 @@ unsigned long __head __startup_64(unsigned long p2v_offset,
pmd_entry += sme_get_me_mask();
pmd_entry += physaddr;
- for (i = 0; i < DIV_ROUND_UP(_end - _text, PMD_SIZE); i++) {
+ for (i = 0; i < DIV_ROUND_UP(va_end - va_text, PMD_SIZE); i++) {
int idx = i + (physaddr >> PMD_SHIFT);
pmd[idx % PTRS_PER_PMD] = pmd_entry + i * PMD_SIZE;
@@ -257,11 +263,11 @@ unsigned long __head __startup_64(unsigned long p2v_offset,
pmd = &RIP_REL_REF(level2_kernel_pgt)->pmd;
/* invalidate pages before the kernel image */
- for (i = 0; i < pmd_index((unsigned long)_text); i++)
+ for (i = 0; i < pmd_index(va_text); i++)
pmd[i] &= ~_PAGE_PRESENT;
/* fixup pages that are part of the kernel image */
- for (; i <= pmd_index((unsigned long)_end); i++)
+ for (; i <= pmd_index(va_end); i++)
if (pmd[i] & _PAGE_PRESENT)
pmd[i] += load_delta;
@@ -269,7 +275,7 @@ unsigned long __head __startup_64(unsigned long p2v_offset,
for (; i < PTRS_PER_PMD; i++)
pmd[i] &= ~_PAGE_PRESENT;
- return sme_postprocess_startup(bp, pmd);
+ return sme_postprocess_startup(bp, pmd, p2v_offset);
}
/* Wipe all early page tables except for the kernel symbol map */
--
2.47.0.338.g60cca15819-goog
* [PATCH v4 4/7] x86/boot: Disable UBSAN in early boot code
2024-12-05 11:28 [PATCH v4 0/7] x86: Rid .head.text of all abs references Ard Biesheuvel
` (2 preceding siblings ...)
2024-12-05 11:28 ` [PATCH v4 3/7] x86/boot/64: Avoid intentional absolute symbol references in .head.text Ard Biesheuvel
@ 2024-12-05 11:28 ` Ard Biesheuvel
2024-12-05 12:28 ` [tip: x86/boot] " tip-bot2 for Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 5/7] x86/kernel: Move ENTRY_TEXT to the start of the image Ard Biesheuvel
` (3 subsequent siblings)
7 siblings, 1 reply; 24+ messages in thread
From: Ard Biesheuvel @ 2024-12-05 11:28 UTC (permalink / raw)
To: linux-kernel
Cc: x86, Ard Biesheuvel, Tom Lendacky, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, Andy Lutomirski, Arnd Bergmann,
Kees Cook, Brian Gerst, Kevin Loughlin
From: Ard Biesheuvel <ardb@kernel.org>
The early boot code runs from a 1:1 mapping of memory, and may execute
before the kernel virtual mapping is even up. This means absolute symbol
references cannot be permitted in this code.
UBSAN injects references to global data structures into the code, and
without -fPIC, those references are emitted as absolute references to
kernel virtual addresses. Accessing those will fault before the kernel
virtual mapping is up, so UBSAN needs to be disabled in early boot code.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/x86/coco/sev/shared.c | 7 ++++---
arch/x86/include/asm/init.h | 2 +-
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/x86/coco/sev/shared.c b/arch/x86/coco/sev/shared.c
index afb7ffc355fe..96023bd978cc 100644
--- a/arch/x86/coco/sev/shared.c
+++ b/arch/x86/coco/sev/shared.c
@@ -498,7 +498,7 @@ static const struct snp_cpuid_table *snp_cpuid_get_table(void)
*
* Return: XSAVE area size on success, 0 otherwise.
*/
-static u32 snp_cpuid_calc_xsave_size(u64 xfeatures_en, bool compacted)
+static u32 __head snp_cpuid_calc_xsave_size(u64 xfeatures_en, bool compacted)
{
const struct snp_cpuid_table *cpuid_table = snp_cpuid_get_table();
u64 xfeatures_found = 0;
@@ -576,8 +576,9 @@ static void snp_cpuid_hv(struct ghcb *ghcb, struct es_em_ctxt *ctxt, struct cpui
sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_CPUID_HV);
}
-static int snp_cpuid_postprocess(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
- struct cpuid_leaf *leaf)
+static int __head
+snp_cpuid_postprocess(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
+ struct cpuid_leaf *leaf)
{
struct cpuid_leaf leaf_hv = *leaf;
diff --git a/arch/x86/include/asm/init.h b/arch/x86/include/asm/init.h
index 14d72727d7ee..0e82ebc5d1e1 100644
--- a/arch/x86/include/asm/init.h
+++ b/arch/x86/include/asm/init.h
@@ -2,7 +2,7 @@
#ifndef _ASM_X86_INIT_H
#define _ASM_X86_INIT_H
-#define __head __section(".head.text")
+#define __head __section(".head.text") __no_sanitize_undefined
struct x86_mapping_info {
void *(*alloc_pgt_page)(void *); /* allocate buf for page table */
--
2.47.0.338.g60cca15819-goog
* [PATCH v4 5/7] x86/kernel: Move ENTRY_TEXT to the start of the image
2024-12-05 11:28 [PATCH v4 0/7] x86: Rid .head.text of all abs references Ard Biesheuvel
` (3 preceding siblings ...)
2024-12-05 11:28 ` [PATCH v4 4/7] x86/boot: Disable UBSAN in early boot code Ard Biesheuvel
@ 2024-12-05 11:28 ` Ard Biesheuvel
2024-12-05 12:28 ` [tip: x86/boot] " tip-bot2 for Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 6/7] x86/boot: Move .head.text into its own output section Ard Biesheuvel
` (2 subsequent siblings)
7 siblings, 1 reply; 24+ messages in thread
From: Ard Biesheuvel @ 2024-12-05 11:28 UTC (permalink / raw)
To: linux-kernel
Cc: x86, Ard Biesheuvel, Tom Lendacky, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, Andy Lutomirski, Arnd Bergmann,
Kees Cook, Brian Gerst, Kevin Loughlin
From: Ard Biesheuvel <ardb@kernel.org>
Since commit
7734a0f31e99 ("x86/boot: Robustify calling startup_{32,64}() from the decompressor code")
it is no longer necessary for .head.text to appear at the start of the
image. Since ENTRY_TEXT needs to appear PMD-aligned, it is easier to
just place it at the start of the image, rather than line it up with the
end of the .text section. The amount of padding required should be the
same, but this arrangement also permits .head.text to be split off and
emitted separately, which is needed by a subsequent change.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/x86/kernel/vmlinux.lds.S | 26 ++++++++++----------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index fab3ac9a4574..1ce7889cd12b 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -121,19 +121,6 @@ SECTIONS
.text : AT(ADDR(.text) - LOAD_OFFSET) {
_text = .;
_stext = .;
- /* bootstrapping code */
- HEAD_TEXT
- TEXT_TEXT
- SCHED_TEXT
- LOCK_TEXT
- KPROBES_TEXT
- SOFTIRQENTRY_TEXT
-#ifdef CONFIG_MITIGATION_RETPOLINE
- *(.text..__x86.indirect_thunk)
- *(.text..__x86.return_thunk)
-#endif
- STATIC_CALL_TEXT
-
ALIGN_ENTRY_TEXT_BEGIN
*(.text..__x86.rethunk_untrain)
ENTRY_TEXT
@@ -147,6 +134,19 @@ SECTIONS
*(.text..__x86.rethunk_safe)
#endif
ALIGN_ENTRY_TEXT_END
+
+ /* bootstrapping code */
+ HEAD_TEXT
+ TEXT_TEXT
+ SCHED_TEXT
+ LOCK_TEXT
+ KPROBES_TEXT
+ SOFTIRQENTRY_TEXT
+#ifdef CONFIG_MITIGATION_RETPOLINE
+ *(.text..__x86.indirect_thunk)
+ *(.text..__x86.return_thunk)
+#endif
+ STATIC_CALL_TEXT
*(.gnu.warning)
} :text = 0xcccccccc
--
2.47.0.338.g60cca15819-goog
* [PATCH v4 6/7] x86/boot: Move .head.text into its own output section
2024-12-05 11:28 [PATCH v4 0/7] x86: Rid .head.text of all abs references Ard Biesheuvel
` (4 preceding siblings ...)
2024-12-05 11:28 ` [PATCH v4 5/7] x86/kernel: Move ENTRY_TEXT to the start of the image Ard Biesheuvel
@ 2024-12-05 11:28 ` Ard Biesheuvel
2024-12-05 12:28 ` [tip: x86/boot] " tip-bot2 for Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 7/7] x86/boot: Reject absolute references in .head.text Ard Biesheuvel
2024-12-31 10:01 ` [PATCH v4 0/7] x86: Rid .head.text of all abs references Borislav Petkov
7 siblings, 1 reply; 24+ messages in thread
From: Ard Biesheuvel @ 2024-12-05 11:28 UTC (permalink / raw)
To: linux-kernel
Cc: x86, Ard Biesheuvel, Tom Lendacky, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, Andy Lutomirski, Arnd Bergmann,
Kees Cook, Brian Gerst, Kevin Loughlin
From: Ard Biesheuvel <ardb@kernel.org>
In order to be able to double check that vmlinux is emitted without
absolute symbol references in .head.text, it needs to be distinguishable
from the rest of .text in the ELF metadata.
So move .head.text into its own ELF section.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/x86/kernel/vmlinux.lds.S | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 1ce7889cd12b..56cdf13611e3 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -135,8 +135,6 @@ SECTIONS
#endif
ALIGN_ENTRY_TEXT_END
- /* bootstrapping code */
- HEAD_TEXT
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
@@ -151,6 +149,11 @@ SECTIONS
} :text = 0xcccccccc
+ /* bootstrapping code */
+ .head.text : AT(ADDR(.head.text) - LOAD_OFFSET) {
+ HEAD_TEXT
+ } :text = 0xcccccccc
+
/* End of text section, which should occupy whole number of pages */
_etext = .;
. = ALIGN(PAGE_SIZE);
--
2.47.0.338.g60cca15819-goog
* [PATCH v4 7/7] x86/boot: Reject absolute references in .head.text
2024-12-05 11:28 [PATCH v4 0/7] x86: Rid .head.text of all abs references Ard Biesheuvel
` (5 preceding siblings ...)
2024-12-05 11:28 ` [PATCH v4 6/7] x86/boot: Move .head.text into its own output section Ard Biesheuvel
@ 2024-12-05 11:28 ` Ard Biesheuvel
2024-12-05 12:28 ` [tip: x86/boot] " tip-bot2 for Ard Biesheuvel
2024-12-31 10:01 ` [PATCH v4 0/7] x86: Rid .head.text of all abs references Borislav Petkov
7 siblings, 1 reply; 24+ messages in thread
From: Ard Biesheuvel @ 2024-12-05 11:28 UTC (permalink / raw)
To: linux-kernel
Cc: x86, Ard Biesheuvel, Tom Lendacky, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, Andy Lutomirski, Arnd Bergmann,
Kees Cook, Brian Gerst, Kevin Loughlin
From: Ard Biesheuvel <ardb@kernel.org>
The .head.text section used to contain asm code that bootstrapped the
page tables and switched to the kernel virtual address space before
executing C code. The asm code carefully avoided dereferencing absolute
symbol references, as those will fault before the page tables are
installed.
Today, the .head.text section contains lots of C code too, and getting
the compiler to reason about absolute addresses taken from, e.g.,
section markers such as _text[] or _end[], but never to use such
absolute references to access global variables [*], is intractable.
So instead, forbid the use of absolute references in .head.text
entirely, and rely on explicit arithmetic involving VA-to-PA offsets
generated by the asm startup code to construct virtual addresses where
needed (e.g., to construct the page tables).
Note that the 'relocs' tool is only used on the core kernel image when
building a relocatable image, but this is the default, and so adding the
check there is sufficient to catch new occurrences of code that use
absolute references before the kernel mapping is up.
[*] it is feasible when using PIC codegen but there is strong pushback
to using this for all of the core kernel, and using it only for
.head.text is not straightforward.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/x86/tools/relocs.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c
index 27441e5863b2..e937be979ec8 100644
--- a/arch/x86/tools/relocs.c
+++ b/arch/x86/tools/relocs.c
@@ -841,10 +841,10 @@ static int is_percpu_sym(ElfW(Sym) *sym, const char *symname)
static int do_reloc64(struct section *sec, Elf_Rel *rel, ElfW(Sym) *sym,
const char *symname)
{
+ int headtext = !strcmp(sec_name(sec->shdr.sh_info), ".head.text");
unsigned r_type = ELF64_R_TYPE(rel->r_info);
ElfW(Addr) offset = rel->r_offset;
int shn_abs = (sym->st_shndx == SHN_ABS) && !is_reloc(S_REL, symname);
-
if (sym->st_shndx == SHN_UNDEF)
return 0;
@@ -900,6 +900,12 @@ static int do_reloc64(struct section *sec, Elf_Rel *rel, ElfW(Sym) *sym,
break;
}
+ if (headtext) {
+ die("Absolute reference to symbol '%s' not permitted in .head.text\n",
+ symname);
+ break;
+ }
+
/*
* Relocation offsets for 64 bit kernels are output
* as 32 bits and sign extended back to 64 bits when
--
2.47.0.338.g60cca15819-goog
* [tip: x86/boot] x86/boot: Reject absolute references in .head.text
2024-12-05 11:28 ` [PATCH v4 7/7] x86/boot: Reject absolute references in .head.text Ard Biesheuvel
@ 2024-12-05 12:28 ` tip-bot2 for Ard Biesheuvel
0 siblings, 0 replies; 24+ messages in thread
From: tip-bot2 for Ard Biesheuvel @ 2024-12-05 12:28 UTC (permalink / raw)
To: linux-tip-commits
Cc: Ard Biesheuvel, Ingo Molnar, Linus Torvalds, H. Peter Anvin, x86,
linux-kernel
The following commit has been merged into the x86/boot branch of tip:
Commit-ID: faf0ed487415f76fe4acf7980ce360901f5e1698
Gitweb: https://git.kernel.org/tip/faf0ed487415f76fe4acf7980ce360901f5e1698
Author: Ard Biesheuvel <ardb@kernel.org>
AuthorDate: Thu, 05 Dec 2024 12:28:12 +01:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Thu, 05 Dec 2024 13:18:55 +01:00
x86/boot: Reject absolute references in .head.text
The .head.text section used to contain asm code that bootstrapped the
page tables and switched to the kernel virtual address space before
executing C code. The asm code carefully avoided dereferencing absolute
symbol references, as those will fault before the page tables are
installed.
Today, the .head.text section contains lots of C code too, and getting
the compiler to reason about absolute addresses taken from, e.g.,
section markers such as _text[] or _end[], but never to use such
absolute references to access global variables [*], is intractable.
So instead, forbid the use of absolute references in .head.text
entirely, and rely on explicit arithmetic involving VA-to-PA offsets
generated by the asm startup code to construct virtual addresses where
needed (e.g., to construct the page tables).
Note that the 'relocs' tool is only used on the core kernel image when
building a relocatable image, but this is the default, and so adding the
check there is sufficient to catch new occurrences of code that use
absolute references before the kernel mapping is up.
[*] it is feasible when using PIC codegen but there is strong pushback
to using this for all of the core kernel, and using it only for
.head.text is not straightforward.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lore.kernel.org/r/20241205112804.3416920-16-ardb+git@google.com
---
arch/x86/tools/relocs.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c
index 27441e5..e937be9 100644
--- a/arch/x86/tools/relocs.c
+++ b/arch/x86/tools/relocs.c
@@ -841,10 +841,10 @@ static int is_percpu_sym(ElfW(Sym) *sym, const char *symname)
static int do_reloc64(struct section *sec, Elf_Rel *rel, ElfW(Sym) *sym,
const char *symname)
{
+ int headtext = !strcmp(sec_name(sec->shdr.sh_info), ".head.text");
unsigned r_type = ELF64_R_TYPE(rel->r_info);
ElfW(Addr) offset = rel->r_offset;
int shn_abs = (sym->st_shndx == SHN_ABS) && !is_reloc(S_REL, symname);
-
if (sym->st_shndx == SHN_UNDEF)
return 0;
@@ -900,6 +900,12 @@ static int do_reloc64(struct section *sec, Elf_Rel *rel, ElfW(Sym) *sym,
break;
}
+ if (headtext) {
+ die("Absolute reference to symbol '%s' not permitted in .head.text\n",
+ symname);
+ break;
+ }
+
/*
* Relocation offsets for 64 bit kernels are output
* as 32 bits and sign extended back to 64 bits when
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [tip: x86/boot] x86/boot: Move .head.text into its own output section
2024-12-05 11:28 ` [PATCH v4 6/7] x86/boot: Move .head.text into its own output section Ard Biesheuvel
@ 2024-12-05 12:28 ` tip-bot2 for Ard Biesheuvel
0 siblings, 0 replies; 24+ messages in thread
From: tip-bot2 for Ard Biesheuvel @ 2024-12-05 12:28 UTC (permalink / raw)
To: linux-tip-commits
Cc: Ard Biesheuvel, Ingo Molnar, Linus Torvalds, H. Peter Anvin, x86,
linux-kernel
The following commit has been merged into the x86/boot branch of tip:
Commit-ID: a6a4ae9c3f3a8894c54476cc842069f82af8361c
Gitweb: https://git.kernel.org/tip/a6a4ae9c3f3a8894c54476cc842069f82af8361c
Author: Ard Biesheuvel <ardb@kernel.org>
AuthorDate: Thu, 05 Dec 2024 12:28:11 +01:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Thu, 05 Dec 2024 13:18:55 +01:00
x86/boot: Move .head.text into its own output section
In order to be able to double check that vmlinux is emitted without
absolute symbol references in .head.text, it needs to be distinguishable
from the rest of .text in the ELF metadata.
So move .head.text into its own ELF section.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lore.kernel.org/r/20241205112804.3416920-15-ardb+git@google.com
---
arch/x86/kernel/vmlinux.lds.S | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 1ce7889..56cdf13 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -135,8 +135,6 @@ SECTIONS
#endif
ALIGN_ENTRY_TEXT_END
- /* bootstrapping code */
- HEAD_TEXT
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
@@ -151,6 +149,11 @@ SECTIONS
} :text = 0xcccccccc
+ /* bootstrapping code */
+ .head.text : AT(ADDR(.head.text) - LOAD_OFFSET) {
+ HEAD_TEXT
+ } :text = 0xcccccccc
+
/* End of text section, which should occupy whole number of pages */
_etext = .;
. = ALIGN(PAGE_SIZE);
* [tip: x86/boot] x86/kernel: Move ENTRY_TEXT to the start of the image
2024-12-05 11:28 ` [PATCH v4 5/7] x86/kernel: Move ENTRY_TEXT to the start of the image Ard Biesheuvel
@ 2024-12-05 12:28 ` tip-bot2 for Ard Biesheuvel
0 siblings, 0 replies; 24+ messages in thread
From: tip-bot2 for Ard Biesheuvel @ 2024-12-05 12:28 UTC (permalink / raw)
To: linux-tip-commits
Cc: Ard Biesheuvel, Ingo Molnar, Linus Torvalds, H. Peter Anvin, x86,
linux-kernel
The following commit has been merged into the x86/boot branch of tip:
Commit-ID: 35350eb689e68897d996b762832782e2e791eb74
Gitweb: https://git.kernel.org/tip/35350eb689e68897d996b762832782e2e791eb74
Author: Ard Biesheuvel <ardb@kernel.org>
AuthorDate: Thu, 05 Dec 2024 12:28:10 +01:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Thu, 05 Dec 2024 13:18:55 +01:00
x86/kernel: Move ENTRY_TEXT to the start of the image
Since commit:
7734a0f31e99 ("x86/boot: Robustify calling startup_{32,64}() from the decompressor code")
it is no longer necessary for .head.text to appear at the start of the
image. Since ENTRY_TEXT needs to appear PMD-aligned, it is easier to
just place it at the start of the image, rather than line it up with the
end of the .text section. The amount of padding required should be the
same, but this arrangement also permits .head.text to be split off and
emitted separately, which is needed by a subsequent change.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lore.kernel.org/r/20241205112804.3416920-14-ardb+git@google.com
---
arch/x86/kernel/vmlinux.lds.S | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index fab3ac9..1ce7889 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -121,19 +121,6 @@ SECTIONS
.text : AT(ADDR(.text) - LOAD_OFFSET) {
_text = .;
_stext = .;
- /* bootstrapping code */
- HEAD_TEXT
- TEXT_TEXT
- SCHED_TEXT
- LOCK_TEXT
- KPROBES_TEXT
- SOFTIRQENTRY_TEXT
-#ifdef CONFIG_MITIGATION_RETPOLINE
- *(.text..__x86.indirect_thunk)
- *(.text..__x86.return_thunk)
-#endif
- STATIC_CALL_TEXT
-
ALIGN_ENTRY_TEXT_BEGIN
*(.text..__x86.rethunk_untrain)
ENTRY_TEXT
@@ -147,6 +134,19 @@ SECTIONS
*(.text..__x86.rethunk_safe)
#endif
ALIGN_ENTRY_TEXT_END
+
+ /* bootstrapping code */
+ HEAD_TEXT
+ TEXT_TEXT
+ SCHED_TEXT
+ LOCK_TEXT
+ KPROBES_TEXT
+ SOFTIRQENTRY_TEXT
+#ifdef CONFIG_MITIGATION_RETPOLINE
+ *(.text..__x86.indirect_thunk)
+ *(.text..__x86.return_thunk)
+#endif
+ STATIC_CALL_TEXT
*(.gnu.warning)
} :text = 0xcccccccc
* [tip: x86/boot] x86/boot: Disable UBSAN in early boot code
2024-12-05 11:28 ` [PATCH v4 4/7] x86/boot: Disable UBSAN in early boot code Ard Biesheuvel
@ 2024-12-05 12:28 ` tip-bot2 for Ard Biesheuvel
0 siblings, 0 replies; 24+ messages in thread
From: tip-bot2 for Ard Biesheuvel @ 2024-12-05 12:28 UTC (permalink / raw)
To: linux-tip-commits
Cc: Ard Biesheuvel, Ingo Molnar, Linus Torvalds, H. Peter Anvin, x86,
linux-kernel
The following commit has been merged into the x86/boot branch of tip:
Commit-ID: 3b6f99a94b04b389292590840d96342b7dd08941
Gitweb: https://git.kernel.org/tip/3b6f99a94b04b389292590840d96342b7dd08941
Author: Ard Biesheuvel <ardb@kernel.org>
AuthorDate: Thu, 05 Dec 2024 12:28:09 +01:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Thu, 05 Dec 2024 13:18:54 +01:00
x86/boot: Disable UBSAN in early boot code
The early boot code runs from a 1:1 mapping of memory, and may execute
before the kernel virtual mapping is even up. This means absolute symbol
references cannot be permitted in this code.
UBSAN injects references to global data structures into the code, and
without -fPIC, those references are emitted as absolute references to
kernel virtual addresses. Accessing those will fault before the kernel
virtual mapping is up, so UBSAN needs to be disabled in early boot code.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lore.kernel.org/r/20241205112804.3416920-13-ardb+git@google.com
---
arch/x86/coco/sev/shared.c | 7 ++++---
arch/x86/include/asm/init.h | 2 +-
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/x86/coco/sev/shared.c b/arch/x86/coco/sev/shared.c
index afb7ffc..96023bd 100644
--- a/arch/x86/coco/sev/shared.c
+++ b/arch/x86/coco/sev/shared.c
@@ -498,7 +498,7 @@ static const struct snp_cpuid_table *snp_cpuid_get_table(void)
*
* Return: XSAVE area size on success, 0 otherwise.
*/
-static u32 snp_cpuid_calc_xsave_size(u64 xfeatures_en, bool compacted)
+static u32 __head snp_cpuid_calc_xsave_size(u64 xfeatures_en, bool compacted)
{
const struct snp_cpuid_table *cpuid_table = snp_cpuid_get_table();
u64 xfeatures_found = 0;
@@ -576,8 +576,9 @@ static void snp_cpuid_hv(struct ghcb *ghcb, struct es_em_ctxt *ctxt, struct cpui
sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_CPUID_HV);
}
-static int snp_cpuid_postprocess(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
- struct cpuid_leaf *leaf)
+static int __head
+snp_cpuid_postprocess(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
+ struct cpuid_leaf *leaf)
{
struct cpuid_leaf leaf_hv = *leaf;
diff --git a/arch/x86/include/asm/init.h b/arch/x86/include/asm/init.h
index 14d7272..0e82ebc 100644
--- a/arch/x86/include/asm/init.h
+++ b/arch/x86/include/asm/init.h
@@ -2,7 +2,7 @@
#ifndef _ASM_X86_INIT_H
#define _ASM_X86_INIT_H
-#define __head __section(".head.text")
+#define __head __section(".head.text") __no_sanitize_undefined
struct x86_mapping_info {
void *(*alloc_pgt_page)(void *); /* allocate buf for page table */
* [tip: x86/boot] x86/boot/64: Avoid intentional absolute symbol references in .head.text
2024-12-05 11:28 ` [PATCH v4 3/7] x86/boot/64: Avoid intentional absolute symbol references in .head.text Ard Biesheuvel
@ 2024-12-05 12:28 ` tip-bot2 for Ard Biesheuvel
0 siblings, 0 replies; 24+ messages in thread
From: tip-bot2 for Ard Biesheuvel @ 2024-12-05 12:28 UTC (permalink / raw)
To: linux-tip-commits
Cc: Ard Biesheuvel, Ingo Molnar, Linus Torvalds, H. Peter Anvin, x86,
linux-kernel
The following commit has been merged into the x86/boot branch of tip:
Commit-ID: 0d9b9a328cb605419ed046d341dc2a3d66ee0256
Gitweb: https://git.kernel.org/tip/0d9b9a328cb605419ed046d341dc2a3d66ee0256
Author: Ard Biesheuvel <ardb@kernel.org>
AuthorDate: Thu, 05 Dec 2024 12:28:08 +01:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Thu, 05 Dec 2024 13:18:54 +01:00
x86/boot/64: Avoid intentional absolute symbol references in .head.text
The code in .head.text executes from a 1:1 mapping and cannot generally
refer to global variables using their kernel virtual addresses. However,
there are some occurrences of such references that are valid: the kernel
virtual addresses of _text and _end are needed to populate the page
tables correctly, and some other section markers are used in a similar
way.
To avoid the need for making exceptions to the rule that .head.text must
not contain any absolute symbol references, derive these addresses from
the RIP-relative 1:1 mapped physical addresses, which can be safely
determined using RIP_REL_REF().
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lore.kernel.org/r/20241205112804.3416920-12-ardb+git@google.com
---
arch/x86/kernel/head64.c | 30 ++++++++++++++++++------------
1 file changed, 18 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index a7cd405..54f9a8f 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -91,9 +91,11 @@ static inline bool check_la57_support(void)
return true;
}
-static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdval_t *pmd)
+static unsigned long __head sme_postprocess_startup(struct boot_params *bp,
+ pmdval_t *pmd,
+ unsigned long p2v_offset)
{
- unsigned long vaddr, vaddr_end;
+ unsigned long paddr, paddr_end;
int i;
/* Encrypt the kernel and related (if SME is active) */
@@ -106,10 +108,10 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdv
* attribute.
*/
if (sme_get_me_mask()) {
- vaddr = (unsigned long)__start_bss_decrypted;
- vaddr_end = (unsigned long)__end_bss_decrypted;
+ paddr = (unsigned long)&RIP_REL_REF(__start_bss_decrypted);
+ paddr_end = (unsigned long)&RIP_REL_REF(__end_bss_decrypted);
- for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
+ for (; paddr < paddr_end; paddr += PMD_SIZE) {
/*
* On SNP, transition the page to shared in the RMP table so that
* it is consistent with the page table attribute change.
@@ -118,11 +120,11 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdv
* mapping (kernel .text). PVALIDATE, by way of
* early_snp_set_memory_shared(), requires a valid virtual
* address but the kernel is currently running off of the identity
- * mapping so use __pa() to get a *currently* valid virtual address.
+ * mapping so use the PA to get a *currently* valid virtual address.
*/
- early_snp_set_memory_shared(__pa(vaddr), __pa(vaddr), PTRS_PER_PMD);
+ early_snp_set_memory_shared(paddr, paddr, PTRS_PER_PMD);
- i = pmd_index(vaddr);
+ i = pmd_index(paddr - p2v_offset);
pmd[i] -= sme_get_me_mask();
}
}
@@ -146,6 +148,7 @@ unsigned long __head __startup_64(unsigned long p2v_offset,
{
pmd_t (*early_pgts)[PTRS_PER_PMD] = RIP_REL_REF(early_dynamic_pgts);
unsigned long physaddr = (unsigned long)&RIP_REL_REF(_text);
+ unsigned long va_text, va_end;
unsigned long pgtable_flags;
unsigned long load_delta;
pgdval_t *pgd;
@@ -172,6 +175,9 @@ unsigned long __head __startup_64(unsigned long p2v_offset,
if (load_delta & ~PMD_MASK)
for (;;);
+ va_text = physaddr - p2v_offset;
+ va_end = (unsigned long)&RIP_REL_REF(_end) - p2v_offset;
+
/* Include the SME encryption mask in the fixup value */
load_delta += sme_get_me_mask();
@@ -232,7 +238,7 @@ unsigned long __head __startup_64(unsigned long p2v_offset,
pmd_entry += sme_get_me_mask();
pmd_entry += physaddr;
- for (i = 0; i < DIV_ROUND_UP(_end - _text, PMD_SIZE); i++) {
+ for (i = 0; i < DIV_ROUND_UP(va_end - va_text, PMD_SIZE); i++) {
int idx = i + (physaddr >> PMD_SHIFT);
pmd[idx % PTRS_PER_PMD] = pmd_entry + i * PMD_SIZE;
@@ -257,11 +263,11 @@ unsigned long __head __startup_64(unsigned long p2v_offset,
pmd = &RIP_REL_REF(level2_kernel_pgt)->pmd;
/* invalidate pages before the kernel image */
- for (i = 0; i < pmd_index((unsigned long)_text); i++)
+ for (i = 0; i < pmd_index(va_text); i++)
pmd[i] &= ~_PAGE_PRESENT;
/* fixup pages that are part of the kernel image */
- for (; i <= pmd_index((unsigned long)_end); i++)
+ for (; i <= pmd_index(va_end); i++)
if (pmd[i] & _PAGE_PRESENT)
pmd[i] += load_delta;
@@ -269,7 +275,7 @@ unsigned long __head __startup_64(unsigned long p2v_offset,
for (; i < PTRS_PER_PMD; i++)
pmd[i] &= ~_PAGE_PRESENT;
- return sme_postprocess_startup(bp, pmd);
+ return sme_postprocess_startup(bp, pmd, p2v_offset);
}
/* Wipe all early page tables except for the kernel symbol map */
* [tip: x86/boot] x86/boot/64: Determine VA/PA offset before entering C code
2024-12-05 11:28 ` [PATCH v4 2/7] x86/boot/64: Determine VA/PA offset before entering C code Ard Biesheuvel
@ 2024-12-05 12:28 ` tip-bot2 for Ard Biesheuvel
0 siblings, 0 replies; 24+ messages in thread
From: tip-bot2 for Ard Biesheuvel @ 2024-12-05 12:28 UTC (permalink / raw)
To: linux-tip-commits
Cc: Ard Biesheuvel, Ingo Molnar, Linus Torvalds, H. Peter Anvin, x86,
linux-kernel
The following commit has been merged into the x86/boot branch of tip:
Commit-ID: 093562198e1a6360672954293753f4c6cb9a3316
Gitweb: https://git.kernel.org/tip/093562198e1a6360672954293753f4c6cb9a3316
Author: Ard Biesheuvel <ardb@kernel.org>
AuthorDate: Thu, 05 Dec 2024 12:28:07 +01:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Thu, 05 Dec 2024 13:18:54 +01:00
x86/boot/64: Determine VA/PA offset before entering C code
Implicit absolute symbol references (e.g., taking the address of a
global variable) must be avoided in the C code that runs from the early
1:1 mapping of the kernel, given that this is a practice that violates
assumptions on the part of the toolchain. I.e., RIP-relative and
absolute references are expected to produce the same values, and so the
compiler is free to choose either. However, the code currently assumes
that RIP-relative references are never emitted here.
So an explicit virtual-to-physical offset needs to be used to derive the
kernel virtual addresses of _text and _end, rather than simply taking
their addresses and assuming that the compiler will not choose to use a
RIP-relative reference in this particular case.
Currently, phys_base is already used to perform such calculations, but
it is derived from the kernel virtual address of _text, which is taken
using an implicit absolute symbol reference. So instead, derive this
VA-to-PA offset in asm code, and pass it to the C startup code.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lore.kernel.org/r/20241205112804.3416920-11-ardb+git@google.com
---
arch/x86/include/asm/setup.h | 2 +-
arch/x86/kernel/head64.c | 8 +++++---
arch/x86/kernel/head_64.S | 12 +++++++++---
3 files changed, 15 insertions(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index 0667b2a..85f4fde 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -49,7 +49,7 @@ extern unsigned long saved_video_mode;
extern void reserve_standard_io_resources(void);
extern void i386_reserve_resources(void);
-extern unsigned long __startup_64(unsigned long physaddr, struct boot_params *bp);
+extern unsigned long __startup_64(unsigned long p2v_offset, struct boot_params *bp);
extern void startup_64_setup_gdt_idt(void);
extern void early_setup_idt(void);
extern void __init do_early_exception(struct pt_regs *regs, int trapnr);
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 4b9d455..a7cd405 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -138,12 +138,14 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdv
* doesn't have to generate PC-relative relocations when accessing globals from
* that function. Clang actually does not generate them, which leads to
* boot-time crashes. To work around this problem, every global pointer must
- * be accessed using RIP_REL_REF().
+ * be accessed using RIP_REL_REF(). Kernel virtual addresses can be determined
+ * by subtracting p2v_offset from the RIP-relative address.
*/
-unsigned long __head __startup_64(unsigned long physaddr,
+unsigned long __head __startup_64(unsigned long p2v_offset,
struct boot_params *bp)
{
pmd_t (*early_pgts)[PTRS_PER_PMD] = RIP_REL_REF(early_dynamic_pgts);
+ unsigned long physaddr = (unsigned long)&RIP_REL_REF(_text);
unsigned long pgtable_flags;
unsigned long load_delta;
pgdval_t *pgd;
@@ -163,7 +165,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
* Compute the delta between the address I am compiled to run at
* and the address I am actually running at.
*/
- load_delta = physaddr - (unsigned long)(_text - __START_KERNEL_map);
+ load_delta = __START_KERNEL_map + p2v_offset;
RIP_REL_REF(phys_base) = load_delta;
/* Is the address not 2M aligned? */
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 56163e2..31345e0 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -95,12 +95,18 @@ SYM_CODE_START_NOALIGN(startup_64)
call verify_cpu
/*
+ * Derive the kernel's physical-to-virtual offset from the physical and
+ * virtual addresses of common_startup_64().
+ */
+ leaq common_startup_64(%rip), %rdi
+ subq .Lcommon_startup_64(%rip), %rdi
+
+ /*
* Perform pagetable fixups. Additionally, if SME is active, encrypt
* the kernel and retrieve the modifier (SME encryption mask if SME
* is active) to be added to the initial pgdir entry that will be
* programmed into CR3.
*/
- leaq _text(%rip), %rdi
movq %r15, %rsi
call __startup_64
@@ -128,11 +134,11 @@ SYM_CODE_START_NOALIGN(startup_64)
/* Branch to the common startup code at its kernel virtual address */
ANNOTATE_RETPOLINE_SAFE
- jmp *0f(%rip)
+ jmp *.Lcommon_startup_64(%rip)
SYM_CODE_END(startup_64)
__INITRODATA
-0: .quad common_startup_64
+SYM_DATA_LOCAL(.Lcommon_startup_64, .quad common_startup_64)
.text
SYM_CODE_START(secondary_startup_64)
* [tip: x86/boot] x86/sev: Avoid WARN()s and panic()s in early boot code
2024-12-05 11:28 ` [PATCH v4 1/7] x86/sev: Avoid WARN()s and panic()s in early boot code Ard Biesheuvel
@ 2024-12-05 12:28 ` tip-bot2 for Ard Biesheuvel
2025-01-06 15:23 ` [PATCH v4 1/7] " Tom Lendacky
1 sibling, 0 replies; 24+ messages in thread
From: tip-bot2 for Ard Biesheuvel @ 2024-12-05 12:28 UTC (permalink / raw)
To: linux-tip-commits
Cc: Ard Biesheuvel, Ingo Molnar, Linus Torvalds, H. Peter Anvin, x86,
linux-kernel
The following commit has been merged into the x86/boot branch of tip:
Commit-ID: 09d35045cd0f4265cf1dfe18ef83285fdc294688
Gitweb: https://git.kernel.org/tip/09d35045cd0f4265cf1dfe18ef83285fdc294688
Author: Ard Biesheuvel <ardb@kernel.org>
AuthorDate: Thu, 05 Dec 2024 12:28:06 +01:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Thu, 05 Dec 2024 13:18:54 +01:00
x86/sev: Avoid WARN()s and panic()s in early boot code
Using WARN() or panic() while executing from the early 1:1 mapping is
unlikely to do anything useful: the string literals are passed using
their kernel virtual addresses which are not even mapped yet. But even
if they were, calling into the printk() machinery from the early 1:1
mapped code is not going to get very far.
So drop the WARN()s entirely, and replace panic() with a deadloop.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lore.kernel.org/r/20241205112804.3416920-10-ardb+git@google.com
---
arch/x86/coco/sev/core.c | 15 +++++----------
arch/x86/coco/sev/shared.c | 9 +++++----
2 files changed, 10 insertions(+), 14 deletions(-)
diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
index c5b0148..499b419 100644
--- a/arch/x86/coco/sev/core.c
+++ b/arch/x86/coco/sev/core.c
@@ -777,15 +777,10 @@ early_set_pages_state(unsigned long vaddr, unsigned long paddr,
val = sev_es_rd_ghcb_msr();
- if (WARN(GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP,
- "Wrong PSC response code: 0x%x\n",
- (unsigned int)GHCB_RESP_CODE(val)))
+ if (GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP)
goto e_term;
- if (WARN(GHCB_MSR_PSC_RESP_VAL(val),
- "Failed to change page state to '%s' paddr 0x%lx error 0x%llx\n",
- op == SNP_PAGE_STATE_PRIVATE ? "private" : "shared",
- paddr, GHCB_MSR_PSC_RESP_VAL(val)))
+ if (GHCB_MSR_PSC_RESP_VAL(val))
goto e_term;
/* Page validation must be performed after changing to private */
@@ -821,7 +816,7 @@ void __head early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
early_set_pages_state(vaddr, paddr, npages, SNP_PAGE_STATE_PRIVATE);
}
-void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
+void __head early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
unsigned long npages)
{
/*
@@ -2361,8 +2356,8 @@ static __head void svsm_setup(struct cc_blob_sev_info *cc_info)
call.rax = SVSM_CORE_CALL(SVSM_CORE_REMAP_CA);
call.rcx = pa;
ret = svsm_perform_call_protocol(&call);
- if (ret)
- panic("Can't remap the SVSM CA, ret=%d, rax_out=0x%llx\n", ret, call.rax_out);
+ while (ret)
+ cpu_relax(); /* too early to panic */
RIP_REL_REF(boot_svsm_caa) = (struct svsm_ca *)pa;
RIP_REL_REF(boot_svsm_caa_pa) = pa;
diff --git a/arch/x86/coco/sev/shared.c b/arch/x86/coco/sev/shared.c
index 71de531..afb7ffc 100644
--- a/arch/x86/coco/sev/shared.c
+++ b/arch/x86/coco/sev/shared.c
@@ -1243,7 +1243,7 @@ static void svsm_pval_terminate(struct svsm_pvalidate_call *pc, int ret, u64 svs
__pval_terminate(pfn, action, page_size, ret, svsm_ret);
}
-static void svsm_pval_4k_page(unsigned long paddr, bool validate)
+static void __head svsm_pval_4k_page(unsigned long paddr, bool validate)
{
struct svsm_pvalidate_call *pc;
struct svsm_call call = {};
@@ -1275,12 +1275,13 @@ static void svsm_pval_4k_page(unsigned long paddr, bool validate)
ret = svsm_perform_call_protocol(&call);
if (ret)
- svsm_pval_terminate(pc, ret, call.rax_out);
+ sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
native_local_irq_restore(flags);
}
-static void pvalidate_4k_page(unsigned long vaddr, unsigned long paddr, bool validate)
+static void __head pvalidate_4k_page(unsigned long vaddr, unsigned long paddr,
+ bool validate)
{
int ret;
@@ -1293,7 +1294,7 @@ static void pvalidate_4k_page(unsigned long vaddr, unsigned long paddr, bool val
} else {
ret = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
if (ret)
- __pval_terminate(PHYS_PFN(paddr), validate, RMP_PG_SIZE_4K, ret, 0);
+ sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
}
}
* Re: [PATCH v4 0/7] x86: Rid .head.text of all abs references
2024-12-05 11:28 [PATCH v4 0/7] x86: Rid .head.text of all abs references Ard Biesheuvel
` (6 preceding siblings ...)
2024-12-05 11:28 ` [PATCH v4 7/7] x86/boot: Reject absolute references in .head.text Ard Biesheuvel
@ 2024-12-31 10:01 ` Borislav Petkov
2024-12-31 10:12 ` Ard Biesheuvel
7 siblings, 1 reply; 24+ messages in thread
From: Borislav Petkov @ 2024-12-31 10:01 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: linux-kernel, x86, Ard Biesheuvel, Tom Lendacky, Thomas Gleixner,
Ingo Molnar, Dave Hansen, Andy Lutomirski, Arnd Bergmann,
Kees Cook, Brian Gerst, Kevin Loughlin, linux-toolchains
+ linux-toolchains.
Hi Ard,
On Thu, Dec 05, 2024 at 12:28:05PM +0100, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
>
> This series removes the last remaining absolute symbol references from
> .head.text. Doing so is necessary because code in this section may be
> called from a 1:1 mapping of memory, which deviates from the mapping
> this code was linked and/or relocated to run at. This is not something
> that the toolchains support: even PIC/PIE code is still assumed to
> execute from the same mapping that it was relocated to run from by the
> startup code or dynamic loader. This means we are basically on our own
> here, and need to add measures to ensure the code works as expected in
> this manner.
>
> Given that the startup code needs to create the kernel virtual mapping
> in the page tables, early references to some kernel virtual addresses
> are valid even if they cannot be dereferenced yet. To avoid having to
> make this distinction at build time, patches #2 and #3 replace such
> valid references with RIP-relative references with an offset applied.
>
> Patch #1 removes some absolute references from .head.text that don't
> need to be there in the first place.
dunno if you've seen this already and maybe it is not related but the error
message said ".head.text"...
Absolute reference to symbol '.data' not permitted in .head.text
make[3]: *** [arch/x86/Makefile.postlink:32: vmlinux] Error 1
make[2]: *** [scripts/Makefile.vmlinux:77: vmlinux] Error 2
make[2]: *** Deleting file 'vmlinux'
make[1]: *** [/home/amd/bpetkov/kernel/linux/Makefile:1225: vmlinux] Error 2
make[1]: *** Waiting for unfinished jobs....
make: *** [Makefile:251: __sub-make] Error 2
That's an allmodconfig with
Ubuntu clang version 14.0.0-1ubuntu1.1
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
* Re: [PATCH v4 0/7] x86: Rid .head.text of all abs references
2024-12-31 10:01 ` [PATCH v4 0/7] x86: Rid .head.text of all abs references Borislav Petkov
@ 2024-12-31 10:12 ` Ard Biesheuvel
2024-12-31 10:35 ` Borislav Petkov
0 siblings, 1 reply; 24+ messages in thread
From: Ard Biesheuvel @ 2024-12-31 10:12 UTC (permalink / raw)
To: Borislav Petkov
Cc: Ard Biesheuvel, linux-kernel, x86, Tom Lendacky, Thomas Gleixner,
Ingo Molnar, Dave Hansen, Andy Lutomirski, Arnd Bergmann,
Kees Cook, Brian Gerst, Kevin Loughlin, linux-toolchains
On Tue, 31 Dec 2024 at 11:02, Borislav Petkov <bp@alien8.de> wrote:
>
> + linux-toolchains.
>
> Hi Ard,
>
> On Thu, Dec 05, 2024 at 12:28:05PM +0100, Ard Biesheuvel wrote:
> > From: Ard Biesheuvel <ardb@kernel.org>
> >
> > This series removes the last remaining absolute symbol references from
> > .head.text. Doing so is necessary because code in this section may be
> > called from a 1:1 mapping of memory, which deviates from the mapping
> > this code was linked and/or relocated to run at. This is not something
> > that the toolchains support: even PIC/PIE code is still assumed to
> > execute from the same mapping that it was relocated to run from by the
> > startup code or dynamic loader. This means we are basically on our own
> > here, and need to add measures to ensure the code works as expected in
> > this manner.
> >
> > Given that the startup code needs to create the kernel virtual mapping
> > in the page tables, early references to some kernel virtual addresses
> > are valid even if they cannot be dereferenced yet. To avoid having to
> > make this distinction at build time, patches #2 and #3 replace such
> > valid references with RIP-relative references with an offset applied.
> >
> > Patch #1 removes some absolute references from .head.text that don't
> > need to be there in the first place.
>
> dunno if you've seen this already and maybe it is not related but the error
> message said ".head.text"...
>
> Absolute reference to symbol '.data' not permitted in .head.text
> make[3]: *** [arch/x86/Makefile.postlink:32: vmlinux] Error 1
> make[2]: *** [scripts/Makefile.vmlinux:77: vmlinux] Error 2
> make[2]: *** Deleting file 'vmlinux'
> make[1]: *** [/home/amd/bpetkov/kernel/linux/Makefile:1225: vmlinux] Error 2
> make[1]: *** Waiting for unfinished jobs....
> make: *** [Makefile:251: __sub-make] Error 2
>
> That's an allmodconfig with
>
> Ubuntu clang version 14.0.0-1ubuntu1.1
> Target: x86_64-pc-linux-gnu
> Thread model: posix
> InstalledDir: /usr/bin
>
This is definitely related, and likely means the new code is working
as expected, and flagging an absolute reference emitted by, e.g., one
of the sanitizers that will blow up if it ever gets dereferenced.
I'll look into this asap, i.e., in a couple of days.
* Re: [PATCH v4 0/7] x86: Rid .head.text of all abs references
2024-12-31 10:12 ` Ard Biesheuvel
@ 2024-12-31 10:35 ` Borislav Petkov
2024-12-31 19:29 ` Ard Biesheuvel
0 siblings, 1 reply; 24+ messages in thread
From: Borislav Petkov @ 2024-12-31 10:35 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Ard Biesheuvel, linux-kernel, x86, Tom Lendacky, Thomas Gleixner,
Ingo Molnar, Dave Hansen, Andy Lutomirski, Arnd Bergmann,
Kees Cook, Brian Gerst, Kevin Loughlin, linux-toolchains
On Tue, Dec 31, 2024 at 11:12:55AM +0100, Ard Biesheuvel wrote:
> I'll look into this asap, i.e., in a couple of days.
:-P
Thanks!
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
* Re: [PATCH v4 0/7] x86: Rid .head.text of all abs references
2024-12-31 10:35 ` Borislav Petkov
@ 2024-12-31 19:29 ` Ard Biesheuvel
2025-01-01 2:43 ` Nathan Chancellor
0 siblings, 1 reply; 24+ messages in thread
From: Ard Biesheuvel @ 2024-12-31 19:29 UTC (permalink / raw)
To: Borislav Petkov, Nathan Chancellor, clang-built-linux
Cc: Ard Biesheuvel, linux-kernel, x86, Tom Lendacky, Thomas Gleixner,
Ingo Molnar, Dave Hansen, Andy Lutomirski, Arnd Bergmann,
Kees Cook, Brian Gerst, Kevin Loughlin, linux-toolchains
(cc Nathan)
On Tue, 31 Dec 2024 at 11:35, Borislav Petkov <bp@alien8.de> wrote:
>
> On Tue, Dec 31, 2024 at 11:12:55AM +0100, Ard Biesheuvel wrote:
> > I'll look into this asap, i.e., in a couple of days.
>
> :-P
>
> Thanks!
>
I had a quick look, and managed to reproduce it with Clang 14 but not
with Clang 18.
It looks like UBSAN is emitting some instrumentation here, in spite of
the __no_sanitize_undefined annotation (via __head) on
pvalidate_4k_page():
arch/x86/coco/sev/core.o:
0000000000000a00 <pvalidate_4k_page>:
...
b72: 40 88 de mov %bl,%sil
b75: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
b78: R_X86_64_32S .data+0xb0
b7c: e8 00 00 00 00 callq b81 <pvalidate_4k_page+0x181>
b7d: R_X86_64_PLT32 __ubsan_handle_load_invalid_value-0x4
So as far as this series is concerned, things are working correctly,
and an absolute reference to .data is being flagged in code that may
execute before the absolute address in question is even mapped.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v4 0/7] x86: Rid .head.text of all abs references
2024-12-31 19:29 ` Ard Biesheuvel
@ 2025-01-01 2:43 ` Nathan Chancellor
2025-01-01 8:01 ` Ard Biesheuvel
0 siblings, 1 reply; 24+ messages in thread
From: Nathan Chancellor @ 2025-01-01 2:43 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Borislav Petkov, clang-built-linux, Ard Biesheuvel, linux-kernel,
x86, Tom Lendacky, Thomas Gleixner, Ingo Molnar, Dave Hansen,
Andy Lutomirski, Arnd Bergmann, Kees Cook, Brian Gerst,
Kevin Loughlin, linux-toolchains
Hi Ard,
On Tue, Dec 31, 2024 at 08:29:17PM +0100, Ard Biesheuvel wrote:
> (cc Nathan)
Thanks for the CC.
> On Tue, 31 Dec 2024 at 11:35, Borislav Petkov <bp@alien8.de> wrote:
> >
> > On Tue, Dec 31, 2024 at 11:12:55AM +0100, Ard Biesheuvel wrote:
> > > I'll look into this asap, i.e., in a couple of days.
> >
> > :-P
> >
> > Thanks!
> >
>
> I had a quick look, and managed to reproduce it with Clang 14 but not
> with Clang 18.
>
> It looks like UBSAN is emitting some instrumentation here, in spite of
> the __no_sanitize_undefined annotation (via __head) on
> pvalidate_4k_page():
>
> arch/x86/coco/sev/core.o:
>
> 0000000000000a00 <pvalidate_4k_page>:
> ...
> b72: 40 88 de mov %bl,%sil
> b75: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
> b78: R_X86_64_32S .data+0xb0
> b7c: e8 00 00 00 00 callq b81 <pvalidate_4k_page+0x181>
> b7d: R_X86_64_PLT32 __ubsan_handle_load_invalid_value-0x4
>
> So as far as this series is concerned, things are working correctly,
> and an absolute reference to .data is being flagged in code that may
> execute before the absolute address in question is even mapped.
It appears that this is related to UBSAN_BOOL. This is reproducible with
just:
$ echo 'CONFIG_AMD_MEM_ENCRYPT=y
CONFIG_UBSAN=y
CONFIG_UBSAN_BOOL=y
# CONFIG_UBSAN_ALIGNMENT is not set
# CONFIG_UBSAN_BOUNDS is not set
# CONFIG_UBSAN_DIV_ZERO is not set
# CONFIG_UBSAN_ENUM is not set
# CONFIG_UBSAN_SIGNED_WRAP is not set
# CONFIG_UBSAN_SHIFT is not set
# CONFIG_UBSAN_TRAP is not set
# CONFIG_UBSAN_UNREACHABLE is not set' >kernel/configs/repro.config
$ make -skj"$(nproc)" ARCH=x86_64 LLVM=1 mrproper defconfig repro.config vmlinux
Absolute reference to symbol '.data' not permitted in .head.text
make[5]: *** [arch/x86/Makefile.postlink:32: vmlinux] Error 1
...
Given that this appears in LLVM 14 but not LLVM 15 and newer, I reverse
bisected the fix in LLVM to [1], which was actually a fix from a report
from Linus [2]. That seems like a reasonable change to blame, as UBSAN
is generating this check from the asm() in pvalidate() and after the
LLVM fix, that check is no longer generated.
It does seem fishy that __no_sanitize_undefined does not prevent the
generation of that check... Plugging Linus's original reproducer from
[2] into Compiler Explorer [3], it seems like __no_sanitize_undefined
does get respected. It is my understanding that inlining functions that
do not have attributes that disable instrumentation into ones that do is
supposed to remove the instrumentation, correct? It seems like
pvalidate() does get inlined into pvalidate_4k_page() but the
instrumentation remains. Explicitly adding __no_sanitize_undefined to
pvalidate() hides this for me.
[1]: https://github.com/llvm/llvm-project/commit/92c1bc61586c9d6c7bf0c36b1005fe00b4f48cc0
[2]: https://github.com/llvm/llvm-project/issues/56568
[3]: https://godbolt.org/z/cxhW5orxr
Cheers,
Nathan
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 91f08af31078..7887bac1fbab 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -414,7 +414,7 @@ static inline int rmpadjust(unsigned long vaddr, bool rmp_psize, unsigned long a
return rc;
}
-static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
+static inline __no_sanitize_undefined int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
{
bool no_rmpupdate;
int rc;
* Re: [PATCH v4 0/7] x86: Rid .head.text of all abs references
2025-01-01 2:43 ` Nathan Chancellor
@ 2025-01-01 8:01 ` Ard Biesheuvel
2025-01-01 10:39 ` Borislav Petkov
0 siblings, 1 reply; 24+ messages in thread
From: Ard Biesheuvel @ 2025-01-01 8:01 UTC (permalink / raw)
To: Nathan Chancellor
Cc: Borislav Petkov, clang-built-linux, Ard Biesheuvel, linux-kernel,
x86, Tom Lendacky, Thomas Gleixner, Ingo Molnar, Dave Hansen,
Andy Lutomirski, Arnd Bergmann, Kees Cook, Brian Gerst,
Kevin Loughlin, linux-toolchains
On Wed, 1 Jan 2025 at 03:43, Nathan Chancellor <nathan@kernel.org> wrote:
>
> Hi Ard,
>
> On Tue, Dec 31, 2024 at 08:29:17PM +0100, Ard Biesheuvel wrote:
> > (cc Nathan)
>
> Thanks for the CC.
>
> > On Tue, 31 Dec 2024 at 11:35, Borislav Petkov <bp@alien8.de> wrote:
> > >
> > > On Tue, Dec 31, 2024 at 11:12:55AM +0100, Ard Biesheuvel wrote:
> > > > I'll look into this asap, i.e., in a couple of days.
> > >
> > > :-P
> > >
> > > Thanks!
> > >
> >
> > I had a quick look, and managed to reproduce it with Clang 14 but not
> > with Clang 18.
> >
> > It looks like UBSAN is emitting some instrumentation here, in spite of
> > the __no_sanitize_undefined annotation (via __head) on
> > pvalidate_4k_page():
> >
> > arch/x86/coco/sev/core.o:
> >
> > 0000000000000a00 <pvalidate_4k_page>:
> > ...
> > b72: 40 88 de mov %bl,%sil
> > b75: 48 c7 c7 00 00 00 00 mov $0x0,%rdi
> > b78: R_X86_64_32S .data+0xb0
> > b7c: e8 00 00 00 00 callq b81 <pvalidate_4k_page+0x181>
> > b7d: R_X86_64_PLT32 __ubsan_handle_load_invalid_value-0x4
> >
> > So as far as this series is concerned, things are working correctly,
> > and an absolute reference to .data is being flagged in code that may
> > execute before the absolute address in question is even mapped.
>
> It appears that this is related to UBSAN_BOOL. This is reproducible with
> just:
>
> $ echo 'CONFIG_AMD_MEM_ENCRYPT=y
> CONFIG_UBSAN=y
> CONFIG_UBSAN_BOOL=y
> # CONFIG_UBSAN_ALIGNMENT is not set
> # CONFIG_UBSAN_BOUNDS is not set
> # CONFIG_UBSAN_DIV_ZERO is not set
> # CONFIG_UBSAN_ENUM is not set
> # CONFIG_UBSAN_SIGNED_WRAP is not set
> # CONFIG_UBSAN_SHIFT is not set
> # CONFIG_UBSAN_TRAP is not set
> # CONFIG_UBSAN_UNREACHABLE is not set' >kernel/configs/repro.config
>
> $ make -skj"$(nproc)" ARCH=x86_64 LLVM=1 mrproper defconfig repro.config vmlinux
> Absolute reference to symbol '.data' not permitted in .head.text
> make[5]: *** [arch/x86/Makefile.postlink:32: vmlinux] Error 1
> ...
>
> Given that this appears in LLVM 14 but not LLVM 15 and newer, I reverse
> bisected the fix in LLVM to [1], which was actually a fix from a report
> from Linus [2]. That seems like a reasonable change to blame, as UBSAN
> is generating this check from the asm() in pvalidate() and after the
> LLVM fix, that check is no longer generated.
>
> It does seem fishy that __no_sanitize_undefined does not prevent the
> generation of that check... Plugging Linus's original reproducer from
> [2] into Compiler Explorer [3], it seems like __no_sanitize_undefined
> does get respected. It is my understanding that inlining functions that
> do not have attributes that disable instrumentation into ones that do is
> supposed to remove the instrumentation, correct? It seems like
> pvalidate() does get inlined into pvalidate_4k_page() but the
> instrumentation remains. Explicitly adding __no_sanitize_undefined to
> pvalidate() hides this for me.
>
Thanks for the analysis.
Should we perhaps just add the below? All the other sanitizers are
already disabled for core.o, which is the only object file being built
in this sub-directory.
--- a/arch/x86/coco/sev/Makefile
+++ b/arch/x86/coco/sev/Makefile
@@ -13,3 +13,5 @@ KCOV_INSTRUMENT_core.o := n
# With some compiler versions the generated code results in boot hangs, caused
# by several compilation units. To be safe, disable all instrumentation.
KCSAN_SANITIZE := n
+
+UBSAN_SANITIZE := n
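As an aside, kbuild also has a per-object form, in case instrumentation should ever need to stay enabled for other objects in the directory; a sketch of that narrower alternative:

```make
# Hypothetical narrower alternative: disable UBSAN for core.o only,
# using kbuild's per-object override instead of the directory-wide one.
UBSAN_SANITIZE_core.o := n
```

Since core.o is currently the only object built here, the directory-wide override above is equivalent and simpler.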
* Re: [PATCH v4 0/7] x86: Rid .head.text of all abs references
2025-01-01 8:01 ` Ard Biesheuvel
@ 2025-01-01 10:39 ` Borislav Petkov
0 siblings, 0 replies; 24+ messages in thread
From: Borislav Petkov @ 2025-01-01 10:39 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Nathan Chancellor, clang-built-linux, Ard Biesheuvel,
linux-kernel, x86, Tom Lendacky, Thomas Gleixner, Ingo Molnar,
Dave Hansen, Andy Lutomirski, Arnd Bergmann, Kees Cook,
Brian Gerst, Kevin Loughlin, linux-toolchains
On Wed, Jan 01, 2025 at 09:01:49AM +0100, Ard Biesheuvel wrote:
> Thanks for the analysis.
Ditto.
> Should we perhaps just add the below? All the other sanitizers are
> already disabled for core.o, which is the only object file being built
> in this sub-directory.
Yes, please.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
* Re: [PATCH v4 1/7] x86/sev: Avoid WARN()s and panic()s in early boot code
2024-12-05 11:28 ` [PATCH v4 1/7] x86/sev: Avoid WARN()s and panic()s in early boot code Ard Biesheuvel
2024-12-05 12:28 ` [tip: x86/boot] " tip-bot2 for Ard Biesheuvel
@ 2025-01-06 15:23 ` Tom Lendacky
2025-01-07 11:12 ` [tip: x86/boot] x86/sev: Don't hang but terminate on failure to remap SVSM CA tip-bot2 for Ard Biesheuvel
1 sibling, 1 reply; 24+ messages in thread
From: Tom Lendacky @ 2025-01-06 15:23 UTC (permalink / raw)
To: Ard Biesheuvel, linux-kernel
Cc: x86, Ard Biesheuvel, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, Andy Lutomirski, Arnd Bergmann,
Kees Cook, Brian Gerst, Kevin Loughlin
On 12/5/24 05:28, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
>
> Using WARN() or panic() while executing from the early 1:1 mapping is
> unlikely to do anything useful: the string literals are passed using
> their kernel virtual addresses which are not even mapped yet. But even
> if they were, calling into the printk() machinery from the early 1:1
> mapped code is not going to get very far.
>
> So drop the WARN()s entirely, and replace panic() with a deadloop.
>
> Link: https://lore.kernel.org/all/6904c198-9047-14bb-858e-38b531589379@amd.com/T/#u
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
> arch/x86/coco/sev/core.c | 15 +++++----------
> arch/x86/coco/sev/shared.c | 9 +++++----
> 2 files changed, 10 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
> index c5b0148b8c0a..499b41953e3c 100644
> --- a/arch/x86/coco/sev/core.c
> +++ b/arch/x86/coco/sev/core.c
> @@ -777,15 +777,10 @@ early_set_pages_state(unsigned long vaddr, unsigned long paddr,
>
> val = sev_es_rd_ghcb_msr();
>
> - if (WARN(GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP,
> - "Wrong PSC response code: 0x%x\n",
> - (unsigned int)GHCB_RESP_CODE(val)))
> + if (GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP)
> goto e_term;
>
> - if (WARN(GHCB_MSR_PSC_RESP_VAL(val),
> - "Failed to change page state to '%s' paddr 0x%lx error 0x%llx\n",
> - op == SNP_PAGE_STATE_PRIVATE ? "private" : "shared",
> - paddr, GHCB_MSR_PSC_RESP_VAL(val)))
> + if (GHCB_MSR_PSC_RESP_VAL(val))
> goto e_term;
>
> /* Page validation must be performed after changing to private */
> @@ -821,7 +816,7 @@ void __head early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
> early_set_pages_state(vaddr, paddr, npages, SNP_PAGE_STATE_PRIVATE);
> }
>
> -void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
> +void __head early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
> unsigned long npages)
> {
> /*
> @@ -2361,8 +2356,8 @@ static __head void svsm_setup(struct cc_blob_sev_info *cc_info)
> call.rax = SVSM_CORE_CALL(SVSM_CORE_REMAP_CA);
> call.rcx = pa;
> ret = svsm_perform_call_protocol(&call);
> - if (ret)
> - panic("Can't remap the SVSM CA, ret=%d, rax_out=0x%llx\n", ret, call.rax_out);
> + while (ret)
> + cpu_relax(); /* too early to panic */
We should give some indication of the error and call sev_es_terminate()
here with a new reason code, something like:
sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_SVSM_CA_REMAP_FAIL);
Thanks,
Tom
>
> RIP_REL_REF(boot_svsm_caa) = (struct svsm_ca *)pa;
> RIP_REL_REF(boot_svsm_caa_pa) = pa;
> diff --git a/arch/x86/coco/sev/shared.c b/arch/x86/coco/sev/shared.c
> index 71de53194089..afb7ffc355fe 100644
> --- a/arch/x86/coco/sev/shared.c
> +++ b/arch/x86/coco/sev/shared.c
> @@ -1243,7 +1243,7 @@ static void svsm_pval_terminate(struct svsm_pvalidate_call *pc, int ret, u64 svs
> __pval_terminate(pfn, action, page_size, ret, svsm_ret);
> }
>
> -static void svsm_pval_4k_page(unsigned long paddr, bool validate)
> +static void __head svsm_pval_4k_page(unsigned long paddr, bool validate)
> {
> struct svsm_pvalidate_call *pc;
> struct svsm_call call = {};
> @@ -1275,12 +1275,13 @@ static void svsm_pval_4k_page(unsigned long paddr, bool validate)
>
> ret = svsm_perform_call_protocol(&call);
> if (ret)
> - svsm_pval_terminate(pc, ret, call.rax_out);
> + sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
>
> native_local_irq_restore(flags);
> }
>
> -static void pvalidate_4k_page(unsigned long vaddr, unsigned long paddr, bool validate)
> +static void __head pvalidate_4k_page(unsigned long vaddr, unsigned long paddr,
> + bool validate)
> {
> int ret;
>
> @@ -1293,7 +1294,7 @@ static void pvalidate_4k_page(unsigned long vaddr, unsigned long paddr, bool val
> } else {
> ret = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
> if (ret)
> - __pval_terminate(PHYS_PFN(paddr), validate, RMP_PG_SIZE_4K, ret, 0);
> + sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PVALIDATE);
> }
> }
>
* [tip: x86/boot] x86/sev: Don't hang but terminate on failure to remap SVSM CA
2025-01-06 15:23 ` [PATCH v4 1/7] " Tom Lendacky
@ 2025-01-07 11:12 ` tip-bot2 for Ard Biesheuvel
0 siblings, 0 replies; 24+ messages in thread
From: tip-bot2 for Ard Biesheuvel @ 2025-01-07 11:12 UTC (permalink / raw)
To: linux-tip-commits
Cc: Tom Lendacky, Ard Biesheuvel, Borislav Petkov (AMD), x86,
linux-kernel
The following commit has been merged into the x86/boot branch of tip:
Commit-ID: 893930143440eb5e3ea8f69cb51ab2e61e15c4e1
Gitweb: https://git.kernel.org/tip/893930143440eb5e3ea8f69cb51ab2e61e15c4e1
Author: Ard Biesheuvel <ardb@kernel.org>
AuthorDate: Mon, 06 Jan 2025 16:57:46 +01:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Tue, 07 Jan 2025 11:47:40 +01:00
x86/sev: Don't hang but terminate on failure to remap SVSM CA
Commit
09d35045cd0f ("x86/sev: Avoid WARN()s and panic()s in early boot code")
replaced a panic() that could potentially hit before the kernel is even
mapped with a deadloop, to ensure that execution does not proceed when the
condition in question hits.
As Tom suggests, it is better to terminate and return to the hypervisor
in this case, using a newly invented failure code to describe the
failure condition.
Suggested-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lore.kernel.org/all/9ce88603-20ca-e644-2d8a-aeeaf79cde69@amd.com
---
arch/x86/coco/sev/core.c | 4 ++--
arch/x86/include/asm/sev-common.h | 1 +
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
index 499b419..8689854 100644
--- a/arch/x86/coco/sev/core.c
+++ b/arch/x86/coco/sev/core.c
@@ -2356,8 +2356,8 @@ static __head void svsm_setup(struct cc_blob_sev_info *cc_info)
call.rax = SVSM_CORE_CALL(SVSM_CORE_REMAP_CA);
call.rcx = pa;
ret = svsm_perform_call_protocol(&call);
- while (ret)
- cpu_relax(); /* too early to panic */
+ if (ret)
+ sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_SVSM_CA_REMAP_FAIL);
RIP_REL_REF(boot_svsm_caa) = (struct svsm_ca *)pa;
RIP_REL_REF(boot_svsm_caa_pa) = pa;
diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index 50f5666..577b64d 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -206,6 +206,7 @@ struct snp_psc_desc {
#define GHCB_TERM_NO_SVSM 7 /* SVSM is not advertised in the secrets page */
#define GHCB_TERM_SVSM_VMPL0 8 /* SVSM is present but has set VMPL to 0 */
#define GHCB_TERM_SVSM_CAA 9 /* SVSM is present but CAA is not page aligned */
+#define GHCB_TERM_SVSM_CA_REMAP_FAIL 10 /* SVSM is present but CA could not be remapped */
#define GHCB_RESP_CODE(v) ((v) & GHCB_MSR_INFO_MASK)
Thread overview: 24+ messages
2024-12-05 11:28 [PATCH v4 0/7] x86: Rid .head.text of all abs references Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 1/7] x86/sev: Avoid WARN()s and panic()s in early boot code Ard Biesheuvel
2024-12-05 12:28 ` [tip: x86/boot] " tip-bot2 for Ard Biesheuvel
2025-01-06 15:23 ` [PATCH v4 1/7] " Tom Lendacky
2025-01-07 11:12 ` [tip: x86/boot] x86/sev: Don't hang but terminate on failure to remap SVSM CA tip-bot2 for Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 2/7] x86/boot/64: Determine VA/PA offset before entering C code Ard Biesheuvel
2024-12-05 12:28 ` [tip: x86/boot] " tip-bot2 for Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 3/7] x86/boot/64: Avoid intentional absolute symbol references in .head.text Ard Biesheuvel
2024-12-05 12:28 ` [tip: x86/boot] " tip-bot2 for Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 4/7] x86/boot: Disable UBSAN in early boot code Ard Biesheuvel
2024-12-05 12:28 ` [tip: x86/boot] " tip-bot2 for Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 5/7] x86/kernel: Move ENTRY_TEXT to the start of the image Ard Biesheuvel
2024-12-05 12:28 ` [tip: x86/boot] " tip-bot2 for Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 6/7] x86/boot: Move .head.text into its own output section Ard Biesheuvel
2024-12-05 12:28 ` [tip: x86/boot] " tip-bot2 for Ard Biesheuvel
2024-12-05 11:28 ` [PATCH v4 7/7] x86/boot: Reject absolute references in .head.text Ard Biesheuvel
2024-12-05 12:28 ` [tip: x86/boot] " tip-bot2 for Ard Biesheuvel
2024-12-31 10:01 ` [PATCH v4 0/7] x86: Rid .head.text of all abs references Borislav Petkov
2024-12-31 10:12 ` Ard Biesheuvel
2024-12-31 10:35 ` Borislav Petkov
2024-12-31 19:29 ` Ard Biesheuvel
2025-01-01 2:43 ` Nathan Chancellor
2025-01-01 8:01 ` Ard Biesheuvel
2025-01-01 10:39 ` Borislav Petkov