* [RFC PATCH v6.10 0/4] x86: Rid .head.text of all abs references
@ 2024-03-07 14:30 Ard Biesheuvel
From: Ard Biesheuvel @ 2024-03-07 14:30 UTC (permalink / raw)
To: linux-kernel
Cc: Ard Biesheuvel, Kevin Loughlin, Tom Lendacky, Dionna Glaze,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
Andy Lutomirski, Arnd Bergmann, Kees Cook, Brian Gerst,
linux-kernel
From: Ard Biesheuvel <ardb@kernel.org>
Questions below!
This series removes the last remaining absolute symbol references from
.head.text. Doing so is necessary because code in this section may be
called from a 1:1 mapping of memory, which deviates from the mapping
this code was linked and/or relocated to run at. This is not something
that the toolchains support: even PIC/PIE code is still assumed to
execute from the same mapping that it was relocated to run from by the
startup code or dynamic loader. This means we are basically on our own
here, and need to add measures to ensure the code works as expected in
this manner. (This work was inspired by boot problems on Clang-built
SEV-SNP guest kernels, where the confusion between RIP-relative and
absolute references was causing variable accesses to fault.)
Given that the startup code needs to create the kernel virtual mapping
in the page tables, early references to some kernel virtual addresses
are valid even if they cannot be dereferenced yet. To avoid having to
make this distinction at build time, patches #3 and #4 replace such
valid references with RIP-relative references with an offset applied.
Patches #1 and #2 remove some absolute references from .head.text that
don't need to be there in the first place.
Questions:
- How can we police this at build time? Could we teach objtool to check
for absolute ELF relocations in .head.text, or does this belong in
modpost perhaps?
- Checking for absolute symbol references is not a complete solution, as
.head.text code could call into other code as well. Do we need rigid
checks for that too? Or could we have a soft rule that says you should
only call __head code from __head code?
Cc: Kevin Loughlin <kevinloughlin@google.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Dionna Glaze <dionnaglaze@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: linux-kernel@vger.kernel.org
Ard Biesheuvel (4):
x86/sev: Avoid WARN()s in early boot code
x86/xen/pvh: Move startup code into .ref.text
x86/boot/64: Determine VA/PA offset before entering C code
x86/boot/64: Avoid intentional absolute symbol references in
.head.text
arch/x86/include/asm/setup.h | 3 +-
arch/x86/kernel/head64.c | 38 ++++++++++++--------
arch/x86/kernel/head_64.S | 2 ++
arch/x86/kernel/sev.c | 15 +++-----
arch/x86/platform/pvh/head.S | 2 +-
5 files changed, 33 insertions(+), 27 deletions(-)
base-commit: 428080c9b19bfda37c478cd626dbd3851db1aff9
--
2.44.0.278.ge034bb2e1d-goog
* [RFC PATCH v6.10 1/4] x86/sev: Avoid WARN()s in early boot code
From: Ard Biesheuvel @ 2024-03-07 14:30 UTC (permalink / raw)
To: linux-kernel
Cc: Ard Biesheuvel, Kevin Loughlin, Tom Lendacky, Dionna Glaze,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
Andy Lutomirski, Arnd Bergmann, Kees Cook, Brian Gerst,
linux-kernel
From: Ard Biesheuvel <ardb@kernel.org>
Using WARN() before the kernel is even mapped is unlikely to do anything
useful: the string literals are passed using their kernel virtual
addresses, which are not even mapped yet. But even if they were, calling
into the printk machinery from the early 1:1 mapped code is not going to
get very far.
So drop the WARN()s entirely.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/x86/kernel/sev.c | 15 +++++----------
1 file changed, 5 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 33c14aa1f06c..fc1b7b331815 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -700,7 +700,7 @@ early_set_pages_state(unsigned long vaddr, unsigned long paddr,
if (op == SNP_PAGE_STATE_SHARED) {
/* Page validation must be rescinded before changing to shared */
ret = pvalidate(vaddr, RMP_PG_SIZE_4K, false);
- if (WARN(ret, "Failed to validate address 0x%lx ret %d", paddr, ret))
+ if (ret)
goto e_term;
}
@@ -713,21 +713,16 @@ early_set_pages_state(unsigned long vaddr, unsigned long paddr,
val = sev_es_rd_ghcb_msr();
- if (WARN(GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP,
- "Wrong PSC response code: 0x%x\n",
- (unsigned int)GHCB_RESP_CODE(val)))
+ if (GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP)
goto e_term;
- if (WARN(GHCB_MSR_PSC_RESP_VAL(val),
- "Failed to change page state to '%s' paddr 0x%lx error 0x%llx\n",
- op == SNP_PAGE_STATE_PRIVATE ? "private" : "shared",
- paddr, GHCB_MSR_PSC_RESP_VAL(val)))
+ if (GHCB_MSR_PSC_RESP_VAL(val))
goto e_term;
if (op == SNP_PAGE_STATE_PRIVATE) {
/* Page validation must be performed after changing to private */
ret = pvalidate(vaddr, RMP_PG_SIZE_4K, true);
- if (WARN(ret, "Failed to validate address 0x%lx ret %d", paddr, ret))
+ if (ret)
goto e_term;
}
@@ -760,7 +755,7 @@ void __head early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
early_set_pages_state(vaddr, paddr, npages, SNP_PAGE_STATE_PRIVATE);
}
-void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
+void __head early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
unsigned long npages)
{
/*
--
2.44.0.278.ge034bb2e1d-goog
* [RFC PATCH v6.10 2/4] x86/xen/pvh: Move startup code into .ref.text
From: Ard Biesheuvel @ 2024-03-07 14:30 UTC (permalink / raw)
To: linux-kernel
Cc: Ard Biesheuvel, Kevin Loughlin, Tom Lendacky, Dionna Glaze,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
Andy Lutomirski, Arnd Bergmann, Kees Cook, Brian Gerst,
linux-kernel
From: Ard Biesheuvel <ardb@kernel.org>
The Xen PVH startup code does not need to live in .head.text, given that
its entry point is not at a fixed offset, and is communicated to the
host/VMM via an ELF note.
So move it out of .head.text into another code section. To avoid
spurious warnings about references to .init code, move it into .ref.text
rather than .text. (Note that the ELF note itself is not .init, and so
moving this code into .init.text would result in warnings as well.)
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/x86/platform/pvh/head.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/platform/pvh/head.S b/arch/x86/platform/pvh/head.S
index f7235ef87bc3..0cf6008e834b 100644
--- a/arch/x86/platform/pvh/head.S
+++ b/arch/x86/platform/pvh/head.S
@@ -20,7 +20,7 @@
#include <asm/nospec-branch.h>
#include <xen/interface/elfnote.h>
- __HEAD
+ __REF
/*
* Entry point for PVH guests.
--
2.44.0.278.ge034bb2e1d-goog
* [RFC PATCH v6.10 3/4] x86/boot/64: Determine VA/PA offset before entering C code
From: Ard Biesheuvel @ 2024-03-07 14:30 UTC (permalink / raw)
To: linux-kernel
Cc: Ard Biesheuvel, Kevin Loughlin, Tom Lendacky, Dionna Glaze,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
Andy Lutomirski, Arnd Bergmann, Kees Cook, Brian Gerst,
linux-kernel
From: Ard Biesheuvel <ardb@kernel.org>
We will start using an explicit virtual-to-physical offset in the early
1:1 mapped C code to derive the kernel virtual addresses of _text and
_end without having to rely on absolute symbol references, which should
be avoided in such code.
Currently, phys_base is used for this purpose, which is derived from the
kernel virtual address of _text, and this would lead to a circular
dependency. So instead, derive the virtual-to-physical offset in asm
code, using the kernel VA of common_startup_64, which we already keep in
a global variable for other reasons.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/x86/include/asm/setup.h | 3 ++-
arch/x86/kernel/head64.c | 8 +++++---
arch/x86/kernel/head_64.S | 2 ++
3 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index e61e68d71cba..cc1994516af2 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -47,7 +47,8 @@ extern unsigned long saved_video_mode;
extern void reserve_standard_io_resources(void);
extern void i386_reserve_resources(void);
-extern unsigned long __startup_64(unsigned long physaddr, struct boot_params *bp);
+extern unsigned long __startup_64(unsigned long physaddr, struct boot_params *bp,
+ unsigned long va_offset);
extern void startup_64_setup_gdt_idt(void);
extern void early_setup_idt(void);
extern void __init do_early_exception(struct pt_regs *regs, int trapnr);
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 212e8e06aeba..8fd80cf07691 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -131,10 +131,12 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdv
* doesn't have to generate PC-relative relocations when accessing globals from
* that function. Clang actually does not generate them, which leads to
* boot-time crashes. To work around this problem, every global pointer must
- * be accessed using RIP_REL_REF().
+ * be accessed using RIP_REL_REF(). Kernel virtual addresses can be determined
+ * by subtracting va_offset from the RIP-relative address.
*/
unsigned long __head __startup_64(unsigned long physaddr,
- struct boot_params *bp)
+ struct boot_params *bp,
+ unsigned long va_offset)
{
pmd_t (*early_pgts)[PTRS_PER_PMD] = RIP_REL_REF(early_dynamic_pgts);
unsigned long pgtable_flags;
@@ -156,7 +158,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
* Compute the delta between the address I am compiled to run at
* and the address I am actually running at.
*/
- load_delta = physaddr - (unsigned long)(_text - __START_KERNEL_map);
+ load_delta = __START_KERNEL_map + va_offset;
RIP_REL_REF(phys_base) = load_delta;
/* Is the address not 2M aligned? */
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 79f7c342e3da..3622744349d1 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -107,6 +107,8 @@ SYM_CODE_START_NOALIGN(startup_64)
*/
leaq _text(%rip), %rdi
movq %r15, %rsi
+ leaq common_startup_64(%rip), %rdx
+ subq 0f(%rip), %rdx
call __startup_64
/* Form the CR3 value being sure to include the CR3 modifier */
--
2.44.0.278.ge034bb2e1d-goog
* [RFC PATCH v6.10 4/4] x86/boot/64: Avoid intentional absolute symbol references in .head.text
From: Ard Biesheuvel @ 2024-03-07 14:30 UTC (permalink / raw)
To: linux-kernel
Cc: Ard Biesheuvel, Kevin Loughlin, Tom Lendacky, Dionna Glaze,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
Andy Lutomirski, Arnd Bergmann, Kees Cook, Brian Gerst,
linux-kernel
From: Ard Biesheuvel <ardb@kernel.org>
The code in .head.text executes from a 1:1 mapping and cannot generally
refer to global variables using their kernel virtual addresses. However,
there are some occurrences of such references that are valid: the kernel
virtual addresses of _text and _end are needed to populate the page
tables correctly, and some other section markers are used in a similar
way.
To avoid the need for making exceptions to the rule that .head.text must
not contain any absolute symbol references, derive these addresses from
the RIP-relative 1:1 mapped physical addresses, which can be safely
determined using RIP_REL_REF().
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/x86/kernel/head64.c | 30 ++++++++++++--------
1 file changed, 18 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 8fd80cf07691..ce1a77e26ce3 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -84,9 +84,11 @@ static inline bool check_la57_support(void)
return true;
}
-static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdval_t *pmd)
+static unsigned long __head sme_postprocess_startup(struct boot_params *bp,
+ pmdval_t *pmd,
+ unsigned long va_offset)
{
- unsigned long vaddr, vaddr_end;
+ unsigned long paddr, paddr_end;
int i;
/* Encrypt the kernel and related (if SME is active) */
@@ -99,10 +101,10 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdv
* attribute.
*/
if (sme_get_me_mask()) {
- vaddr = (unsigned long)__start_bss_decrypted;
- vaddr_end = (unsigned long)__end_bss_decrypted;
+ paddr = (unsigned long)&RIP_REL_REF(__start_bss_decrypted);
+ paddr_end = (unsigned long)&RIP_REL_REF(__end_bss_decrypted);
- for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
+ for (; paddr < paddr_end; paddr += PMD_SIZE) {
/*
* On SNP, transition the page to shared in the RMP table so that
* it is consistent with the page table attribute change.
@@ -111,11 +113,11 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdv
* mapping (kernel .text). PVALIDATE, by way of
* early_snp_set_memory_shared(), requires a valid virtual
* address but the kernel is currently running off of the identity
- * mapping so use __pa() to get a *currently* valid virtual address.
+ * mapping so use the PA to get a *currently* valid virtual address.
*/
- early_snp_set_memory_shared(__pa(vaddr), __pa(vaddr), PTRS_PER_PMD);
+ early_snp_set_memory_shared(paddr, paddr, PTRS_PER_PMD);
- i = pmd_index(vaddr);
+ i = pmd_index(paddr - va_offset);
pmd[i] -= sme_get_me_mask();
}
}
@@ -139,6 +141,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
unsigned long va_offset)
{
pmd_t (*early_pgts)[PTRS_PER_PMD] = RIP_REL_REF(early_dynamic_pgts);
+ unsigned long va_text, va_end;
unsigned long pgtable_flags;
unsigned long load_delta;
pgdval_t *pgd;
@@ -165,6 +168,9 @@ unsigned long __head __startup_64(unsigned long physaddr,
if (load_delta & ~PMD_MASK)
for (;;);
+ va_text = physaddr - va_offset;
+ va_end = (unsigned long)&RIP_REL_REF(_end) - va_offset;
+
/* Include the SME encryption mask in the fixup value */
load_delta += sme_get_me_mask();
@@ -225,7 +231,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
pmd_entry += sme_get_me_mask();
pmd_entry += physaddr;
- for (i = 0; i < DIV_ROUND_UP(_end - _text, PMD_SIZE); i++) {
+ for (i = 0; i < DIV_ROUND_UP(va_end - va_text, PMD_SIZE); i++) {
int idx = i + (physaddr >> PMD_SHIFT);
pmd[idx % PTRS_PER_PMD] = pmd_entry + i * PMD_SIZE;
@@ -250,11 +256,11 @@ unsigned long __head __startup_64(unsigned long physaddr,
pmd = &RIP_REL_REF(level2_kernel_pgt)->pmd;
/* invalidate pages before the kernel image */
- for (i = 0; i < pmd_index((unsigned long)_text); i++)
+ for (i = 0; i < pmd_index(va_text); i++)
pmd[i] &= ~_PAGE_PRESENT;
/* fixup pages that are part of the kernel image */
- for (; i <= pmd_index((unsigned long)_end); i++)
+ for (; i <= pmd_index(va_end); i++)
if (pmd[i] & _PAGE_PRESENT)
pmd[i] += load_delta;
@@ -262,7 +268,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
for (; i < PTRS_PER_PMD; i++)
pmd[i] &= ~_PAGE_PRESENT;
- return sme_postprocess_startup(bp, pmd);
+ return sme_postprocess_startup(bp, pmd, va_offset);
}
/* Wipe all early page tables except for the kernel symbol map */
--
2.44.0.278.ge034bb2e1d-goog
* Re: [RFC PATCH v6.10 0/4] x86: Rid .head.text of all abs references
From: Ard Biesheuvel @ 2024-03-07 14:42 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Kevin Loughlin, Tom Lendacky, Dionna Glaze, Thomas Gleixner,
Ingo Molnar, Borislav Petkov, Dave Hansen, Andy Lutomirski,
Arnd Bergmann, Kees Cook, Brian Gerst, linux-kernel
(remove bogus 'linux-kernel@google.com' from the To: line)
On Thu, 7 Mar 2024 at 15:30, Ard Biesheuvel <ardb+git@google.com> wrote:
> [...]