* [PATCH v2 1/3] efi/x86: skip efi_arch_mem_reserve() in case of kexec.
2024-03-18 7:02 ` [PATCH v2 0/3] x86/snp: " Ashish Kalra
@ 2024-03-18 7:02 ` Ashish Kalra
2024-03-19 4:00 ` Dave Young
2024-03-18 7:02 ` [PATCH v2 2/3] x86/mm: Do not zap page table entries mapping unaccepted memory table during kdump Ashish Kalra
2024-03-18 7:02 ` [PATCH v2 3/3] x86/snp: Convert shared memory back to private on kexec Ashish Kalra
2 siblings, 1 reply; 17+ messages in thread
From: Ashish Kalra @ 2024-03-18 7:02 UTC (permalink / raw)
To: tglx, mingo, dave.hansen
Cc: rafael, peterz, adrian.hunter, sathyanarayanan.kuppuswamy,
elena.reshetova, jun.nakajima, rick.p.edgecombe, thomas.lendacky,
seanjc, michael.roth, kai.huang, bhe, kexec, linux-coco,
linux-kernel, kirill.shutemov, bdas, vkuznets, dionnaglaze,
anisinha, jroedel
From: Ashish Kalra <ashish.kalra@amd.com>
For the kexec use case, the kernel needs to use and stick to the EFI memmap
passed from the first kernel via boot-params/setup data; hence, skip
efi_arch_mem_reserve() during kexec.
Additionally, SNP guest kexec testing discovered that the EFI memmap gets
corrupted during chained kexec: kexec_enter_virtual_mode() during late init
remaps the efi_memmap physical pages allocated in efi_arch_mem_reserve() via
memblock, and subsequently causes random EFI memmap corruption once memblock
is freed/torn down.
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
arch/x86/platform/efi/quirks.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
index f0cc00032751..d4562d074371 100644
--- a/arch/x86/platform/efi/quirks.c
+++ b/arch/x86/platform/efi/quirks.c
@@ -258,6 +258,16 @@ void __init efi_arch_mem_reserve(phys_addr_t addr, u64 size)
int num_entries;
void *new;
+ /*
+ * For the kexec use case, we need to use the EFI memmap passed from the
+ * first kernel via setup data, so skip the reservation here.
+ * Additionally, kexec_enter_virtual_mode() during late init will remap
+ * the efi_memmap physical pages allocated here via memblock and then
+ * cause random EFI memmap corruption once memblock is freed.
+ */
+ if (efi_setup)
+ return;
+
if (efi_mem_desc_lookup(addr, &md) ||
md.type != EFI_BOOT_SERVICES_DATA) {
pr_err("Failed to lookup EFI memory descriptor for %pa\n", &addr);
--
2.34.1
_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec
* Re: [PATCH v2 1/3] efi/x86: skip efi_arch_mem_reserve() in case of kexec.
2024-03-18 7:02 ` [PATCH v2 1/3] efi/x86: skip efi_arch_mem_reserve() in case of kexec Ashish Kalra
@ 2024-03-19 4:00 ` Dave Young
2024-03-24 22:32 ` Kalra, Ashish
0 siblings, 1 reply; 17+ messages in thread
From: Dave Young @ 2024-03-19 4:00 UTC (permalink / raw)
To: Ashish Kalra
Cc: tglx, mingo, dave.hansen, rafael, peterz, adrian.hunter,
sathyanarayanan.kuppuswamy, elena.reshetova, jun.nakajima,
rick.p.edgecombe, thomas.lendacky, seanjc, michael.roth,
kai.huang, bhe, kexec, linux-coco, linux-kernel, kirill.shutemov,
bdas, vkuznets, dionnaglaze, anisinha, jroedel, Ard Biesheuvel
Hi,
Added Ard in cc.
On 03/18/24 at 07:02am, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
>
> For kexec use case, need to use and stick to the EFI memmap passed
> from the first kernel via boot-params/setup data, hence,
> skip efi_arch_mem_reserve() during kexec.
>
> Additionally during SNP guest kexec testing discovered that EFI memmap
> is corrupted during chained kexec. kexec_enter_virtual_mode() during
> late init will remap the efi_memmap physical pages allocated in
> efi_arch_mem_reserve() via memboot & then subsequently cause random
> EFI memmap corruption once memblock is freed/teared-down.
>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
> arch/x86/platform/efi/quirks.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
> index f0cc00032751..d4562d074371 100644
> --- a/arch/x86/platform/efi/quirks.c
> +++ b/arch/x86/platform/efi/quirks.c
> @@ -258,6 +258,16 @@ void __init efi_arch_mem_reserve(phys_addr_t addr, u64 size)
> int num_entries;
> void *new;
>
> + /*
> + * For kexec use case, we need to use the EFI memmap passed from the first
> + * kernel via setup data, so we need to skip this.
> + * Additionally kexec_enter_virtual_mode() during late init will remap
> + * the efi_memmap physical pages allocated here via memboot & then
> + * subsequently cause random EFI memmap corruption once memblock is freed.
Can you elaborate a bit about the corruption, is it reproducible without
SNP?
> + */
> + if (efi_setup)
> + return;
> +
How about checking the md attribute instead of checking the efi_setup,
personally I feel it a bit better, something like below:
diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
index f0cc00032751..699332b075bb 100644
--- a/arch/x86/platform/efi/quirks.c
+++ b/arch/x86/platform/efi/quirks.c
@@ -255,15 +255,24 @@ void __init efi_arch_mem_reserve(phys_addr_t addr, u64 size)
struct efi_memory_map_data data = { 0 };
struct efi_mem_range mr;
efi_memory_desc_t md;
- int num_entries;
+ int num_entries, ret;
void *new;
- if (efi_mem_desc_lookup(addr, &md) ||
- md.type != EFI_BOOT_SERVICES_DATA) {
+ ret = efi_mem_desc_lookup(addr, &md);
+ if (ret) {
pr_err("Failed to lookup EFI memory descriptor for %pa\n", &addr);
return;
}
+ if (md.type != EFI_BOOT_SERVICES_DATA) {
+ pr_err("Skip reserving non-EFI Boot Service Data memory for %pa\n", &addr);
+ return;
+ }
+
+ /* Kexec copied the efi memmap from the 1st kernel, thus skip the case. */
+ if (md.attribute & EFI_MEMORY_RUNTIME)
+ return;
+
if (addr + size > md.phys_addr + (md.num_pages << EFI_PAGE_SHIFT)) {
pr_err("Region spans EFI memory descriptors, %pa\n", &addr);
return;
* Re: [PATCH v2 1/3] efi/x86: skip efi_arch_mem_reserve() in case of kexec.
2024-03-19 4:00 ` Dave Young
@ 2024-03-24 22:32 ` Kalra, Ashish
0 siblings, 0 replies; 17+ messages in thread
From: Kalra, Ashish @ 2024-03-24 22:32 UTC (permalink / raw)
To: Dave Young
Cc: tglx, mingo, dave.hansen, rafael, peterz, adrian.hunter,
sathyanarayanan.kuppuswamy, elena.reshetova, jun.nakajima,
rick.p.edgecombe, thomas.lendacky, seanjc, michael.roth,
kai.huang, bhe, kexec, linux-coco, linux-kernel, kirill.shutemov,
bdas, vkuznets, dionnaglaze, anisinha, jroedel, Ard Biesheuvel
Hello,
On 3/18/2024 11:00 PM, Dave Young wrote:
> Hi,
>
> Added Ard in cc.
>
> On 03/18/24 at 07:02am, Ashish Kalra wrote:
>> From: Ashish Kalra <ashish.kalra@amd.com>
>>
>> For kexec use case, need to use and stick to the EFI memmap passed
>> from the first kernel via boot-params/setup data, hence,
>> skip efi_arch_mem_reserve() during kexec.
>>
>> Additionally during SNP guest kexec testing discovered that EFI memmap
>> is corrupted during chained kexec. kexec_enter_virtual_mode() during
>> late init will remap the efi_memmap physical pages allocated in
>> efi_arch_mem_reserve() via memboot & then subsequently cause random
>> EFI memmap corruption once memblock is freed/teared-down.
>>
>> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
>> ---
>> arch/x86/platform/efi/quirks.c | 10 ++++++++++
>> 1 file changed, 10 insertions(+)
>>
>> diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
>> index f0cc00032751..d4562d074371 100644
>> --- a/arch/x86/platform/efi/quirks.c
>> +++ b/arch/x86/platform/efi/quirks.c
>> @@ -258,6 +258,16 @@ void __init efi_arch_mem_reserve(phys_addr_t addr, u64 size)
>> int num_entries;
>> void *new;
>>
>> + /*
>> + * For kexec use case, we need to use the EFI memmap passed from the first
>> + * kernel via setup data, so we need to skip this.
>> + * Additionally kexec_enter_virtual_mode() during late init will remap
>> + * the efi_memmap physical pages allocated here via memboot & then
>> + * subsequently cause random EFI memmap corruption once memblock is freed.
> Can you elaborate a bit about the corruption, is it reproducible without
> SNP?
This is only reproducible on SNP.
This is the call-stack for the above function:
[ 0.313377] efi_arch_mem_reserve+0x64/0x220
[ 0.314060] ? memblock_add_range+0x2a0/0x2e0
[ 0.314763] efi_mem_reserve+0x36/0x60
[ 0.315360] efi_bgrt_init+0x17d/0x1a0
[ 0.315959] ? __pfx_acpi_parse_bgrt+0x10/0x10
[ 0.316711] acpi_parse_bgrt+0x12/0x20
[ 0.317310] acpi_table_parse+0x77/0xd0
[ 0.317922] acpi_boot_init+0x362/0x630
[ 0.318535] setup_arch+0xa4e/0xf90
[ 0.319091] start_kernel+0x68/0xa70
[ 0.319664] x86_64_start_reservations+0x1c/0x30
[ 0.320431] x86_64_start_kernel+0xbf/0x110
[ 0.321099] secondary_startup_64_no_verify+0x179/0x17b
The function efi_arch_mem_reserve() calls efi_memmap_alloc(), which in
turn calls __efi_memmap_alloc_early(), which does memblock_phys_alloc();
it later does efi_memmap_install(), which early_memremap()s the EFI
memmap into this memblock-allocated physical memory. So the EFI memmap
now gets re-mapped into the memblock-allocated memory.
Later, kexec_enter_virtual_mode() calls efi_memmap_init_late(), which
memremap()s the EFI memmap into the above memblock-allocated physical
range.
Obviously, when memblocks are later freed during late init, this
memblock-allocated physical range will get freed and re-allocated,
eventually overwriting and corrupting the EFI memmap and leading to a
subsequent kexec boot crash.
>> + */
>> + if (efi_setup)
>> + return;
>> +
> How about checking the md attribute instead of checking the efi_setup,
> personally I feel it a bit better, something like below:
I based the above on the following code checking for kexec boot:
void __init efi_enter_virtual_mode(void)
{
...
if (efi_setup)
kexec_enter_virtual_mode();
else
__efi_enter_virtual_mode();
But I have tested with the md attribute check you shared (quoted below)
and it works, so I can resend my v2 patch based on it.
Thanks, Ashish
>
> diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
> index f0cc00032751..699332b075bb 100644
> --- a/arch/x86/platform/efi/quirks.c
> +++ b/arch/x86/platform/efi/quirks.c
> @@ -255,15 +255,24 @@ void __init efi_arch_mem_reserve(phys_addr_t addr, u64 size)
> struct efi_memory_map_data data = { 0 };
> struct efi_mem_range mr;
> efi_memory_desc_t md;
> - int num_entries;
> + int num_entries, ret;
> void *new;
>
> - if (efi_mem_desc_lookup(addr, &md) ||
> - md.type != EFI_BOOT_SERVICES_DATA) {
> + ret = efi_mem_desc_lookup(addr, &md);
> + if (ret) {
> pr_err("Failed to lookup EFI memory descriptor for %pa\n", &addr);
> return;
> }
>
> + if (md.type != EFI_BOOT_SERVICES_DATA) {
> + pr_err("Skip reserving non-EFI Boot Service Data memory for %pa\n", &addr);
> + return;
> + }
> +
> + /* Kexec copied the efi memmap from the 1st kernel, thus skip the case. */
> + if (md.attribute & EFI_MEMORY_RUNTIME)
> + return;
> +
> if (addr + size > md.phys_addr + (md.num_pages << EFI_PAGE_SHIFT)) {
> pr_err("Region spans EFI memory descriptors, %pa\n", &addr);
> return;
>
>
* [PATCH v2 2/3] x86/mm: Do not zap page table entries mapping unaccepted memory table during kdump.
2024-03-18 7:02 ` [PATCH v2 0/3] x86/snp: " Ashish Kalra
2024-03-18 7:02 ` [PATCH v2 1/3] efi/x86: skip efi_arch_mem_reserve() in case of kexec Ashish Kalra
@ 2024-03-18 7:02 ` Ashish Kalra
2024-03-21 14:58 ` Kirill A. Shutemov
2024-03-18 7:02 ` [PATCH v2 3/3] x86/snp: Convert shared memory back to private on kexec Ashish Kalra
2 siblings, 1 reply; 17+ messages in thread
From: Ashish Kalra @ 2024-03-18 7:02 UTC (permalink / raw)
To: tglx, mingo, dave.hansen
Cc: rafael, peterz, adrian.hunter, sathyanarayanan.kuppuswamy,
elena.reshetova, jun.nakajima, rick.p.edgecombe, thomas.lendacky,
seanjc, michael.roth, kai.huang, bhe, kexec, linux-coco,
linux-kernel, kirill.shutemov, bdas, vkuznets, dionnaglaze,
anisinha, jroedel
From: Ashish Kalra <ashish.kalra@amd.com>
During crashkernel boot, only pre-allocated crash memory is presented as
E820_TYPE_RAM. This can cause page table entries mapping the unaccepted
memory table to be zapped during phys_pte_init(), phys_pmd_init(),
phys_pud_init() and phys_p4d_init(), as SNP/TDX guests use E820_TYPE_ACPI
to store the unaccepted memory table and pass it between the kernels on
kexec/kdump.
E820_TYPE_ACPI covers not only ACPI data, but also EFI tables, and might
be required by the kernel to function properly.
The problem was discovered while debugging kdump for an SNP guest. The
unaccepted memory table, stored with E820_TYPE_ACPI and passed between
the kernels on kdump, was getting zapped because the PMD entry mapping it
is above the E820_TYPE_RAM range of the reserved crashkernel memory.
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
arch/x86/mm/init_64.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index a0dffaca6d2b..cc294a9e9fd7 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -469,7 +469,9 @@ phys_pte_init(pte_t *pte_page, unsigned long paddr, unsigned long paddr_end,
!e820__mapped_any(paddr & PAGE_MASK, paddr_next,
E820_TYPE_RAM) &&
!e820__mapped_any(paddr & PAGE_MASK, paddr_next,
- E820_TYPE_RESERVED_KERN))
+ E820_TYPE_RESERVED_KERN) &&
+ !e820__mapped_any(paddr & PAGE_MASK, paddr_next,
+ E820_TYPE_ACPI))
set_pte_init(pte, __pte(0), init);
continue;
}
@@ -524,7 +526,9 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
!e820__mapped_any(paddr & PMD_MASK, paddr_next,
E820_TYPE_RAM) &&
!e820__mapped_any(paddr & PMD_MASK, paddr_next,
- E820_TYPE_RESERVED_KERN))
+ E820_TYPE_RESERVED_KERN) &&
+ !e820__mapped_any(paddr & PMD_MASK, paddr_next,
+ E820_TYPE_ACPI))
set_pmd_init(pmd, __pmd(0), init);
continue;
}
@@ -611,7 +615,9 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
!e820__mapped_any(paddr & PUD_MASK, paddr_next,
E820_TYPE_RAM) &&
!e820__mapped_any(paddr & PUD_MASK, paddr_next,
- E820_TYPE_RESERVED_KERN))
+ E820_TYPE_RESERVED_KERN) &&
+ !e820__mapped_any(paddr & PUD_MASK, paddr_next,
+ E820_TYPE_ACPI))
set_pud_init(pud, __pud(0), init);
continue;
}
@@ -698,7 +704,9 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
!e820__mapped_any(paddr & P4D_MASK, paddr_next,
E820_TYPE_RAM) &&
!e820__mapped_any(paddr & P4D_MASK, paddr_next,
- E820_TYPE_RESERVED_KERN))
+ E820_TYPE_RESERVED_KERN) &&
+ !e820__mapped_any(paddr & P4D_MASK, paddr_next,
+ E820_TYPE_ACPI))
set_p4d_init(p4d, __p4d(0), init);
continue;
}
--
2.34.1
* Re: [PATCH v2 2/3] x86/mm: Do not zap page table entries mapping unaccepted memory table during kdump.
2024-03-18 7:02 ` [PATCH v2 2/3] x86/mm: Do not zap page table entries mapping unaccepted memory table during kdump Ashish Kalra
@ 2024-03-21 14:58 ` Kirill A. Shutemov
0 siblings, 0 replies; 17+ messages in thread
From: Kirill A. Shutemov @ 2024-03-21 14:58 UTC (permalink / raw)
To: Ashish Kalra
Cc: tglx, mingo, dave.hansen, rafael, peterz, adrian.hunter,
sathyanarayanan.kuppuswamy, elena.reshetova, jun.nakajima,
rick.p.edgecombe, thomas.lendacky, seanjc, michael.roth,
kai.huang, bhe, kexec, linux-coco, linux-kernel, bdas, vkuznets,
dionnaglaze, anisinha, jroedel
On Mon, Mar 18, 2024 at 07:02:45AM +0000, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
>
> During crashkernel boot only pre-allocated crash memory is presented as
> E820_TYPE_RAM. This can cause page table entries mapping unaccepted memory
> table to be zapped during phys_pte_init(), phys_pmd_init(), phys_pud_init()
> and phys_p4d_init() as SNP/TDX guest use E820_TYPE_ACPI to store the
> unaccepted memory table and pass it between the kernels on
> kexec/kdump.
>
> E820_TYPE_ACPI covers not only ACPI data, but also EFI tables and might
> be required by kernel to function properly.
>
> The problem was discovered during debugging kdump for SNP guest. The
> unaccepted memory table stored with E820_TYPE_ACPI and passed between
> the kernels on kdump was getting zapped as the PMD entry mapping this
> is above the E820_TYPE_RAM range for the reserved crashkernel memory.
>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
It would probably be better if I take this patch into my kexec patchset.
I guess I just got lucky not to hit this issue myself.
--
Kiryl Shutsemau / Kirill A. Shutemov
* [PATCH v2 3/3] x86/snp: Convert shared memory back to private on kexec
2024-03-18 7:02 ` [PATCH v2 0/3] x86/snp: " Ashish Kalra
2024-03-18 7:02 ` [PATCH v2 1/3] efi/x86: skip efi_arch_mem_reserve() in case of kexec Ashish Kalra
2024-03-18 7:02 ` [PATCH v2 2/3] x86/mm: Do not zap page table entries mapping unaccepted memory table during kdump Ashish Kalra
@ 2024-03-18 7:02 ` Ashish Kalra
2 siblings, 0 replies; 17+ messages in thread
From: Ashish Kalra @ 2024-03-18 7:02 UTC (permalink / raw)
To: tglx, mingo, dave.hansen
Cc: rafael, peterz, adrian.hunter, sathyanarayanan.kuppuswamy,
elena.reshetova, jun.nakajima, rick.p.edgecombe, thomas.lendacky,
seanjc, michael.roth, kai.huang, bhe, kexec, linux-coco,
linux-kernel, kirill.shutemov, bdas, vkuznets, dionnaglaze,
anisinha, jroedel
From: Ashish Kalra <ashish.kalra@amd.com>
SNP guests allocate shared buffers to perform I/O. It is done by
allocating pages normally from the buddy allocator and converting them
to shared with set_memory_decrypted().
The second kernel has no idea what memory is converted this way. It only
sees E820_TYPE_RAM.
Accessing shared memory via private mapping will cause unrecoverable RMP
page-faults.
On kexec, walk the direct mapping and convert all shared memory back to
private. This makes all RAM private again, and the second kernel may use
it normally. Additionally, for SNP guests, convert all bss decrypted
section pages back to private, and switch ROM regions back to shared so
that their revalidation does not fail during kexec kernel boot.
The conversion occurs in two steps: stopping new conversions and
unsharing all memory. In the case of normal kexec, the stopping of
conversions takes place while scheduling is still functioning. This
allows for waiting until any ongoing conversions are finished. The
second step is carried out when all CPUs except one are inactive and
interrupts are disabled. This prevents any conflicts with code that may
access shared memory.
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
arch/x86/include/asm/probe_roms.h | 1 +
arch/x86/include/asm/sev.h | 4 +
arch/x86/kernel/probe_roms.c | 16 +++
arch/x86/kernel/sev.c | 169 ++++++++++++++++++++++++++++++
arch/x86/mm/mem_encrypt_amd.c | 3 +
5 files changed, 193 insertions(+)
diff --git a/arch/x86/include/asm/probe_roms.h b/arch/x86/include/asm/probe_roms.h
index 1c7f3815bbd6..d50b67dbff33 100644
--- a/arch/x86/include/asm/probe_roms.h
+++ b/arch/x86/include/asm/probe_roms.h
@@ -6,4 +6,5 @@ struct pci_dev;
extern void __iomem *pci_map_biosrom(struct pci_dev *pdev);
extern void pci_unmap_biosrom(void __iomem *rom);
extern size_t pci_biosrom_size(struct pci_dev *pdev);
+extern void snp_kexec_unprep_rom_memory(void);
#endif
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index d7b27cb34c2b..867518b9bcad 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -229,6 +229,8 @@ void snp_accept_memory(phys_addr_t start, phys_addr_t end);
u64 snp_get_unsupported_features(u64 status);
u64 sev_get_status(void);
void kdump_sev_callback(void);
+void snp_kexec_unshare_mem(void);
+void snp_kexec_stop_conversion(bool crash);
#else
static inline void sev_es_ist_enter(struct pt_regs *regs) { }
static inline void sev_es_ist_exit(void) { }
@@ -258,6 +260,8 @@ static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
static inline u64 snp_get_unsupported_features(u64 status) { return 0; }
static inline u64 sev_get_status(void) { return 0; }
static inline void kdump_sev_callback(void) { }
+static inline void snp_kexec_unshare_mem(void) {}
+static inline void snp_kexec_stop_conversion(bool crash) {}
#endif
#ifdef CONFIG_KVM_AMD_SEV
diff --git a/arch/x86/kernel/probe_roms.c b/arch/x86/kernel/probe_roms.c
index 319fef37d9dc..457f1e5c8d00 100644
--- a/arch/x86/kernel/probe_roms.c
+++ b/arch/x86/kernel/probe_roms.c
@@ -177,6 +177,22 @@ size_t pci_biosrom_size(struct pci_dev *pdev)
}
EXPORT_SYMBOL(pci_biosrom_size);
+void snp_kexec_unprep_rom_memory(void)
+{
+ unsigned long vaddr, npages, sz;
+
+ /*
+ * Switch back ROM regions to shared so that their validation
+ * does not fail during kexec kernel boot.
+ */
+ vaddr = (unsigned long)__va(video_rom_resource.start);
+ sz = (system_rom_resource.end + 1) - video_rom_resource.start;
+ npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
+
+ snp_set_memory_shared(vaddr, npages);
+}
+EXPORT_SYMBOL(snp_kexec_unprep_rom_memory);
+
#define ROMSIGNATURE 0xaa55
static int __init romsignature(const unsigned char *rom)
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 1ef7ae806a01..7443a9620a31 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -40,6 +40,7 @@
#include <asm/apic.h>
#include <asm/cpuid.h>
#include <asm/cmdline.h>
+#include <asm/probe_roms.h>
#define DR7_RESET_VALUE 0x400
@@ -71,6 +72,9 @@ static struct ghcb *boot_ghcb __section(".data");
/* Bitmap of SEV features supported by the hypervisor */
static u64 sev_hv_features __ro_after_init;
+/* Last address to be switched to private during kexec */
+static unsigned long kexec_last_addr_to_make_private;
+
/* #VC handler runtime per-CPU data */
struct sev_es_runtime_data {
struct ghcb ghcb_page;
@@ -906,6 +910,171 @@ void snp_accept_memory(phys_addr_t start, phys_addr_t end)
set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
}
+static bool set_pte_enc(pte_t *kpte, int level, void *va)
+{
+ pte_t new_pte;
+
+ if (pte_none(*kpte))
+ return false;
+
+ /*
+ * Change the physical page attribute from C=0 to C=1. Flush the
+ * caches to ensure that data gets accessed with the correct C-bit.
+ */
+ if (pte_present(*kpte))
+ clflush_cache_range(va, page_level_size(level));
+
+ new_pte = __pte(cc_mkenc(pte_val(*kpte)));
+ set_pte_atomic(kpte, new_pte);
+
+ return true;
+}
+
+static bool make_pte_private(pte_t *pte, unsigned long addr, int pages, int level)
+{
+ struct sev_es_runtime_data *data;
+ struct ghcb *ghcb;
+
+ data = this_cpu_read(runtime_data);
+ ghcb = &data->ghcb_page;
+
+ /* Check whether this cpu's GHCB is part of the range being converted. */
+ if ((unsigned long)ghcb >= addr &&
+ (unsigned long)ghcb <= (addr + (pages * PAGE_SIZE))) {
+ /*
+ * Ensure that the current cpu's GHCB is made private at the
+ * end of the unshare loop so that we continue to use the
+ * optimized GHCB protocol, rather than forcing the switch to
+ * the MSR protocol until the very end.
+ */
+ pr_debug("setting boot_ghcb to NULL for this cpu ghcb\n");
+ kexec_last_addr_to_make_private = addr;
+ return true;
+ }
+
+ if (!set_pte_enc(pte, level, (void *)addr))
+ return false;
+
+ snp_set_memory_private(addr, pages);
+
+ return true;
+}
+
+static void unshare_all_memory(void)
+{
+ unsigned long addr, end;
+
+ /*
+ * Walk direct mapping and convert all shared memory back to private,
+ */
+
+ addr = PAGE_OFFSET;
+ end = PAGE_OFFSET + get_max_mapped();
+
+ while (addr < end) {
+ unsigned long size;
+ unsigned int level;
+ pte_t *pte;
+
+ pte = lookup_address(addr, &level);
+ size = page_level_size(level);
+
+ /*
+ * The pte_none() check is required to skip physical memory holes in the direct mapping.
+ */
+ if (pte && pte_decrypted(*pte) && !pte_none(*pte)) {
+ int pages = size / PAGE_SIZE;
+
+ if (!make_pte_private(pte, addr, pages, level)) {
+ pr_err("Failed to unshare range %#lx-%#lx\n",
+ addr, addr + size);
+ }
+
+ }
+
+ addr += size;
+ }
+ __flush_tlb_all();
+
+}
+
+static void unshare_all_bss_decrypted_memory(void)
+{
+ unsigned long vaddr, vaddr_end;
+ unsigned long size;
+ unsigned int level;
+ unsigned int npages;
+ pte_t *pte;
+
+ vaddr = (unsigned long)__start_bss_decrypted;
+ vaddr_end = (unsigned long)__start_bss_decrypted_unused;
+ npages = (vaddr_end - vaddr) >> PAGE_SHIFT;
+ for (; vaddr < vaddr_end; vaddr += PAGE_SIZE) {
+ pte = lookup_address(vaddr, &level);
+ if (!pte || !pte_decrypted(*pte) || pte_none(*pte))
+ continue;
+
+ size = page_level_size(level);
+ set_pte_enc(pte, level, (void *)vaddr);
+ }
+ vaddr = (unsigned long)__start_bss_decrypted;
+ snp_set_memory_private(vaddr, npages);
+}
+
+/* Stop new private<->shared conversions */
+void snp_kexec_stop_conversion(bool crash)
+{
+ /*
+ * Crash kernel reaches here with interrupts disabled: can't wait for
+ * conversions to finish.
+ *
+ * If race happened, just report and proceed.
+ */
+ bool wait_for_lock = !crash;
+
+ if (!stop_memory_enc_conversion(wait_for_lock))
+ pr_warn("Failed to finish shared<->private conversions\n");
+}
+
+void snp_kexec_unshare_mem(void)
+{
+ if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
+ return;
+
+ /*
+ * Switch back any specific memory regions such as option
+ * ROM regions back to shared so that (re)validation does
+ * not fail when kexec kernel boots.
+ */
+ snp_kexec_unprep_rom_memory();
+
+ unshare_all_memory();
+
+ unshare_all_bss_decrypted_memory();
+
+ if (kexec_last_addr_to_make_private) {
+ unsigned long size;
+ unsigned int level;
+ pte_t *pte;
+
+ /*
+ * Switch to using the MSR protocol to change this cpu's
+ * GHCB to private.
+ * All the per-cpu GHCBs have been switched back to private,
+ * so can't do any more GHCB calls to the hypervisor beyond
+ * this point till the kexec kernel starts running.
+ */
+ boot_ghcb = NULL;
+ sev_cfg.ghcbs_initialized = false;
+
+ pr_debug("boot ghcb 0x%lx\n", kexec_last_addr_to_make_private);
+ pte = lookup_address(kexec_last_addr_to_make_private, &level);
+ size = page_level_size(level);
+ set_pte_enc(pte, level, (void *)kexec_last_addr_to_make_private);
+ snp_set_memory_private(kexec_last_addr_to_make_private, (size / PAGE_SIZE));
+ }
+}
+
static int snp_set_vmsa(void *va, bool vmsa)
{
u64 attrs;
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index d314e577836d..dab2dc2207fb 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -468,6 +468,9 @@ void __init sme_early_init(void)
x86_platform.guest.enc_tlb_flush_required = amd_enc_tlb_flush_required;
x86_platform.guest.enc_cache_flush_required = amd_enc_cache_flush_required;
+ x86_platform.guest.enc_kexec_stop_conversion = snp_kexec_stop_conversion;
+ x86_platform.guest.enc_kexec_unshare_mem = snp_kexec_unshare_mem;
+
/*
* AMD-SEV-ES intercepts the RDMSR to read the X2APIC ID in the
* parallel bringup low level code. That raises #VC which cannot be
--
2.34.1