* Re: [PATCHv7 01/16] x86/acpi: Extract ACPI MADT wakeup code into a separate file [not found] ` <20240212104448.2589568-2-kirill.shutemov@linux.intel.com> @ 2024-02-19 4:45 ` Baoquan He 2024-02-19 10:08 ` Kirill A. Shutemov 2024-02-23 10:32 ` Thomas Gleixner 1 sibling, 1 reply; 39+ messages in thread From: Baoquan He @ 2024-02-19 4:45 UTC (permalink / raw) To: Kirill A. Shutemov Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, kexec, linux-coco, linux-kernel On 02/12/24 at 12:44pm, Kirill A. Shutemov wrote: > In order to prepare for the expansion of support for the ACPI MADT > wakeup method, move the relevant code into a separate file. > > Introduce a new configuration option to clearly indicate dependencies > without the use of ifdefs. > > There have been no functional changes. > > Signed-off-by: Kirill A. 
Shutemov <kirill.shutemov@linux.intel.com> > Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com> > Acked-by: Kai Huang <kai.huang@intel.com> > --- > arch/x86/Kconfig | 7 +++ > arch/x86/include/asm/acpi.h | 5 ++ > arch/x86/kernel/acpi/Makefile | 11 ++-- > arch/x86/kernel/acpi/boot.c | 86 +----------------------------- > arch/x86/kernel/acpi/madt_wakeup.c | 82 ++++++++++++++++++++++++++++ > 5 files changed, 101 insertions(+), 90 deletions(-) > create mode 100644 arch/x86/kernel/acpi/madt_wakeup.c > > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig > index 5edec175b9bf..1c1c06f6c0f1 100644 > --- a/arch/x86/Kconfig > +++ b/arch/x86/Kconfig > @@ -1108,6 +1108,13 @@ config X86_LOCAL_APIC > depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC || PCI_MSI > select IRQ_DOMAIN_HIERARCHY > > +config X86_ACPI_MADT_WAKEUP > + def_bool y > + depends on X86_64 > + depends on ACPI > + depends on SMP > + depends on X86_LOCAL_APIC > + > config X86_IO_APIC > def_bool y > depends on X86_LOCAL_APIC || X86_UP_IOAPIC > diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h > index f896eed4516c..2625b915ae7f 100644 > --- a/arch/x86/include/asm/acpi.h > +++ b/arch/x86/include/asm/acpi.h > @@ -76,6 +76,11 @@ static inline bool acpi_skip_set_wakeup_address(void) > > #define acpi_skip_set_wakeup_address acpi_skip_set_wakeup_address > > +union acpi_subtable_headers; > + > +int __init acpi_parse_mp_wake(union acpi_subtable_headers *header, > + const unsigned long end); > + > /* > * Check if the CPU can handle C2 and deeper > */ > diff --git a/arch/x86/kernel/acpi/Makefile b/arch/x86/kernel/acpi/Makefile > index fc17b3f136fe..8c7329c88a75 100644 > --- a/arch/x86/kernel/acpi/Makefile > +++ b/arch/x86/kernel/acpi/Makefile > @@ -1,11 +1,12 @@ > # SPDX-License-Identifier: GPL-2.0 > > -obj-$(CONFIG_ACPI) += boot.o > -obj-$(CONFIG_ACPI_SLEEP) += sleep.o wakeup_$(BITS).o > -obj-$(CONFIG_ACPI_APEI) += apei.o > -obj-$(CONFIG_ACPI_CPPC_LIB) += 
cppc.o > +obj-$(CONFIG_ACPI) += boot.o > +obj-$(CONFIG_ACPI_SLEEP) += sleep.o wakeup_$(BITS).o > +obj-$(CONFIG_ACPI_APEI) += apei.o > +obj-$(CONFIG_ACPI_CPPC_LIB) += cppc.o > +obj-$(CONFIG_X86_ACPI_MADT_WAKEUP) += madt_wakeup.o > > ifneq ($(CONFIG_ACPI_PROCESSOR),) > -obj-y += cstate.o > +obj-y += cstate.o > endif > > diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c > index 85a3ce2a3666..df3384dc42c7 100644 > --- a/arch/x86/kernel/acpi/boot.c > +++ b/arch/x86/kernel/acpi/boot.c > @@ -67,13 +67,6 @@ static bool has_lapic_cpus __initdata; > static bool acpi_support_online_capable; > #endif > > -#ifdef CONFIG_X86_64 > -/* Physical address of the Multiprocessor Wakeup Structure mailbox */ > -static u64 acpi_mp_wake_mailbox_paddr; > -/* Virtual address of the Multiprocessor Wakeup Structure mailbox */ > -static struct acpi_madt_multiproc_wakeup_mailbox *acpi_mp_wake_mailbox; > -#endif > - > #ifdef CONFIG_X86_IO_APIC > /* > * Locks related to IOAPIC hotplug > @@ -370,60 +363,6 @@ acpi_parse_lapic_nmi(union acpi_subtable_headers * header, const unsigned long e > > return 0; > } > - > -#ifdef CONFIG_X86_64 > -static int acpi_wakeup_cpu(u32 apicid, unsigned long start_ip) > -{ > - /* > - * Remap mailbox memory only for the first call to acpi_wakeup_cpu(). > - * > - * Wakeup of secondary CPUs is fully serialized in the core code. > - * No need to protect acpi_mp_wake_mailbox from concurrent accesses. > - */ > - if (!acpi_mp_wake_mailbox) { > - acpi_mp_wake_mailbox = memremap(acpi_mp_wake_mailbox_paddr, > - sizeof(*acpi_mp_wake_mailbox), > - MEMREMAP_WB); > - } > - > - /* > - * Mailbox memory is shared between the firmware and OS. Firmware will > - * listen on mailbox command address, and once it receives the wakeup > - * command, the CPU associated with the given apicid will be booted. > - * > - * The value of 'apic_id' and 'wakeup_vector' must be visible to the > - * firmware before the wakeup command is visible. 
smp_store_release() > - * ensures ordering and visibility. > - */ > - acpi_mp_wake_mailbox->apic_id = apicid; > - acpi_mp_wake_mailbox->wakeup_vector = start_ip; > - smp_store_release(&acpi_mp_wake_mailbox->command, > - ACPI_MP_WAKE_COMMAND_WAKEUP); > - > - /* > - * Wait for the CPU to wake up. > - * > - * The CPU being woken up is essentially in a spin loop waiting to be > - * woken up. It should not take long for it wake up and acknowledge by > - * zeroing out ->command. > - * > - * ACPI specification doesn't provide any guidance on how long kernel > - * has to wait for a wake up acknowledgement. It also doesn't provide > - * a way to cancel a wake up request if it takes too long. > - * > - * In TDX environment, the VMM has control over how long it takes to > - * wake up secondary. It can postpone scheduling secondary vCPU > - * indefinitely. Giving up on wake up request and reporting error opens > - * possible attack vector for VMM: it can wake up a secondary CPU when > - * kernel doesn't expect it. Wait until positive result of the wake up > - * request. 
> - */ > - while (READ_ONCE(acpi_mp_wake_mailbox->command)) > - cpu_relax(); > - > - return 0; > -} > -#endif /* CONFIG_X86_64 */ > #endif /* CONFIG_X86_LOCAL_APIC */ > > #ifdef CONFIG_X86_IO_APIC > @@ -1159,29 +1098,6 @@ static int __init acpi_parse_madt_lapic_entries(void) > } > return 0; > } > - > -#ifdef CONFIG_X86_64 > -static int __init acpi_parse_mp_wake(union acpi_subtable_headers *header, > - const unsigned long end) > -{ > - struct acpi_madt_multiproc_wakeup *mp_wake; > - > - if (!IS_ENABLED(CONFIG_SMP)) > - return -ENODEV; > - > - mp_wake = (struct acpi_madt_multiproc_wakeup *)header; > - if (BAD_MADT_ENTRY(mp_wake, end)) > - return -EINVAL; > - > - acpi_table_print_madt_entry(&header->common); > - > - acpi_mp_wake_mailbox_paddr = mp_wake->base_address; > - > - apic_update_callback(wakeup_secondary_cpu_64, acpi_wakeup_cpu); > - > - return 0; > -} > -#endif /* CONFIG_X86_64 */ > #endif /* CONFIG_X86_LOCAL_APIC */ > > #ifdef CONFIG_X86_IO_APIC > @@ -1378,7 +1294,7 @@ static void __init acpi_process_madt(void) > smp_found_config = 1; > } > > -#ifdef CONFIG_X86_64 > +#ifdef CONFIG_X86_ACPI_MADT_WAKEUP > /* > * Parse MADT MP Wake entry. > */ > diff --git a/arch/x86/kernel/acpi/madt_wakeup.c b/arch/x86/kernel/acpi/madt_wakeup.c > new file mode 100644 > index 000000000000..7f164d38bd0b > --- /dev/null > +++ b/arch/x86/kernel/acpi/madt_wakeup.c > @@ -0,0 +1,82 @@ > +// SPDX-License-Identifier: GPL-2.0-or-later > +#include <linux/acpi.h> > +#include <linux/io.h> > +#include <asm/apic.h> > +#include <asm/barrier.h> > +#include <asm/processor.h> > + > +/* Physical address of the Multiprocessor Wakeup Structure mailbox */ > +static u64 acpi_mp_wake_mailbox_paddr; > + > +/* Virtual address of the Multiprocessor Wakeup Structure mailbox */ > +static struct acpi_madt_multiproc_wakeup_mailbox *acpi_mp_wake_mailbox; > + > +static int acpi_wakeup_cpu(u32 apicid, unsigned long start_ip) > +{ > + /* > + * Remap mailbox memory only for the first call to acpi_wakeup_cpu(). 
> + * > + * Wakeup of secondary CPUs is fully serialized in the core code. > + * No need to protect acpi_mp_wake_mailbox from concurrent accesses. > + */ > + if (!acpi_mp_wake_mailbox) { > + acpi_mp_wake_mailbox = memremap(acpi_mp_wake_mailbox_paddr, > + sizeof(*acpi_mp_wake_mailbox), > + MEMREMAP_WB); > + } > + > + /* > + * Mailbox memory is shared between the firmware and OS. Firmware will > + * listen on mailbox command address, and once it receives the wakeup > + * command, the CPU associated with the given apicid will be booted. > + * > + * The value of 'apic_id' and 'wakeup_vector' must be visible to the > + * firmware before the wakeup command is visible. smp_store_release() > + * ensures ordering and visibility. > + */ > + acpi_mp_wake_mailbox->apic_id = apicid; > + acpi_mp_wake_mailbox->wakeup_vector = start_ip; > + smp_store_release(&acpi_mp_wake_mailbox->command, > + ACPI_MP_WAKE_COMMAND_WAKEUP); > + > + /* > + * Wait for the CPU to wake up. > + * > + * The CPU being woken up is essentially in a spin loop waiting to be > + * woken up. It should not take long for it wake up and acknowledge by > + * zeroing out ->command. > + * > + * ACPI specification doesn't provide any guidance on how long kernel > + * has to wait for a wake up acknowledgment. It also doesn't provide > + * a way to cancel a wake up request if it takes too long. > + * > + * In TDX environment, the VMM has control over how long it takes to > + * wake up secondary. It can postpone scheduling secondary vCPU > + * indefinitely. Giving up on wake up request and reporting error opens > + * possible attack vector for VMM: it can wake up a secondary CPU when > + * kernel doesn't expect it. Wait until positive result of the wake up > + * request. 
> + */ > + while (READ_ONCE(acpi_mp_wake_mailbox->command)) > + cpu_relax(); > + > + return 0; > +} > + > +int __init acpi_parse_mp_wake(union acpi_subtable_headers *header, > + const unsigned long end) > +{ > + struct acpi_madt_multiproc_wakeup *mp_wake; > + > + mp_wake = (struct acpi_madt_multiproc_wakeup *)header; > + if (BAD_MADT_ENTRY(mp_wake, end)) > + return -EINVAL; > + > + acpi_table_print_madt_entry(&header->common); Do we need add the entry printing for ACPI_MADT_TYPE_MULTIPROC_WAKEUP now in acpi_table_print_madt_entry()? Surely it's not related to this patch. FWIW, Reviewed-by: Baoquan He <bhe@redhat.com> > + > + acpi_mp_wake_mailbox_paddr = mp_wake->base_address; > + > + apic_update_callback(wakeup_secondary_cpu_64, acpi_wakeup_cpu); > + > + return 0; > +} > -- > 2.43.0 > _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCHv7 01/16] x86/acpi: Extract ACPI MADT wakeup code into a separate file 2024-02-19 4:45 ` [PATCHv7 01/16] x86/acpi: Extract ACPI MADT wakeup code into a separate file Baoquan He @ 2024-02-19 10:08 ` Kirill A. Shutemov 2024-02-19 11:36 ` Baoquan He 0 siblings, 1 reply; 39+ messages in thread From: Kirill A. Shutemov @ 2024-02-19 10:08 UTC (permalink / raw) To: Baoquan He Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, kexec, linux-coco, linux-kernel On Mon, Feb 19, 2024 at 12:45:31PM +0800, Baoquan He wrote: > Do we need add the entry printing for ACPI_MADT_TYPE_MULTIPROC_WAKEUP > now in acpi_table_print_madt_entry()? Surely it's not related to this > patch. Good catch. See patch below. Does it look okay? > FWIW, > > Reviewed-by: Baoquan He <bhe@redhat.com> Thanks! From 23b7f9856a3d6b91c702def1e03872a60ae07d0e Mon Sep 17 00:00:00 2001 From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Date: Mon, 19 Feb 2024 11:58:19 +0200 Subject: [PATCH] ACPI: tables: Print MULTIPROC_WAKEUP when MADT is parse When MADT is parsed, print MULTIPROC_WAKEUP information: ACPI: MP Wakeup (version[1], mailbox[0x7fffd000], reset[0x7fffe068]) This debug information will be very helpful during bring up. Signed-off-by: Kirill A. 
Shutemov <kirill.shutemov@linux.intel.com> --- drivers/acpi/tables.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/drivers/acpi/tables.c b/drivers/acpi/tables.c index b07f7d091d13..c59a3617bca7 100644 --- a/drivers/acpi/tables.c +++ b/drivers/acpi/tables.c @@ -198,6 +198,20 @@ void acpi_table_print_madt_entry(struct acpi_subtable_header *header) } break; + case ACPI_MADT_TYPE_MULTIPROC_WAKEUP: + { + struct acpi_madt_multiproc_wakeup *p = + (struct acpi_madt_multiproc_wakeup *)header; + u64 reset_vector = 0; + + if (p->version >= ACPI_MADT_MP_WAKEUP_VERSION_V1) + reset_vector = p->reset_vector; + + pr_debug("MP Wakeup (version[%d], mailbox[%#llx], reset[%#llx])\n", + p->version, p->mailbox_address, reset_vector); + } + break; + case ACPI_MADT_TYPE_CORE_PIC: { struct acpi_madt_core_pic *p = (struct acpi_madt_core_pic *)header; -- 2.43.0 -- Kiryl Shutsemau / Kirill A. Shutemov _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply related [flat|nested] 39+ messages in thread
* Re: [PATCHv7 01/16] x86/acpi: Extract ACPI MADT wakeup code into a separate file 2024-02-19 10:08 ` Kirill A. Shutemov @ 2024-02-19 11:36 ` Baoquan He 0 siblings, 0 replies; 39+ messages in thread From: Baoquan He @ 2024-02-19 11:36 UTC (permalink / raw) To: Kirill A. Shutemov Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, kexec, linux-coco, linux-kernel On 02/19/24 at 12:08pm, Kirill A. Shutemov wrote: > On Mon, Feb 19, 2024 at 12:45:31PM +0800, Baoquan He wrote: > > Do we need add the entry printing for ACPI_MADT_TYPE_MULTIPROC_WAKEUP > > now in acpi_table_print_madt_entry()? Surely it's not related to this > > patch. > > Good catch. See patch below. Does it look okay? Looks good to me, thanks. Reviewed-by: Baoquan He <bhe@redhat.com> > > > FWIW, > > > > Reviewed-by: Baoquan He <bhe@redhat.com> > > Thanks! > > From 23b7f9856a3d6b91c702def1e03872a60ae07d0e Mon Sep 17 00:00:00 2001 > From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> > Date: Mon, 19 Feb 2024 11:58:19 +0200 > Subject: [PATCH] ACPI: tables: Print MULTIPROC_WAKEUP when MADT is parse > > When MADT is parsed, print MULTIPROC_WAKEUP information: > > ACPI: MP Wakeup (version[1], mailbox[0x7fffd000], reset[0x7fffe068]) > > This debug information will be very helpful during bring up. > > Signed-off-by: Kirill A. 
Shutemov <kirill.shutemov@linux.intel.com> > --- > drivers/acpi/tables.c | 14 ++++++++++++++ > 1 file changed, 14 insertions(+) > > diff --git a/drivers/acpi/tables.c b/drivers/acpi/tables.c > index b07f7d091d13..c59a3617bca7 100644 > --- a/drivers/acpi/tables.c > +++ b/drivers/acpi/tables.c > @@ -198,6 +198,20 @@ void acpi_table_print_madt_entry(struct acpi_subtable_header *header) > } > break; > > + case ACPI_MADT_TYPE_MULTIPROC_WAKEUP: > + { > + struct acpi_madt_multiproc_wakeup *p = > + (struct acpi_madt_multiproc_wakeup *)header; > + u64 reset_vector = 0; > + > + if (p->version >= ACPI_MADT_MP_WAKEUP_VERSION_V1) > + reset_vector = p->reset_vector; > + > + pr_debug("MP Wakeup (version[%d], mailbox[%#llx], reset[%#llx])\n", > + p->version, p->mailbox_address, reset_vector); > + } > + break; > + > case ACPI_MADT_TYPE_CORE_PIC: > { > struct acpi_madt_core_pic *p = (struct acpi_madt_core_pic *)header; > -- > 2.43.0 > > -- > Kiryl Shutsemau / Kirill A. Shutemov > _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCHv7 01/16] x86/acpi: Extract ACPI MADT wakeup code into a separate file
  [not found] ` <20240212104448.2589568-2-kirill.shutemov@linux.intel.com>
  2024-02-19  4:45 ` [PATCHv7 01/16] x86/acpi: Extract ACPI MADT wakeup code into a separate file Baoquan He
@ 2024-02-23 10:32 ` Thomas Gleixner
  1 sibling, 0 replies; 39+ messages in thread
From: Thomas Gleixner @ 2024-02-23 10:32 UTC (permalink / raw)
To: Kirill A. Shutemov, Ingo Molnar, Borislav Petkov, Dave Hansen, x86
Cc: Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter,
    Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima,
    Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson,
    Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel,
    Kirill A. Shutemov

On Mon, Feb 12 2024 at 12:44, Kirill A. Shutemov wrote:
> In order to prepare for the expansion of support for the ACPI MADT
> wakeup method, move the relevant code into a separate file.
>
> Introduce a new configuration option to clearly indicate dependencies
> without the use of ifdefs.
>
> There have been no functional changes.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
> Acked-by: Kai Huang <kai.huang@intel.com>

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>

^ permalink raw reply [flat|nested] 39+ messages in thread
* [PATCH 0/2] x86/snp: Add kexec support [not found] <20240212104448.2589568-1-kirill.shutemov@linux.intel.com> [not found] ` <20240212104448.2589568-2-kirill.shutemov@linux.intel.com> @ 2024-02-20 1:18 ` Ashish Kalra 2024-02-20 1:18 ` [PATCH 1/2] x86/mm: Do not zap PMD entry mapping unaccepted memory table during kdump Ashish Kalra 2024-02-20 1:18 ` [PATCH 2/2] x86/snp: Convert shared memory back to private on kexec Ashish Kalra [not found] ` <20240212104448.2589568-6-kirill.shutemov@linux.intel.com> ` (10 subsequent siblings) 12 siblings, 2 replies; 39+ messages in thread From: Ashish Kalra @ 2024-02-20 1:18 UTC (permalink / raw) To: tglx, mingo, bp, dave.hansen, luto, x86 Cc: ardb, hpa, linux-efi, linux-kernel, rafael, peterz, adrian.hunter, sathyanarayanan.kuppuswamy, elena.reshetova, jun.nakajima, rick.p.edgecombe, thomas.lendacky, seanjc, kai.huang, bhe, kexec, linux-coco, kirill.shutemov, anisinha, michael.roth, bdas, vkuznets, dionnaglaze, jroedel, ashwin.kamat From: Ashish Kalra <ashish.kalra@amd.com> The patchset adds bits and pieces to get kexec (and crashkernel) work on SNP guest. This patchset requires [1] for chained guest kexec to work correctly. [1]: https://lore.kernel.org/lkml/20240219225451.787816-1-Ashish.Kalra@amd.com/ Ashish Kalra (2): x86/mm: Do not zap PMD entry mapping unaccepted memory table during kdump. x86/snp: Convert shared memory back to private on kexec arch/x86/include/asm/probe_roms.h | 1 + arch/x86/include/asm/sev.h | 8 ++ arch/x86/kernel/probe_roms.c | 16 +++ arch/x86/kernel/sev.c | 211 ++++++++++++++++++++++++++++++ arch/x86/mm/init_64.c | 4 +- arch/x86/mm/mem_encrypt_amd.c | 18 ++- 6 files changed, 256 insertions(+), 2 deletions(-) -- 2.34.1 _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply [flat|nested] 39+ messages in thread
* [PATCH 1/2] x86/mm: Do not zap PMD entry mapping unaccepted memory table during kdump. 2024-02-20 1:18 ` [PATCH 0/2] x86/snp: Add kexec support Ashish Kalra @ 2024-02-20 1:18 ` Ashish Kalra 2024-02-20 12:42 ` Kirill A. Shutemov 2024-02-20 1:18 ` [PATCH 2/2] x86/snp: Convert shared memory back to private on kexec Ashish Kalra 1 sibling, 1 reply; 39+ messages in thread From: Ashish Kalra @ 2024-02-20 1:18 UTC (permalink / raw) To: tglx, mingo, bp, dave.hansen, luto, x86 Cc: ardb, hpa, linux-efi, linux-kernel, rafael, peterz, adrian.hunter, sathyanarayanan.kuppuswamy, elena.reshetova, jun.nakajima, rick.p.edgecombe, thomas.lendacky, seanjc, kai.huang, bhe, kexec, linux-coco, kirill.shutemov, anisinha, michael.roth, bdas, vkuznets, dionnaglaze, jroedel, ashwin.kamat From: Ashish Kalra <ashish.kalra@amd.com> During crashkernel boot only pre-allocated crash memory is presented as E820_TYPE_RAM. This can cause PMD entry mapping unaccepted memory table to be zapped during phys_pmd_init() as SNP/TDX guest use E820_TYPE_ACPI to store the unaccepted memory table and pass it between the kernels on kexec/kdump. E820_TYPE_ACPI covers not only ACPI data, but also EFI tables and might be required by kernel to function properly. The problem was discovered during debugging kdump for SNP guest. The unaccepted memory table stored with E820_TYPE_ACPI and passed between the kernels on kdump was getting zapped as the PMD entry mapping this is above the E820_TYPE_RAM range for the reserved crashkernel memory. 
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com> --- arch/x86/mm/init_64.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index a0dffaca6d2b..207c6dddde0c 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -524,7 +524,9 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end, !e820__mapped_any(paddr & PMD_MASK, paddr_next, E820_TYPE_RAM) && !e820__mapped_any(paddr & PMD_MASK, paddr_next, - E820_TYPE_RESERVED_KERN)) + E820_TYPE_RESERVED_KERN) && + !e820__mapped_any(paddr & PMD_MASK, paddr_next, + E820_TYPE_ACPI)) set_pmd_init(pmd, __pmd(0), init); continue; } -- 2.34.1 _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply related [flat|nested] 39+ messages in thread
* Re: [PATCH 1/2] x86/mm: Do not zap PMD entry mapping unaccepted memory table during kdump. 2024-02-20 1:18 ` [PATCH 1/2] x86/mm: Do not zap PMD entry mapping unaccepted memory table during kdump Ashish Kalra @ 2024-02-20 12:42 ` Kirill A. Shutemov 2024-02-20 19:09 ` Kalra, Ashish 0 siblings, 1 reply; 39+ messages in thread From: Kirill A. Shutemov @ 2024-02-20 12:42 UTC (permalink / raw) To: Ashish Kalra Cc: tglx, mingo, bp, dave.hansen, luto, x86, ardb, hpa, linux-efi, linux-kernel, rafael, peterz, adrian.hunter, sathyanarayanan.kuppuswamy, elena.reshetova, jun.nakajima, rick.p.edgecombe, thomas.lendacky, seanjc, kai.huang, bhe, kexec, linux-coco, anisinha, michael.roth, bdas, vkuznets, dionnaglaze, jroedel, ashwin.kamat On Tue, Feb 20, 2024 at 01:18:29AM +0000, Ashish Kalra wrote: > From: Ashish Kalra <ashish.kalra@amd.com> > > During crashkernel boot only pre-allocated crash memory is presented as > E820_TYPE_RAM. This can cause PMD entry mapping unaccepted memory table > to be zapped during phys_pmd_init() as SNP/TDX guest use E820_TYPE_ACPI > to store the unaccepted memory table and pass it between the kernels on > kexec/kdump. > > E820_TYPE_ACPI covers not only ACPI data, but also EFI tables and might > be required by kernel to function properly. > > The problem was discovered during debugging kdump for SNP guest. The > unaccepted memory table stored with E820_TYPE_ACPI and passed between > the kernels on kdump was getting zapped as the PMD entry mapping this > is above the E820_TYPE_RAM range for the reserved crashkernel memory. 
> > Signed-off-by: Ashish Kalra <ashish.kalra@amd.com> > --- > arch/x86/mm/init_64.c | 4 +++- > 1 file changed, 3 insertions(+), 1 deletion(-) > > diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c > index a0dffaca6d2b..207c6dddde0c 100644 > --- a/arch/x86/mm/init_64.c > +++ b/arch/x86/mm/init_64.c > @@ -524,7 +524,9 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end, > !e820__mapped_any(paddr & PMD_MASK, paddr_next, > E820_TYPE_RAM) && > !e820__mapped_any(paddr & PMD_MASK, paddr_next, > - E820_TYPE_RESERVED_KERN)) > + E820_TYPE_RESERVED_KERN) && > + !e820__mapped_any(paddr & PMD_MASK, paddr_next, > + E820_TYPE_ACPI)) > set_pmd_init(pmd, __pmd(0), init); > continue; Why do you single out phys_pmd_init()? I think it has to be addressed for all page table levels as we do for E820_TYPE_RAM and E820_TYPE_RESERVED_KERN. -- Kiryl Shutsemau / Kirill A. Shutemov _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH 1/2] x86/mm: Do not zap PMD entry mapping unaccepted memory table during kdump. 2024-02-20 12:42 ` Kirill A. Shutemov @ 2024-02-20 19:09 ` Kalra, Ashish 0 siblings, 0 replies; 39+ messages in thread From: Kalra, Ashish @ 2024-02-20 19:09 UTC (permalink / raw) To: Kirill A. Shutemov Cc: tglx, mingo, bp, dave.hansen, luto, x86, ardb, hpa, linux-efi, linux-kernel, rafael, peterz, adrian.hunter, sathyanarayanan.kuppuswamy, elena.reshetova, jun.nakajima, rick.p.edgecombe, thomas.lendacky, seanjc, kai.huang, bhe, kexec, linux-coco, anisinha, michael.roth, bdas, vkuznets, dionnaglaze, jroedel, ashwin.kamat Hi Kirill, On 2/20/2024 6:42 AM, Kirill A. Shutemov wrote: > On Tue, Feb 20, 2024 at 01:18:29AM +0000, Ashish Kalra wrote: >> From: Ashish Kalra <ashish.kalra@amd.com> >> >> During crashkernel boot only pre-allocated crash memory is presented as >> E820_TYPE_RAM. This can cause PMD entry mapping unaccepted memory table >> to be zapped during phys_pmd_init() as SNP/TDX guest use E820_TYPE_ACPI >> to store the unaccepted memory table and pass it between the kernels on >> kexec/kdump. >> >> E820_TYPE_ACPI covers not only ACPI data, but also EFI tables and might >> be required by kernel to function properly. >> >> The problem was discovered during debugging kdump for SNP guest. The >> unaccepted memory table stored with E820_TYPE_ACPI and passed between >> the kernels on kdump was getting zapped as the PMD entry mapping this >> is above the E820_TYPE_RAM range for the reserved crashkernel memory. 
>> >> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com> >> --- >> arch/x86/mm/init_64.c | 4 +++- >> 1 file changed, 3 insertions(+), 1 deletion(-) >> >> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c >> index a0dffaca6d2b..207c6dddde0c 100644 >> --- a/arch/x86/mm/init_64.c >> +++ b/arch/x86/mm/init_64.c >> @@ -524,7 +524,9 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end, >> !e820__mapped_any(paddr & PMD_MASK, paddr_next, >> E820_TYPE_RAM) && >> !e820__mapped_any(paddr & PMD_MASK, paddr_next, >> - E820_TYPE_RESERVED_KERN)) >> + E820_TYPE_RESERVED_KERN) && >> + !e820__mapped_any(paddr & PMD_MASK, paddr_next, >> + E820_TYPE_ACPI)) >> set_pmd_init(pmd, __pmd(0), init); >> continue; > Why do you single out phys_pmd_init()? I think it has to be addressed for > all page table levels as we do for E820_TYPE_RAM and E820_TYPE_RESERVED_KERN. I believe i only discovered the issue with PMDe's (phys_pmd_init()) because of the crashkernel reserved memory size and the E820_TYPE_ACPI physical memory range mapping on my test system, but you are right this fix needs to be done for all page table levels and i will add also the fix in phys_pte_init(), phys_pud_init() and phys_p4d_init(). Thanks, Ashish _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply [flat|nested] 39+ messages in thread
* [PATCH 2/2] x86/snp: Convert shared memory back to private on kexec 2024-02-20 1:18 ` [PATCH 0/2] x86/snp: Add kexec support Ashish Kalra 2024-02-20 1:18 ` [PATCH 1/2] x86/mm: Do not zap PMD entry mapping unaccepted memory table during kdump Ashish Kalra @ 2024-02-20 1:18 ` Ashish Kalra 2024-02-21 20:35 ` Tom Lendacky 1 sibling, 1 reply; 39+ messages in thread From: Ashish Kalra @ 2024-02-20 1:18 UTC (permalink / raw) To: tglx, mingo, bp, dave.hansen, luto, x86 Cc: ardb, hpa, linux-efi, linux-kernel, rafael, peterz, adrian.hunter, sathyanarayanan.kuppuswamy, elena.reshetova, jun.nakajima, rick.p.edgecombe, thomas.lendacky, seanjc, kai.huang, bhe, kexec, linux-coco, kirill.shutemov, anisinha, michael.roth, bdas, vkuznets, dionnaglaze, jroedel, ashwin.kamat From: Ashish Kalra <ashish.kalra@amd.com> SNP guests allocate shared buffers to perform I/O. It is done by allocating pages normally from the buddy allocator and converting them to shared with set_memory_decrypted(). The second kernel has no idea what memory is converted this way. It only sees E820_TYPE_RAM. Accessing shared memory via private mapping will cause unrecoverable RMP page-faults. On kexec walk direct mapping and convert all shared memory back to private. It makes all RAM private again and second kernel may use it normally. Additionally for SNP guests convert all bss decrypted section pages back to private and switch back ROM regions to shared so that their revalidation does not fail during kexec kernel boot. The conversion occurs in two steps: stopping new conversions and unsharing all memory. In the case of normal kexec, the stopping of conversions takes place while scheduling is still functioning. This allows for waiting until any ongoing conversions are finished. The second step is carried out when all CPUs except one are inactive and interrupts are disabled. This prevents any conflicts with code that may access shared memory. 
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com> --- arch/x86/include/asm/probe_roms.h | 1 + arch/x86/include/asm/sev.h | 8 ++ arch/x86/kernel/probe_roms.c | 16 +++ arch/x86/kernel/sev.c | 211 ++++++++++++++++++++++++++++++ arch/x86/mm/mem_encrypt_amd.c | 18 ++- 5 files changed, 253 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/probe_roms.h b/arch/x86/include/asm/probe_roms.h index 1c7f3815bbd6..d50b67dbff33 100644 --- a/arch/x86/include/asm/probe_roms.h +++ b/arch/x86/include/asm/probe_roms.h @@ -6,4 +6,5 @@ struct pci_dev; extern void __iomem *pci_map_biosrom(struct pci_dev *pdev); extern void pci_unmap_biosrom(void __iomem *rom); extern size_t pci_biosrom_size(struct pci_dev *pdev); +extern void snp_kexec_unprep_rom_memory(void); #endif diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h index 5b4a1ce3d368..dd236d7e9407 100644 --- a/arch/x86/include/asm/sev.h +++ b/arch/x86/include/asm/sev.h @@ -81,6 +81,10 @@ extern void vc_no_ghcb(void); extern void vc_boot_ghcb(void); extern bool handle_vc_boot_ghcb(struct pt_regs *regs); +extern atomic_t conversions_in_progress; +extern bool conversion_allowed; +extern unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot); + /* PVALIDATE return codes */ #define PVALIDATE_FAIL_SIZEMISMATCH 6 @@ -213,6 +217,8 @@ int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct sn void snp_accept_memory(phys_addr_t start, phys_addr_t end); u64 snp_get_unsupported_features(u64 status); u64 sev_get_status(void); +void snp_kexec_unshare_mem(void); +void snp_kexec_stop_conversion(bool crash); #else static inline void sev_es_ist_enter(struct pt_regs *regs) { } static inline void sev_es_ist_exit(void) { } @@ -241,6 +247,8 @@ static inline int snp_issue_guest_request(u64 exit_code, struct snp_req_data *in static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { } static inline u64 snp_get_unsupported_features(u64 status) { return 0; } static inline 
u64 sev_get_status(void) { return 0; } +void snp_kexec_unshare_mem(void) {} +static void snp_kexec_stop_conversion(bool crash) {} #endif #endif diff --git a/arch/x86/kernel/probe_roms.c b/arch/x86/kernel/probe_roms.c index 319fef37d9dc..457f1e5c8d00 100644 --- a/arch/x86/kernel/probe_roms.c +++ b/arch/x86/kernel/probe_roms.c @@ -177,6 +177,22 @@ size_t pci_biosrom_size(struct pci_dev *pdev) } EXPORT_SYMBOL(pci_biosrom_size); +void snp_kexec_unprep_rom_memory(void) +{ + unsigned long vaddr, npages, sz; + + /* + * Switch back ROM regions to shared so that their validation + * does not fail during kexec kernel boot. + */ + vaddr = (unsigned long)__va(video_rom_resource.start); + sz = (system_rom_resource.end + 1) - video_rom_resource.start; + npages = PAGE_ALIGN(sz) >> PAGE_SHIFT; + + snp_set_memory_shared(vaddr, npages); +} +EXPORT_SYMBOL(snp_kexec_unprep_rom_memory); + #define ROMSIGNATURE 0xaa55 static int __init romsignature(const unsigned char *rom) diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index c67285824e82..765ab83129eb 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -23,6 +23,9 @@ #include <linux/platform_device.h> #include <linux/io.h> #include <linux/psp-sev.h> +#include <linux/pagewalk.h> +#include <linux/cacheflush.h> +#include <linux/delay.h> #include <uapi/linux/sev-guest.h> #include <asm/cpu_entry_area.h> @@ -40,6 +43,7 @@ #include <asm/apic.h> #include <asm/cpuid.h> #include <asm/cmdline.h> +#include <asm/probe_roms.h> #define DR7_RESET_VALUE 0x400 @@ -71,6 +75,13 @@ static struct ghcb *boot_ghcb __section(".data"); /* Bitmap of SEV features supported by the hypervisor */ static u64 sev_hv_features __ro_after_init; +/* Last address to be switched to private during kexec */ +static unsigned long last_address_shd_kexec; + +static bool crash_requested; +atomic_t conversions_in_progress; +bool conversion_allowed = true; + /* #VC handler runtime per-CPU data */ struct sev_es_runtime_data { struct ghcb ghcb_page; @@ -906,6 
+917,206 @@ void snp_accept_memory(phys_addr_t start, phys_addr_t end) set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE); } +static inline bool pte_decrypted(pte_t pte) +{ + return cc_mkdec(pte_val(pte)) == pte_val(pte); +} + +static int set_pte_enc(pte_t *kpte, int level, void *va) +{ + pgprot_t old_prot, new_prot; + unsigned long pfn, pa, size; + pte_t new_pte; + + pfn = pg_level_to_pfn(level, kpte, &old_prot); + if (!pfn) + return 0; + + new_prot = old_prot; + pgprot_val(new_prot) |= _PAGE_ENC; + pa = pfn << PAGE_SHIFT; + size = page_level_size(level); + + /* + * Change the physical page attribute from C=0 to C=1. Flush the + * caches to ensure that data gets accessed with the correct C-bit. + */ + clflush_cache_range(va, size); + + /* Change the page encryption mask. */ + new_pte = pfn_pte(pfn, new_prot); + set_pte_atomic(kpte, new_pte); + + return 1; +} + +static int unshare_pte(pte_t *pte, unsigned long addr, int pages, int level) +{ + struct sev_es_runtime_data *data; + struct ghcb *ghcb; + + data = this_cpu_read(runtime_data); + ghcb = &data->ghcb_page; + + /* + * check for GHCB for being part of a PMD range. + */ + if ((unsigned long)ghcb >= addr && + (unsigned long)ghcb <= (addr + (pages * PAGE_SIZE))) { + /* + * setup last address to be made private so that this GHCB + * is made private at the end of unshared loop so that RMP + * does not possibly getting PSMASHed from using the + * MSR protocol. 
+ */ + pr_debug("setting boot_ghcb to NULL for this cpu ghcb\n"); + last_address_shd_kexec = addr; + return 1; + } + if (!set_pte_enc(pte, level, (void *)addr)) + return 0; + snp_set_memory_private(addr, pages); + + return 1; +} + +static void unshare_all_memory(bool unmap) +{ + unsigned long addr, end; + + /* + * Walk direct mapping and convert all shared memory back to private, + */ + + addr = PAGE_OFFSET; + end = PAGE_OFFSET + get_max_mapped(); + + while (addr < end) { + unsigned long size; + unsigned int level; + pte_t *pte; + + pte = lookup_address(addr, &level); + size = page_level_size(level); + + /* + * pte_none() check is required to skip physical memory holes in direct mapped. + */ + if (pte && pte_decrypted(*pte) && !pte_none(*pte)) { + int pages = size / PAGE_SIZE; + + if (!unshare_pte(pte, addr, pages, level)) { + pr_err("Failed to unshare range %#lx-%#lx\n", + addr, addr + size); + } + + } + + addr += size; + } + __flush_tlb_all(); + +} + +static void unshare_all_bss_decrypted_memory(void) +{ + unsigned long vaddr, vaddr_end; + unsigned long size; + unsigned int level; + unsigned int npages; + pte_t *pte; + + vaddr = (unsigned long)__start_bss_decrypted; + vaddr_end = (unsigned long)__start_bss_decrypted_unused; + npages = (vaddr_end - vaddr) >> PAGE_SHIFT; + for (; vaddr < vaddr_end; vaddr += PAGE_SIZE) { + pte = lookup_address(vaddr, &level); + if (!pte || !pte_decrypted(*pte) || pte_none(*pte)) + continue; + + size = page_level_size(level); + set_pte_enc(pte, level, (void *)vaddr); + } + vaddr = (unsigned long)__start_bss_decrypted; + snp_set_memory_private(vaddr, npages); +} + +void snp_kexec_stop_conversion(bool crash) +{ + /* Stop new private<->shared conversions */ + conversion_allowed = false; + crash_requested = crash; + + /* + * Make sure conversion_allowed is cleared before checking + * conversions_in_progress. + */ + barrier(); + + /* + * Crash kernel reaches here with interrupts disabled: can't wait for + * conversions to finish. 
+ * + * If race happened, just report and proceed. + */ + if (!crash) { + unsigned long timeout; + + /* + * Wait for in-flight conversions to complete. + * + * Do not wait more than 30 seconds. + */ + timeout = 30 * USEC_PER_SEC; + while (atomic_read(&conversions_in_progress) && timeout--) + udelay(1); + } + + if (atomic_read(&conversions_in_progress)) + pr_warn("Failed to finish shared<->private conversions\n"); +} + +void snp_kexec_unshare_mem(void) +{ + if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) + return; + + /* + * Switch back any specific memory regions such as option + * ROM regions back to shared so that (re)validation does + * not fail when kexec kernel boots. + */ + snp_kexec_unprep_rom_memory(); + + unshare_all_memory(true); + + unshare_all_bss_decrypted_memory(); + + if (last_address_shd_kexec) { + unsigned long size; + unsigned int level; + pte_t *pte; + + /* + * Switch to using the MSR protocol to change this cpu's + * GHCB to private. + */ + boot_ghcb = NULL; + /* + * All the per-cpu GHCBs have been switched back to private, + * so can't do any more GHCB calls to the hypervisor beyond + * this point till the kexec kernel starts running. 
+ */ + sev_cfg.ghcbs_initialized = false; + + pr_debug("boot ghcb 0x%lx\n", last_address_shd_kexec); + pte = lookup_address(last_address_shd_kexec, &level); + size = page_level_size(level); + set_pte_enc(pte, level, (void *)last_address_shd_kexec); + snp_set_memory_private(last_address_shd_kexec, (size / PAGE_SIZE)); + } +} + static int snp_set_vmsa(void *va, bool vmsa) { u64 attrs; diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c index d314e577836d..87b6475358ad 100644 --- a/arch/x86/mm/mem_encrypt_amd.c +++ b/arch/x86/mm/mem_encrypt_amd.c @@ -214,7 +214,7 @@ void __init sme_map_bootdata(char *real_mode_data) __sme_early_map_unmap_mem(__va(cmdline_paddr), COMMAND_LINE_SIZE, true); } -static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot) +unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot) { unsigned long pfn = 0; pgprot_t prot; @@ -285,6 +285,17 @@ static void enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc) static int amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc) { + atomic_inc(&conversions_in_progress); + + /* + * Check after bumping conversions_in_progress to serialize + * against snp_kexec_stop_conversion(). + */ + if (!conversion_allowed) { + atomic_dec(&conversions_in_progress); + return -EBUSY; + } + /* * To maintain the security guarantees of SEV-SNP guests, make sure * to invalidate the memory before encryption attribute is cleared. 
@@ -308,6 +319,8 @@ static int amd_enc_status_change_finish(unsigned long vaddr, int npages, bool en if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) enc_dec_hypercall(vaddr, npages << PAGE_SHIFT, enc); + atomic_dec(&conversions_in_progress); + return 0; } @@ -468,6 +481,9 @@ void __init sme_early_init(void) x86_platform.guest.enc_tlb_flush_required = amd_enc_tlb_flush_required; x86_platform.guest.enc_cache_flush_required = amd_enc_cache_flush_required; + x86_platform.guest.enc_kexec_stop_conversion = snp_kexec_stop_conversion; + x86_platform.guest.enc_kexec_unshare_mem = snp_kexec_unshare_mem; + /* * AMD-SEV-ES intercepts the RDMSR to read the X2APIC ID in the * parallel bringup low level code. That raises #VC which cannot be -- 2.34.1 _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply related [flat|nested] 39+ messages in thread
* Re: [PATCH 2/2] x86/snp: Convert shared memory back to private on kexec 2024-02-20 1:18 ` [PATCH 2/2] x86/snp: Convert shared memory back to private on kexec Ashish Kalra @ 2024-02-21 20:35 ` Tom Lendacky 2024-02-22 10:50 ` Kirill A. Shutemov 0 siblings, 1 reply; 39+ messages in thread From: Tom Lendacky @ 2024-02-21 20:35 UTC (permalink / raw) To: Ashish Kalra, tglx, mingo, bp, dave.hansen, luto, x86 Cc: ardb, hpa, linux-efi, linux-kernel, rafael, peterz, adrian.hunter, sathyanarayanan.kuppuswamy, elena.reshetova, jun.nakajima, rick.p.edgecombe, seanjc, kai.huang, bhe, kexec, linux-coco, kirill.shutemov, anisinha, michael.roth, bdas, vkuznets, dionnaglaze, jroedel, ashwin.kamat On 2/19/24 19:18, Ashish Kalra wrote: > From: Ashish Kalra <ashish.kalra@amd.com> > > SNP guests allocate shared buffers to perform I/O. It is done by > allocating pages normally from the buddy allocator and converting them > to shared with set_memory_decrypted(). > > The second kernel has no idea what memory is converted this way. It only > sees E820_TYPE_RAM. > > Accessing shared memory via private mapping will cause unrecoverable RMP > page-faults. > > On kexec walk direct mapping and convert all shared memory back to > private. It makes all RAM private again and second kernel may use it > normally. Additionally for SNP guests convert all bss decrypted section > pages back to private and switch back ROM regions to shared so that > their revalidation does not fail during kexec kernel boot. > > The conversion occurs in two steps: stopping new conversions and > unsharing all memory. In the case of normal kexec, the stopping of > conversions takes place while scheduling is still functioning. This > allows for waiting until any ongoing conversions are finished. The > second step is carried out when all CPUs except one are inactive and > interrupts are disabled. This prevents any conflicts with code that may > access shared memory. 
This seems like this patch should be broken down into multiple patches with the final patch setting x86_platform.guest.enc_kexec_stop_conversion and x86_platform.guest.enc_kexec_unshare_mem > > Signed-off-by: Ashish Kalra <ashish.kalra@amd.com> > --- > arch/x86/include/asm/probe_roms.h | 1 + > arch/x86/include/asm/sev.h | 8 ++ > arch/x86/kernel/probe_roms.c | 16 +++ > arch/x86/kernel/sev.c | 211 ++++++++++++++++++++++++++++++ > arch/x86/mm/mem_encrypt_amd.c | 18 ++- > 5 files changed, 253 insertions(+), 1 deletion(-) > > diff --git a/arch/x86/include/asm/probe_roms.h b/arch/x86/include/asm/probe_roms.h > index 1c7f3815bbd6..d50b67dbff33 100644 > --- a/arch/x86/include/asm/probe_roms.h > +++ b/arch/x86/include/asm/probe_roms.h > @@ -6,4 +6,5 @@ struct pci_dev; > extern void __iomem *pci_map_biosrom(struct pci_dev *pdev); > extern void pci_unmap_biosrom(void __iomem *rom); > extern size_t pci_biosrom_size(struct pci_dev *pdev); > +extern void snp_kexec_unprep_rom_memory(void); > #endif > diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h > index 5b4a1ce3d368..dd236d7e9407 100644 > --- a/arch/x86/include/asm/sev.h > +++ b/arch/x86/include/asm/sev.h > @@ -81,6 +81,10 @@ extern void vc_no_ghcb(void); > extern void vc_boot_ghcb(void); > extern bool handle_vc_boot_ghcb(struct pt_regs *regs); > > +extern atomic_t conversions_in_progress; > +extern bool conversion_allowed; > +extern unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot); > + > /* PVALIDATE return codes */ > #define PVALIDATE_FAIL_SIZEMISMATCH 6 > > @@ -213,6 +217,8 @@ int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct sn > void snp_accept_memory(phys_addr_t start, phys_addr_t end); > u64 snp_get_unsupported_features(u64 status); > u64 sev_get_status(void); > +void snp_kexec_unshare_mem(void); > +void snp_kexec_stop_conversion(bool crash); > #else > static inline void sev_es_ist_enter(struct pt_regs *regs) { } > static inline void 
sev_es_ist_exit(void) { } > @@ -241,6 +247,8 @@ static inline int snp_issue_guest_request(u64 exit_code, struct snp_req_data *in > static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { } > static inline u64 snp_get_unsupported_features(u64 status) { return 0; } > static inline u64 sev_get_status(void) { return 0; } > +void snp_kexec_unshare_mem(void) {} > +static void snp_kexec_stop_conversion(bool crash) {} > #endif > > #endif > diff --git a/arch/x86/kernel/probe_roms.c b/arch/x86/kernel/probe_roms.c > index 319fef37d9dc..457f1e5c8d00 100644 > --- a/arch/x86/kernel/probe_roms.c > +++ b/arch/x86/kernel/probe_roms.c > @@ -177,6 +177,22 @@ size_t pci_biosrom_size(struct pci_dev *pdev) > } > EXPORT_SYMBOL(pci_biosrom_size); > > +void snp_kexec_unprep_rom_memory(void) > +{ > + unsigned long vaddr, npages, sz; > + > + /* > + * Switch back ROM regions to shared so that their validation > + * does not fail during kexec kernel boot. > + */ > + vaddr = (unsigned long)__va(video_rom_resource.start); > + sz = (system_rom_resource.end + 1) - video_rom_resource.start; > + npages = PAGE_ALIGN(sz) >> PAGE_SHIFT; > + > + snp_set_memory_shared(vaddr, npages); > +} > +EXPORT_SYMBOL(snp_kexec_unprep_rom_memory); > + > #define ROMSIGNATURE 0xaa55 > > static int __init romsignature(const unsigned char *rom) > diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c > index c67285824e82..765ab83129eb 100644 > --- a/arch/x86/kernel/sev.c > +++ b/arch/x86/kernel/sev.c > @@ -23,6 +23,9 @@ > #include <linux/platform_device.h> > #include <linux/io.h> > #include <linux/psp-sev.h> > +#include <linux/pagewalk.h> > +#include <linux/cacheflush.h> > +#include <linux/delay.h> > #include <uapi/linux/sev-guest.h> > > #include <asm/cpu_entry_area.h> > @@ -40,6 +43,7 @@ > #include <asm/apic.h> > #include <asm/cpuid.h> > #include <asm/cmdline.h> > +#include <asm/probe_roms.h> > > #define DR7_RESET_VALUE 0x400 > > @@ -71,6 +75,13 @@ static struct ghcb *boot_ghcb __section(".data"); 
> /* Bitmap of SEV features supported by the hypervisor */ > static u64 sev_hv_features __ro_after_init; > > +/* Last address to be switched to private during kexec */ > +static unsigned long last_address_shd_kexec; Maybe kexec_last_address_to_make_private ? Or just something that makes a bit more sense. > + > +static bool crash_requested; > +atomic_t conversions_in_progress; > +bool conversion_allowed = true; > + > /* #VC handler runtime per-CPU data */ > struct sev_es_runtime_data { > struct ghcb ghcb_page; > @@ -906,6 +917,206 @@ void snp_accept_memory(phys_addr_t start, phys_addr_t end) > set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE); > } > > +static inline bool pte_decrypted(pte_t pte) > +{ > + return cc_mkdec(pte_val(pte)) == pte_val(pte); > +} > + This is duplicated in TDX code, arch/x86/coco/tdx/tdx.c, looks like something that can go in a header file, maybe mem_encrypt.h. > +static int set_pte_enc(pte_t *kpte, int level, void *va) > +{ > + pgprot_t old_prot, new_prot; > + unsigned long pfn, pa, size; > + pte_t new_pte; > + > + pfn = pg_level_to_pfn(level, kpte, &old_prot); > + if (!pfn) > + return 0; Not sure this matters... a zero PFN is a valid PFN, it's just marked not present. This seems a bit overdone to me, see the end of this function to see if the more compact version works. > + > + new_prot = old_prot; > + pgprot_val(new_prot) |= _PAGE_ENC; > + pa = pfn << PAGE_SHIFT; > + size = page_level_size(level); > + > + /* > + * Change the physical page attribute from C=0 to C=1. Flush the > + * caches to ensure that data gets accessed with the correct C-bit. > + */ > + clflush_cache_range(va, size); > + > + /* Change the page encryption mask. 
*/ > + new_pte = pfn_pte(pfn, new_prot); > + set_pte_atomic(kpte, new_pte); > + > + return 1; > +} static bool set_pte_enc(pte_t *kpte, int level, void *va) { pte_t new_pte; if (pte_none(*kpte)) return false; if (pte_present(*kpte)) clflush_cache_range(va, page_level_size(level)); new_pte = cc_mkenc(*kpte); set_pte_atomic(kpte, new_pte); return true; } > + > +static int unshare_pte(pte_t *pte, unsigned long addr, int pages, int level) Maybe a better name is make_pte_private ? And if you are only returning 0 or 1, it begs to be a bool. > +{ > + struct sev_es_runtime_data *data; > + struct ghcb *ghcb; > + > + data = this_cpu_read(runtime_data); > + ghcb = &data->ghcb_page; > + > + /* > + * check for GHCB for being part of a PMD range. > + */ Single line comment. > + if ((unsigned long)ghcb >= addr && > + (unsigned long)ghcb <= (addr + (pages * PAGE_SIZE))) { > + /* > + * setup last address to be made private so that this GHCB > + * is made private at the end of unshared loop so that RMP > + * does not possibly getting PSMASHed from using the > + * MSR protocol. > + */ Please clarify this comment a bit more... it's a bit hard to follow. > + pr_debug("setting boot_ghcb to NULL for this cpu ghcb\n"); > + last_address_shd_kexec = addr; > + return 1; > + } Add a blank line here. > + if (!set_pte_enc(pte, level, (void *)addr)) > + return 0; Add a blank line here. > + snp_set_memory_private(addr, pages); > + > + return 1; > +} > + > +static void unshare_all_memory(bool unmap) Unused input, looks like this can be removed. > +{ > + unsigned long addr, end; > + > + /* > + * Walk direct mapping and convert all shared memory back to private, > + */ > + > + addr = PAGE_OFFSET; > + end = PAGE_OFFSET + get_max_mapped(); > + > + while (addr < end) { > + unsigned long size; > + unsigned int level; > + pte_t *pte; > + > + pte = lookup_address(addr, &level); > + size = page_level_size(level); > + > + /* > + * pte_none() check is required to skip physical memory holes in direct mapped. 
> + */ > + if (pte && pte_decrypted(*pte) && !pte_none(*pte)) { > + int pages = size / PAGE_SIZE; > + > + if (!unshare_pte(pte, addr, pages, level)) { > + pr_err("Failed to unshare range %#lx-%#lx\n", > + addr, addr + size); > + } > + > + } > + > + addr += size; > + } > + __flush_tlb_all(); This is also mostly in the TDX code and begs to be made common and not copied... please figure out a way to do the "different" things through a registered callback or such. > + > +} > + > +static void unshare_all_bss_decrypted_memory(void) > +{ > + unsigned long vaddr, vaddr_end; > + unsigned long size; > + unsigned int level; > + unsigned int npages; > + pte_t *pte; > + > + vaddr = (unsigned long)__start_bss_decrypted; > + vaddr_end = (unsigned long)__start_bss_decrypted_unused; > + npages = (vaddr_end - vaddr) >> PAGE_SHIFT; > + for (; vaddr < vaddr_end; vaddr += PAGE_SIZE) { > + pte = lookup_address(vaddr, &level); > + if (!pte || !pte_decrypted(*pte) || pte_none(*pte)) > + continue; > + > + size = page_level_size(level); > + set_pte_enc(pte, level, (void *)vaddr); > + } > + vaddr = (unsigned long)__start_bss_decrypted; > + snp_set_memory_private(vaddr, npages); > +} > + > +void snp_kexec_stop_conversion(bool crash) > +{ > + /* Stop new private<->shared conversions */ > + conversion_allowed = false; > + crash_requested = crash; > + > + /* > + * Make sure conversion_allowed is cleared before checking > + * conversions_in_progress. > + */ > + barrier(); This should be smp_wmb(). > + > + /* > + * Crash kernel reaches here with interrupts disabled: can't wait for > + * conversions to finish. > + * > + * If race happened, just report and proceed. > + */ > + if (!crash) { > + unsigned long timeout; > + > + /* > + * Wait for in-flight conversions to complete. > + * > + * Do not wait more than 30 seconds. 
> + */ > + timeout = 30 * USEC_PER_SEC; > + while (atomic_read(&conversions_in_progress) && timeout--) > + udelay(1); > + } > + > + if (atomic_read(&conversions_in_progress)) > + pr_warn("Failed to finish shared<->private conversions\n"); > +} Again, same code as in TDX (except for the crash_requested, but I don't see that used anywhere), so please make it common. > + > +void snp_kexec_unshare_mem(void) > +{ > + if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) > + return; > + > + /* > + * Switch back any specific memory regions such as option > + * ROM regions back to shared so that (re)validation does > + * not fail when kexec kernel boots. > + */ > + snp_kexec_unprep_rom_memory(); > + > + unshare_all_memory(true); > + > + unshare_all_bss_decrypted_memory(); > + > + if (last_address_shd_kexec) { > + unsigned long size; > + unsigned int level; > + pte_t *pte; > + > + /* > + * Switch to using the MSR protocol to change this cpu's > + * GHCB to private. > + */ > + boot_ghcb = NULL; > + /* > + * All the per-cpu GHCBs have been switched back to private, > + * so can't do any more GHCB calls to the hypervisor beyond > + * this point till the kexec kernel starts running. > + */ > + sev_cfg.ghcbs_initialized = false; Maybe combine the two comments above into a single comment and then keep the two assignments together. 
> + > + pr_debug("boot ghcb 0x%lx\n", last_address_shd_kexec); > + pte = lookup_address(last_address_shd_kexec, &level); > + size = page_level_size(level); > + set_pte_enc(pte, level, (void *)last_address_shd_kexec); > + snp_set_memory_private(last_address_shd_kexec, (size / PAGE_SIZE)); > + } > +} > + > static int snp_set_vmsa(void *va, bool vmsa) > { > u64 attrs; > diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c > index d314e577836d..87b6475358ad 100644 > --- a/arch/x86/mm/mem_encrypt_amd.c > +++ b/arch/x86/mm/mem_encrypt_amd.c > @@ -214,7 +214,7 @@ void __init sme_map_bootdata(char *real_mode_data) > __sme_early_map_unmap_mem(__va(cmdline_paddr), COMMAND_LINE_SIZE, true); > } > > -static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot) > +unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot) This change shouldn't be needed anymore if you modify the set_pte_enc() function. > { > unsigned long pfn = 0; > pgprot_t prot; > @@ -285,6 +285,17 @@ static void enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc) > > static int amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc) > { > + atomic_inc(&conversions_in_progress); > + > + /* > + * Check after bumping conversions_in_progress to serialize > + * against snp_kexec_stop_conversion(). > + */ > + if (!conversion_allowed) { > + atomic_dec(&conversions_in_progress); > + return -EBUSY; > + } Duplicate code, please move to a common file, along with the variables, such as arch/x86/mm/mem_encrypt.c ? > + > /* > * To maintain the security guarantees of SEV-SNP guests, make sure > * to invalidate the memory before encryption attribute is cleared. > @@ -308,6 +319,8 @@ static int amd_enc_status_change_finish(unsigned long vaddr, int npages, bool en > if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) > enc_dec_hypercall(vaddr, npages << PAGE_SHIFT, enc); > > + atomic_dec(&conversions_in_progress); Ditto here. 
Thanks, Tom > + > return 0; > } > > @@ -468,6 +481,9 @@ void __init sme_early_init(void) > x86_platform.guest.enc_tlb_flush_required = amd_enc_tlb_flush_required; > x86_platform.guest.enc_cache_flush_required = amd_enc_cache_flush_required; > > + x86_platform.guest.enc_kexec_stop_conversion = snp_kexec_stop_conversion; > + x86_platform.guest.enc_kexec_unshare_mem = snp_kexec_unshare_mem; > + > /* > * AMD-SEV-ES intercepts the RDMSR to read the X2APIC ID in the > * parallel bringup low level code. That raises #VC which cannot be ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH 2/2] x86/snp: Convert shared memory back to private on kexec 2024-02-21 20:35 ` Tom Lendacky @ 2024-02-22 10:50 ` Kirill A. Shutemov 2024-02-22 13:58 ` Tom Lendacky 0 siblings, 1 reply; 39+ messages in thread From: Kirill A. Shutemov @ 2024-02-22 10:50 UTC (permalink / raw) To: Tom Lendacky Cc: Ashish Kalra, tglx, mingo, bp, dave.hansen, luto, x86, ardb, hpa, linux-efi, linux-kernel, rafael, peterz, adrian.hunter, sathyanarayanan.kuppuswamy, elena.reshetova, jun.nakajima, rick.p.edgecombe, seanjc, kai.huang, bhe, kexec, linux-coco, anisinha, michael.roth, bdas, vkuznets, dionnaglaze, jroedel, ashwin.kamat On Wed, Feb 21, 2024 at 02:35:13PM -0600, Tom Lendacky wrote: > > @@ -906,6 +917,206 @@ void snp_accept_memory(phys_addr_t start, phys_addr_t end) > > set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE); > > } > > +static inline bool pte_decrypted(pte_t pte) > > +{ > > + return cc_mkdec(pte_val(pte)) == pte_val(pte); > > +} > > + > > This is duplicated in TDX code, arch/x86/coco/tdx/tdx.c, looks like > something that can go in a header file, maybe mem_encrypt.h. > I think <asm/pgtable.h> is a better fit. > > +void snp_kexec_stop_conversion(bool crash) > > +{ > > + /* Stop new private<->shared conversions */ > > + conversion_allowed = false; > > + crash_requested = crash; > > + > > + /* > > + * Make sure conversion_allowed is cleared before checking > > + * conversions_in_progress. > > + */ > > + barrier(); > > This should be smp_wmb(). > Why? -- Kiryl Shutsemau / Kirill A. Shutemov _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH 2/2] x86/snp: Convert shared memory back to private on kexec 2024-02-22 10:50 ` Kirill A. Shutemov @ 2024-02-22 13:58 ` Tom Lendacky 0 siblings, 0 replies; 39+ messages in thread From: Tom Lendacky @ 2024-02-22 13:58 UTC (permalink / raw) To: Kirill A. Shutemov Cc: Ashish Kalra, tglx, mingo, bp, dave.hansen, luto, x86, ardb, hpa, linux-efi, linux-kernel, rafael, peterz, adrian.hunter, sathyanarayanan.kuppuswamy, elena.reshetova, jun.nakajima, rick.p.edgecombe, seanjc, kai.huang, bhe, kexec, linux-coco, anisinha, michael.roth, bdas, vkuznets, dionnaglaze, jroedel, ashwin.kamat On 2/22/24 04:50, Kirill A. Shutemov wrote: > On Wed, Feb 21, 2024 at 02:35:13PM -0600, Tom Lendacky wrote: >>> @@ -906,6 +917,206 @@ void snp_accept_memory(phys_addr_t start, phys_addr_t end) >>> set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE); >>> } >>> +static inline bool pte_decrypted(pte_t pte) >>> +{ >>> + return cc_mkdec(pte_val(pte)) == pte_val(pte); >>> +} >>> + >> >> This is duplicated in TDX code, arch/x86/coco/tdx/tdx.c, looks like >> something that can go in a header file, maybe mem_encrypt.h. >> > > I think <asm/pgtable.h> is a better fit. > >>> +void snp_kexec_stop_conversion(bool crash) >>> +{ >>> + /* Stop new private<->shared conversions */ >>> + conversion_allowed = false; >>> + crash_requested = crash; >>> + >>> + /* >>> + * Make sure conversion_allowed is cleared before checking >>> + * conversions_in_progress. >>> + */ >>> + barrier(); >> >> This should be smp_wmb(). >> > > Why? IIUC, this is because conversions_in_progress can be set on another thread and so this needs an smp barrier. In this case, smp_wmb() just ends up being barrier(), but to me it is clearer this way. Just my opinion, though. Thanks, Tom > _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply [flat|nested] 39+ messages in thread
[parent not found: <20240212104448.2589568-6-kirill.shutemov@linux.intel.com>]
* Re: [PATCHv7 05/16] x86/kexec: Keep CR4.MCE set during kexec for TDX guest [not found] ` <20240212104448.2589568-6-kirill.shutemov@linux.intel.com> @ 2024-02-22 22:04 ` Thomas Gleixner 0 siblings, 0 replies; 39+ messages in thread From: Thomas Gleixner @ 2024-02-22 22:04 UTC (permalink / raw) To: Kirill A. Shutemov, Ingo Molnar, Borislav Petkov, Dave Hansen, x86 Cc: Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel, Kirill A. Shutemov On Mon, Feb 12 2024 at 12:44, Kirill A. Shutemov wrote: > TDX guests are not allowed to clear CR4.MCE. Attempt to clear it leads > to #VE. > > Use alternatives to keep the flag during kexec for TDX guests. > > The change doesn't affect non-TDX-guest environments. > > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> > Reviewed-by: Kai Huang <kai.huang@intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> ^ permalink raw reply [flat|nested] 39+ messages in thread
[parent not found: <20240212104448.2589568-15-kirill.shutemov@linux.intel.com>]
* Re: [PATCHv7 14/16] x86/smp: Add smp_ops.stop_this_cpu() callback [not found] ` <20240212104448.2589568-15-kirill.shutemov@linux.intel.com> @ 2024-02-23 10:26 ` Thomas Gleixner 0 siblings, 0 replies; 39+ messages in thread From: Thomas Gleixner @ 2024-02-23 10:26 UTC (permalink / raw) To: Kirill A. Shutemov, Ingo Molnar, Borislav Petkov, Dave Hansen, x86 Cc: Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel, Kirill A. Shutemov On Mon, Feb 12 2024 at 12:44, Kirill A. Shutemov wrote: > If the helper is defined, it is called instead of halt() to stop the CPU > at the end of stop_this_cpu() and on crash CPU shutdown. > > ACPI MADT will use it to hand over the CPU to BIOS in order to be able > to wake it up again after kexec. > > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> > Acked-by: Kai Huang <kai.huang@intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> ^ permalink raw reply [flat|nested] 39+ messages in thread
[parent not found: <20240212104448.2589568-13-kirill.shutemov@linux.intel.com>]
* Re: [PATCHv7 12/16] x86/acpi: Rename fields in acpi_madt_multiproc_wakeup structure [not found] ` <20240212104448.2589568-13-kirill.shutemov@linux.intel.com> @ 2024-02-23 10:27 ` Thomas Gleixner 0 siblings, 0 replies; 39+ messages in thread From: Thomas Gleixner @ 2024-02-23 10:27 UTC (permalink / raw) To: Kirill A. Shutemov, Ingo Molnar, Borislav Petkov, Dave Hansen, x86 Cc: Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel, Kirill A. Shutemov On Mon, Feb 12 2024 at 12:44, Kirill A. Shutemov wrote: > To prepare for the addition of support for MADT wakeup structure version > 1, it is necessary to provide more appropriate names for the fields in > the structure. > > The field 'mailbox_version' renamed as 'version'. This field signifies > the version of the structure and the related protocols, rather than the > version of the mailbox. This field has not been utilized in the code > thus far. > > The field 'base_address' renamed as 'mailbox_address' to clarify the > kind of address it represents. In version 1, the structure includes the > reset vector address. Clear and distinct naming helps to prevent any > confusion. > > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> > Reviewed-by: Kai Huang <kai.huang@intel.com> > Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> ^ permalink raw reply [flat|nested] 39+ messages in thread
[parent not found: <20240212104448.2589568-14-kirill.shutemov@linux.intel.com>]
* Re: [PATCHv7 13/16] x86/acpi: Do not attempt to bring up secondary CPUs in kexec case [not found] ` <20240212104448.2589568-14-kirill.shutemov@linux.intel.com> @ 2024-02-23 10:28 ` Thomas Gleixner 0 siblings, 0 replies; 39+ messages in thread From: Thomas Gleixner @ 2024-02-23 10:28 UTC (permalink / raw) To: Kirill A. Shutemov, Ingo Molnar, Borislav Petkov, Dave Hansen, x86 Cc: Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel, Kirill A. Shutemov On Mon, Feb 12 2024 at 12:44, Kirill A. Shutemov wrote: > ACPI MADT doesn't allow to offline a CPU after it was onlined. This > limits kexec: the second kernel won't be able to use more than one CPU. > > To prevent a kexec kernel from onlining secondary CPUs invalidate the > mailbox address in the ACPI MADT wakeup structure which prevents a > kexec kernel to use it. > > This is safe as the booting kernel has the mailbox address cached > already and acpi_wakeup_cpu() uses the cached value to bring up the > secondary CPUs. > > Note: This is a Linux specific convention and not covered by the > ACPI specification. > > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> > Reviewed-by: Kai Huang <kai.huang@intel.com> > Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCHv7 16/16] x86/acpi: Add support for CPU offlining for ACPI MADT wakeup method [not found] ` <20240212104448.2589568-17-kirill.shutemov@linux.intel.com> @ 2024-02-23 10:31 ` Thomas Gleixner 0 siblings, 0 replies; 39+ messages in thread From: Thomas Gleixner @ 2024-02-23 10:31 UTC (permalink / raw) To: Kirill A. Shutemov, Ingo Molnar, Borislav Petkov, Dave Hansen, x86 Cc: Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel, Kirill A. Shutemov On Mon, Feb 12 2024 at 12:44, Kirill A. Shutemov wrote: > MADT Multiprocessor Wakeup structure version 1 brings support for CPU > offlining: the BIOS provides a reset vector to which the CPU has to jump > to offline itself. The new TEST mailbox command can be used to test > whether the CPU offlined itself, which means the BIOS has control over > the CPU and can online it again via the ACPI MADT wakeup method. > > Add CPU offlining support for the ACPI MADT wakeup method by implementing > custom cpu_die(), play_dead() and stop_this_cpu() SMP operations. > > CPU offlining makes it possible to hand secondary CPUs over across kexec, > not limiting the second kernel to a single CPU. > > The change conforms to the approved ACPI spec change proposal. See the > Link. > > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> > Link: https://lore.kernel.org/all/13356251.uLZWGnKmhe@kreacher > Acked-by: Kai Huang <kai.huang@intel.com> > Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
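[Editor's note] The wakeup handshake this series builds on can be sketched in plain C. The following is a userspace simulation, not kernel code: the field layout mirrors the ACPI Multiprocessor Wakeup mailbox structure, but the command values, helper names, and polling logic are simplified, illustrative assumptions.

```c
#include <stdint.h>

/* Illustrative command values; the WAKEUP command matches the ACPI MP
 * Wakeup proposal, TEST is the version-1 addition discussed above. */
enum {
	MP_WAKE_COMMAND_NOOP   = 0,
	MP_WAKE_COMMAND_WAKEUP = 1,
	MP_WAKE_COMMAND_TEST   = 2,
};

/* Trimmed-down mailbox (the real structure also has reserved regions). */
struct mp_wakeup_mailbox {
	uint16_t command;
	uint16_t reserved;
	uint32_t apic_id;
	uint64_t wakeup_vector;
};

/* BSP side: ask the target CPU to start executing at start_ip. */
static void wakeup_cpu(volatile struct mp_wakeup_mailbox *mb,
		       uint32_t apic_id, uint64_t start_ip)
{
	mb->apic_id = apic_id;
	mb->wakeup_vector = start_ip;
	/* The command is written last: it is what the target CPU polls. */
	mb->command = MP_WAKE_COMMAND_WAKEUP;
}

/* Target CPU side (simulated): acknowledge by clearing the command and
 * return the vector it would jump to, or 0 if the mailbox is not for us. */
static uint64_t secondary_cpu_poll(volatile struct mp_wakeup_mailbox *mb,
				   uint32_t my_apic_id)
{
	if (mb->command == MP_WAKE_COMMAND_WAKEUP && mb->apic_id == my_apic_id) {
		uint64_t vector = mb->wakeup_vector;

		mb->command = MP_WAKE_COMMAND_NOOP;	/* ack */
		return vector;
	}
	return 0;
}
```

In this model, invalidating the mailbox address (patch 13) simply means the second kernel never gets a valid `mb` pointer to poll or write, while the cached pointer in the first kernel keeps working.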
* Re: [PATCHv7 02/16] x86/apic: Mark acpi_mp_wake_* variables as __ro_after_init [not found] ` <20240212104448.2589568-3-kirill.shutemov@linux.intel.com> @ 2024-02-19 4:46 ` Baoquan He 2024-02-23 10:31 ` Thomas Gleixner 1 sibling, 0 replies; 39+ messages in thread From: Baoquan He @ 2024-02-19 4:46 UTC (permalink / raw) To: Kirill A. Shutemov Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, kexec, linux-coco, linux-kernel On 02/12/24 at 12:44pm, Kirill A. Shutemov wrote: > acpi_mp_wake_mailbox_paddr and acpi_mp_wake_mailbox initialized once > during ACPI MADT init and never changed. > > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> > Acked-by: Kai Huang <kai.huang@intel.com> Reviewed-by: Baoquan He <bhe@redhat.com> > --- > arch/x86/kernel/acpi/madt_wakeup.c | 4 ++-- > 1 file changed, 2 insertions(+), 2 deletions(-) > > diff --git a/arch/x86/kernel/acpi/madt_wakeup.c b/arch/x86/kernel/acpi/madt_wakeup.c > index 7f164d38bd0b..cf79ea6f3007 100644 > --- a/arch/x86/kernel/acpi/madt_wakeup.c > +++ b/arch/x86/kernel/acpi/madt_wakeup.c > @@ -6,10 +6,10 @@ > #include <asm/processor.h> > > /* Physical address of the Multiprocessor Wakeup Structure mailbox */ > -static u64 acpi_mp_wake_mailbox_paddr; > +static u64 acpi_mp_wake_mailbox_paddr __ro_after_init; > > /* Virtual address of the Multiprocessor Wakeup Structure mailbox */ > -static struct acpi_madt_multiproc_wakeup_mailbox *acpi_mp_wake_mailbox; > +static struct acpi_madt_multiproc_wakeup_mailbox *acpi_mp_wake_mailbox __ro_after_init; > > static int acpi_wakeup_cpu(u32 apicid, unsigned long start_ip) > { > -- > 2.43.0 > _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply 
* Re: [PATCHv7 02/16] x86/apic: Mark acpi_mp_wake_* variables as __ro_after_init [not found] ` <20240212104448.2589568-3-kirill.shutemov@linux.intel.com> 2024-02-19 4:46 ` [PATCHv7 02/16] x86/apic: Mark acpi_mp_wake_* variables as __ro_after_init Baoquan He @ 2024-02-23 10:31 ` Thomas Gleixner 1 sibling, 0 replies; 39+ messages in thread From: Thomas Gleixner @ 2024-02-23 10:31 UTC (permalink / raw) To: Kirill A. Shutemov, Ingo Molnar, Borislav Petkov, Dave Hansen, x86 Cc: Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel, Kirill A. Shutemov On Mon, Feb 12 2024 at 12:44, Kirill A. Shutemov wrote: > acpi_mp_wake_mailbox_paddr and acpi_mp_wake_mailbox initialized once > during ACPI MADT init and never changed. > > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> > Acked-by: Kai Huang <kai.huang@intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCHv7 06/16] x86/mm: Make x86_platform.guest.enc_status_change_*() return errno [not found] ` <20240212104448.2589568-7-kirill.shutemov@linux.intel.com> @ 2024-02-23 18:26 ` Dave Hansen 0 siblings, 0 replies; 39+ messages in thread From: Dave Hansen @ 2024-02-23 18:26 UTC (permalink / raw) To: Kirill A. Shutemov, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86 Cc: Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel On 2/12/24 02:44, Kirill A. Shutemov wrote: > TDX is going to have more than one reason to fail > enc_status_change_prepare(). > > Change the callback to return errno instead of assuming -EIO; > enc_status_change_finish() is changed too to keep the interface symmetric. Good riddance to the bools. Reviewed-by: Dave Hansen <dave.hansen@intel.com>
* Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none [not found] ` <20240212104448.2589568-8-kirill.shutemov@linux.intel.com> @ 2024-02-19 5:12 ` Baoquan He 2024-02-19 13:52 ` Kirill A. Shutemov 2024-02-23 18:45 ` Dave Hansen 1 sibling, 1 reply; 39+ messages in thread From: Baoquan He @ 2024-02-19 5:12 UTC (permalink / raw) To: Kirill A. Shutemov Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, kexec, linux-coco, linux-kernel On 02/12/24 at 12:44pm, Kirill A. Shutemov wrote: > lookup_address() only returns correct page table level for the entry if > the entry is not none. > > Make the helper to always return correct 'level'. It allows to implement > iterator over kernel page tables using lookup_address(). > > Add one more entry into enum pg_level to indicate size of VA covered by > one PGD entry in 5-level paging mode. > > Signed-off-by: Kirill A. 
Shutemov <kirill.shutemov@linux.intel.com> > Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com> > --- > arch/x86/include/asm/pgtable_types.h | 1 + > arch/x86/mm/pat/set_memory.c | 8 ++++---- > 2 files changed, 5 insertions(+), 4 deletions(-) > > diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h > index 0b748ee16b3d..3f648ffdfbe5 100644 > --- a/arch/x86/include/asm/pgtable_types.h > +++ b/arch/x86/include/asm/pgtable_types.h > @@ -548,6 +548,7 @@ enum pg_level { > PG_LEVEL_2M, > PG_LEVEL_1G, > PG_LEVEL_512G, > + PG_LEVEL_256T, > PG_LEVEL_NUM > }; > > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c > index f92da8c9a86d..3612e3167147 100644 > --- a/arch/x86/mm/pat/set_memory.c > +++ b/arch/x86/mm/pat/set_memory.c > @@ -666,32 +666,32 @@ pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address, LGTM, Reviewed-by: Baoquan He <bhe@redhat.com> By the way, we may need update the code comment above function lookup_address_in_pgd() and function lookup_address() since they don't reflect the latest behaviour of them. 
> pud_t *pud; > pmd_t *pmd; > > - *level = PG_LEVEL_NONE; > + *level = PG_LEVEL_256T; > > if (pgd_none(*pgd)) > return NULL; > > + *level = PG_LEVEL_512G; > p4d = p4d_offset(pgd, address); > if (p4d_none(*p4d)) > return NULL; > > - *level = PG_LEVEL_512G; > if (p4d_large(*p4d) || !p4d_present(*p4d)) > return (pte_t *)p4d; > > + *level = PG_LEVEL_1G; > pud = pud_offset(p4d, address); > if (pud_none(*pud)) > return NULL; > > - *level = PG_LEVEL_1G; > if (pud_large(*pud) || !pud_present(*pud)) > return (pte_t *)pud; > > + *level = PG_LEVEL_2M; > pmd = pmd_offset(pud, address); > if (pmd_none(*pmd)) > return NULL; > > - *level = PG_LEVEL_2M; > if (pmd_large(*pmd) || !pmd_present(*pmd)) > return (pte_t *)pmd; > > -- > 2.43.0 > _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply [flat|nested] 39+ messages in thread
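[Editor's note] The practical payoff of always filling out 'level' can be seen with a toy walker. The following is a userspace simulation (illustrative, not the kernel's code): when a lookup that finds a none entry still reports the level of that entry, an iterator can skip the whole virtual-address range the entry covers instead of falling back to 4K-sized probes.

```c
#include <stdbool.h>
#include <stdint.h>

/* Shifts mirror the x86 levels, including the new PG_LEVEL_256T. */
enum pg_level { PG_LEVEL_NONE, PG_LEVEL_4K, PG_LEVEL_2M, PG_LEVEL_1G,
		PG_LEVEL_512G, PG_LEVEL_256T, PG_LEVEL_NUM };

static const int pg_level_shift[PG_LEVEL_NUM] = {
	[PG_LEVEL_4K] = 12, [PG_LEVEL_2M] = 21, [PG_LEVEL_1G] = 30,
	[PG_LEVEL_512G] = 39, [PG_LEVEL_256T] = 48,
};

/* Toy lookup_address(): pretend the first 1G is mapped with 2M pages and
 * everything above it hits a none PUD entry. Crucially, *level is filled
 * in either way, matching the patched behaviour. */
static bool toy_lookup(uint64_t addr, enum pg_level *level)
{
	if (addr < (1ULL << 30)) {
		*level = PG_LEVEL_2M;
		return true;		/* entry found */
	}
	*level = PG_LEVEL_1G;		/* none entry at the PUD level */
	return false;
}

/* Iterator: advance by the size covered at the reported level. */
static unsigned long count_steps(uint64_t start, uint64_t end)
{
	unsigned long steps = 0;

	for (uint64_t addr = start; addr < end; steps++) {
		enum pg_level level;
		uint64_t size;

		toy_lookup(addr, &level);
		size = 1ULL << pg_level_shift[level];
		addr = (addr | (size - 1)) + 1;	/* round up to next entry */
	}
	return steps;
}
```

Walking [0, 2G) this way takes 512 steps of 2M for the populated first gigabyte plus a single 1G step for the unmapped second gigabyte, 513 in total, where a level-blind walker would need 262,144 extra 4K probes for the hole.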
* Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none 2024-02-19 5:12 ` [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none Baoquan He @ 2024-02-19 13:52 ` Kirill A. Shutemov 2024-02-20 10:25 ` Baoquan He 0 siblings, 1 reply; 39+ messages in thread From: Kirill A. Shutemov @ 2024-02-19 13:52 UTC (permalink / raw) To: Baoquan He Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, kexec, linux-coco, linux-kernel On Mon, Feb 19, 2024 at 01:12:32PM +0800, Baoquan He wrote: > On 02/12/24 at 12:44pm, Kirill A. Shutemov wrote: > > lookup_address() only returns correct page table level for the entry if > > the entry is not none. > > > > Make the helper to always return correct 'level'. It allows to implement > > iterator over kernel page tables using lookup_address(). > > > > Add one more entry into enum pg_level to indicate size of VA covered by > > one PGD entry in 5-level paging mode. > > > > Signed-off-by: Kirill A. 
Shutemov <kirill.shutemov@linux.intel.com> > > Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com> > > --- > > arch/x86/include/asm/pgtable_types.h | 1 + > > arch/x86/mm/pat/set_memory.c | 8 ++++---- > > 2 files changed, 5 insertions(+), 4 deletions(-) > > > > diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h > > index 0b748ee16b3d..3f648ffdfbe5 100644 > > --- a/arch/x86/include/asm/pgtable_types.h > > +++ b/arch/x86/include/asm/pgtable_types.h > > @@ -548,6 +548,7 @@ enum pg_level { > > PG_LEVEL_2M, > > PG_LEVEL_1G, > > PG_LEVEL_512G, > > + PG_LEVEL_256T, > > PG_LEVEL_NUM > > }; > > > > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c > > index f92da8c9a86d..3612e3167147 100644 > > --- a/arch/x86/mm/pat/set_memory.c > > +++ b/arch/x86/mm/pat/set_memory.c > > @@ -666,32 +666,32 @@ pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address, > > LGTM, > > Reviewed-by: Baoquan He <bhe@redhat.com> > > By the way, we may need update the code comment above function > lookup_address_in_pgd() and function lookup_address() since they don't > reflect the latest behaviour of them. I am not sure what part of the comment you see doesn't reflect the behaviour. From my PoV, changed code matches the comment closer that original. Hm? -- Kiryl Shutsemau / Kirill A. Shutemov _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none 2024-02-19 13:52 ` Kirill A. Shutemov @ 2024-02-20 10:25 ` Baoquan He 2024-02-20 12:36 ` Kirill A. Shutemov 0 siblings, 1 reply; 39+ messages in thread From: Baoquan He @ 2024-02-20 10:25 UTC (permalink / raw) To: Kirill A. Shutemov Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, kexec, linux-coco, linux-kernel On 02/19/24 at 03:52pm, Kirill A. Shutemov wrote: > On Mon, Feb 19, 2024 at 01:12:32PM +0800, Baoquan He wrote: > > On 02/12/24 at 12:44pm, Kirill A. Shutemov wrote: > > > lookup_address() only returns correct page table level for the entry if > > > the entry is not none. > > > > > > Make the helper to always return correct 'level'. It allows to implement > > > iterator over kernel page tables using lookup_address(). > > > > > > Add one more entry into enum pg_level to indicate size of VA covered by > > > one PGD entry in 5-level paging mode. > > > > > > Signed-off-by: Kirill A. 
Shutemov <kirill.shutemov@linux.intel.com> > > > Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com> > > > --- > > > arch/x86/include/asm/pgtable_types.h | 1 + > > > arch/x86/mm/pat/set_memory.c | 8 ++++---- > > > 2 files changed, 5 insertions(+), 4 deletions(-) > > > > > > diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h > > > index 0b748ee16b3d..3f648ffdfbe5 100644 > > > --- a/arch/x86/include/asm/pgtable_types.h > > > +++ b/arch/x86/include/asm/pgtable_types.h > > > @@ -548,6 +548,7 @@ enum pg_level { > > > PG_LEVEL_2M, > > > PG_LEVEL_1G, > > > PG_LEVEL_512G, > > > + PG_LEVEL_256T, > > > PG_LEVEL_NUM > > > }; > > > > > > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c > > > index f92da8c9a86d..3612e3167147 100644 > > > --- a/arch/x86/mm/pat/set_memory.c > > > +++ b/arch/x86/mm/pat/set_memory.c > > > @@ -666,32 +666,32 @@ pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address, > > > > LGTM, > > > > Reviewed-by: Baoquan He <bhe@redhat.com> > > > > By the way, we may need update the code comment above function > > lookup_address_in_pgd() and function lookup_address() since they don't > > reflect the latest behaviour of them. > > I am not sure what part of the comment you see doesn't reflect the > behaviour. From my PoV, changed code matches the comment closer that > original. Oh, I didn't make it clear. I mean update the code comment for lookup_address(), and add code comment for lookup_address_in_pgd() to mention the level thing. Maybe it's a chance to do that. ===1> * * Lookup the page table entry for a virtual address. Return a pointer * to the entry and the level of the mapping. * * Note: We return pud and pmd either when the entry is marked large ~~~~~~~~~~~ seems we return p4d too * or when the present bit is not set. Otherwise we would return a * pointer to a nonexisting mapping. ~~~~~~~~~~~~~~~ NULL? 
*/ pte_t *lookup_address(unsigned long address, unsigned int *level) { return lookup_address_in_pgd(pgd_offset_k(address), address, level); } EXPORT_SYMBOL_GPL(lookup_address); === ===2> /* * Lookup the page table entry for a virtual address in a specific pgd. * Return a pointer to the entry and the level of the mapping. ~~ also could return NULL if it's none entry. And do we need to mention the level thing? */ pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address, unsigned int *level) ... } _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none 2024-02-20 10:25 ` Baoquan He @ 2024-02-20 12:36 ` Kirill A. Shutemov 2024-02-21 2:37 ` Baoquan He 0 siblings, 1 reply; 39+ messages in thread From: Kirill A. Shutemov @ 2024-02-20 12:36 UTC (permalink / raw) To: Baoquan He Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, kexec, linux-coco, linux-kernel On Tue, Feb 20, 2024 at 06:25:43PM +0800, Baoquan He wrote: > > I am not sure what part of the comment you see doesn't reflect the > > behaviour. From my PoV, changed code matches the comment closer that > > original. > > Oh, I didn't make it clear. I mean update the code comment for > lookup_address(), and add code comment for lookup_address_in_pgd() to > mention the level thing. Maybe it's a chance to do that. > > ===1> > * > * Lookup the page table entry for a virtual address. Return a pointer > * to the entry and the level of the mapping. > * > * Note: We return pud and pmd either when the entry is marked large > ~~~~~~~~~~~ seems we return p4d too > * or when the present bit is not set. Otherwise we would return a > * pointer to a nonexisting mapping. > ~~~~~~~~~~~~~~~ NULL? > */ > pte_t *lookup_address(unsigned long address, unsigned int *level) > { > return lookup_address_in_pgd(pgd_offset_k(address), address, level); > } > EXPORT_SYMBOL_GPL(lookup_address); > === > > ===2> > /* > * Lookup the page table entry for a virtual address in a specific pgd. > * Return a pointer to the entry and the level of the mapping. > ~~ also could return NULL if it's none entry. And do we need to > mention the level thing? > */ > pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address, > unsigned int *level) > ... 
> } > What about this fixup: diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index 3612e3167147..425ff6e192e6 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -657,7 +657,8 @@ static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long star /* * Lookup the page table entry for a virtual address in a specific pgd. - * Return a pointer to the entry and the level of the mapping. + * Return a pointer to the entry and the level of the mapping (or NULL if + * the entry is none) and level of the entry. */ pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address, unsigned int *level) @@ -704,9 +705,8 @@ pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address, * Lookup the page table entry for a virtual address. Return a pointer * to the entry and the level of the mapping. * - * Note: We return pud and pmd either when the entry is marked large - * or when the present bit is not set. Otherwise we would return a - * pointer to a nonexisting mapping. + * Note: the function returns p4d, pud and pmd either when the entry is marked + * large or when the present bit is not set. Otherwise it returns NULL. */ pte_t *lookup_address(unsigned long address, unsigned int *level) { -- Kiryl Shutsemau / Kirill A. Shutemov _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply related [flat|nested] 39+ messages in thread
* Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none 2024-02-20 12:36 ` Kirill A. Shutemov @ 2024-02-21 2:37 ` Baoquan He 2024-02-21 14:15 ` Kirill A. Shutemov 0 siblings, 1 reply; 39+ messages in thread From: Baoquan He @ 2024-02-21 2:37 UTC (permalink / raw) To: Kirill A. Shutemov Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, kexec, linux-coco, linux-kernel On 02/20/24 at 02:36pm, Kirill A. Shutemov wrote: > On Tue, Feb 20, 2024 at 06:25:43PM +0800, Baoquan He wrote: > > > I am not sure what part of the comment you see doesn't reflect the > > > behaviour. From my PoV, changed code matches the comment closer that > > > original. > > > > Oh, I didn't make it clear. I mean update the code comment for > > lookup_address(), and add code comment for lookup_address_in_pgd() to > > mention the level thing. Maybe it's a chance to do that. > > > > ===1> > > * > > * Lookup the page table entry for a virtual address. Return a pointer > > * to the entry and the level of the mapping. > > * > > * Note: We return pud and pmd either when the entry is marked large > > ~~~~~~~~~~~ seems we return p4d too > > * or when the present bit is not set. Otherwise we would return a > > * pointer to a nonexisting mapping. > > ~~~~~~~~~~~~~~~ NULL? > > */ > > pte_t *lookup_address(unsigned long address, unsigned int *level) > > { > > return lookup_address_in_pgd(pgd_offset_k(address), address, level); > > } > > EXPORT_SYMBOL_GPL(lookup_address); > > === > > > > ===2> > > /* > > * Lookup the page table entry for a virtual address in a specific pgd. > > * Return a pointer to the entry and the level of the mapping. > > ~~ also could return NULL if it's none entry. And do we need to > > mention the level thing? 
> > */ > > pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address, > > unsigned int *level) > > ... > > } > > > > What about this fixup: Some nitpicks. > > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c > index 3612e3167147..425ff6e192e6 100644 > --- a/arch/x86/mm/pat/set_memory.c > +++ b/arch/x86/mm/pat/set_memory.c > @@ -657,7 +657,8 @@ static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long star > > /* > * Lookup the page table entry for a virtual address in a specific pgd. > - * Return a pointer to the entry and the level of the mapping. > + * Return a pointer to the entry and the level of the mapping (or NULL if > + * the entry is none) and level of the entry. ^ this right parenthesis may need be moved to the end. ======= * Return a pointer to the entry and the level of the mapping (or NULL if * the entry is none and level of the entry). ======= > */ > pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address, > unsigned int *level) > @@ -704,9 +705,8 @@ pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address, > * Lookup the page table entry for a virtual address. Return a pointer > * to the entry and the level of the mapping. > * > - * Note: We return pud and pmd either when the entry is marked large > - * or when the present bit is not set. Otherwise we would return a > - * pointer to a nonexisting mapping. > + * Note: the function returns p4d, pud and pmd either when the entry is marked ~~~ ^ s/and/or/ > + * large or when the present bit is not set. Otherwise it returns NULL. > */ > pte_t *lookup_address(unsigned long address, unsigned int *level) > { > -- > Kiryl Shutsemau / Kirill A. Shutemov > _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none 2024-02-21 2:37 ` Baoquan He @ 2024-02-21 14:15 ` Kirill A. Shutemov 2024-02-22 11:01 ` Baoquan He 0 siblings, 1 reply; 39+ messages in thread From: Kirill A. Shutemov @ 2024-02-21 14:15 UTC (permalink / raw) To: Baoquan He Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, kexec, linux-coco, linux-kernel On Wed, Feb 21, 2024 at 10:37:29AM +0800, Baoquan He wrote: > > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c > > index 3612e3167147..425ff6e192e6 100644 > > --- a/arch/x86/mm/pat/set_memory.c > > +++ b/arch/x86/mm/pat/set_memory.c > > @@ -657,7 +657,8 @@ static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long star > > > > /* > > * Lookup the page table entry for a virtual address in a specific pgd. > > - * Return a pointer to the entry and the level of the mapping. > > + * Return a pointer to the entry and the level of the mapping (or NULL if > > + * the entry is none) and level of the entry. > ^ this right parenthesis may need be moved to the end. > > > ======= > * Return a pointer to the entry and the level of the mapping (or NULL if > * the entry is none and level of the entry). > ======= Emm.. I like my variant more. We return level regardless if the entry none or not. I don't see a reason to repeat it twice. > > */ > > pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address, > > unsigned int *level) > > @@ -704,9 +705,8 @@ pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address, > > * Lookup the page table entry for a virtual address. Return a pointer > > * to the entry and the level of the mapping. 
> > * > > - * Note: We return pud and pmd either when the entry is marked large > > - * or when the present bit is not set. Otherwise we would return a > > - * pointer to a nonexisting mapping. > > + * Note: the function returns p4d, pud and pmd either when the entry is marked > ~~~ > ^ s/and/or/ Fair enough. -- Kiryl Shutsemau / Kirill A. Shutemov _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none 2024-02-21 14:15 ` Kirill A. Shutemov @ 2024-02-22 11:01 ` Baoquan He 2024-02-22 14:04 ` Kirill A. Shutemov 0 siblings, 1 reply; 39+ messages in thread From: Baoquan He @ 2024-02-22 11:01 UTC (permalink / raw) To: Kirill A. Shutemov Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, kexec, linux-coco, linux-kernel On 02/21/24 at 04:15pm, Kirill A. Shutemov wrote: > On Wed, Feb 21, 2024 at 10:37:29AM +0800, Baoquan He wrote: > > > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c > > > index 3612e3167147..425ff6e192e6 100644 > > > --- a/arch/x86/mm/pat/set_memory.c > > > +++ b/arch/x86/mm/pat/set_memory.c > > > @@ -657,7 +657,8 @@ static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long star > > > > > > /* > > > * Lookup the page table entry for a virtual address in a specific pgd. > > > - * Return a pointer to the entry and the level of the mapping. > > > + * Return a pointer to the entry and the level of the mapping (or NULL if > > > + * the entry is none) and level of the entry. > > ^ this right parenthesis may need be moved to the end. > > > > > > ======= > > * Return a pointer to the entry and the level of the mapping (or NULL if > > * the entry is none and level of the entry). > > ======= > > Emm.. I like my variant more. We return level regardless if the entry none > or not. I don't see a reason to repeat it twice. * Lookup the page table entry for a virtual address in a specific pgd. * Return a pointer to the entry and the level of the mapping (or NULL if * the entry is none) and level of the entry. Hmm, I am confused. Why do we need to stress the level of the mapping and level of the entry? Wondering what is the difference. 
I must miss something. _______________________________________________ kexec mailing list kexec@lists.infradead.org http://lists.infradead.org/mailman/listinfo/kexec ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none 2024-02-22 11:01 ` Baoquan He @ 2024-02-22 14:04 ` Kirill A. Shutemov 2024-02-22 15:37 ` Baoquan He 0 siblings, 1 reply; 39+ messages in thread From: Kirill A. Shutemov @ 2024-02-22 14:04 UTC (permalink / raw) To: Baoquan He Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, kexec, linux-coco, linux-kernel On Thu, Feb 22, 2024 at 07:01:41PM +0800, Baoquan He wrote: > On 02/21/24 at 04:15pm, Kirill A. Shutemov wrote: > > On Wed, Feb 21, 2024 at 10:37:29AM +0800, Baoquan He wrote: > > > > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c > > > > index 3612e3167147..425ff6e192e6 100644 > > > > --- a/arch/x86/mm/pat/set_memory.c > > > > +++ b/arch/x86/mm/pat/set_memory.c > > > > @@ -657,7 +657,8 @@ static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long star > > > > > > > > /* > > > > * Lookup the page table entry for a virtual address in a specific pgd. > > > > - * Return a pointer to the entry and the level of the mapping. > > > > + * Return a pointer to the entry and the level of the mapping (or NULL if > > > > + * the entry is none) and level of the entry. > > > ^ this right parenthesis may need be moved to the end. > > > > > > > > > ======= > > > * Return a pointer to the entry and the level of the mapping (or NULL if > > > * the entry is none and level of the entry). > > > ======= > > > > Emm.. I like my variant more. We return level regardless if the entry none > > or not. I don't see a reason to repeat it twice. > > > * Lookup the page table entry for a virtual address in a specific pgd. > * Return a pointer to the entry and the level of the mapping (or NULL if > * the entry is none) and level of the entry. > > Hmm, I am confused. 
Why do we need to stress the level of the mapping > and level of the entry? Wondering what is the difference. I must miss > something. My bad. This is the way I meant to write: * Lookup the page table entry for a virtual address in a specific pgd. * Return a pointer to the entry (or NULL if the entry does not exist) and * the level of the entry. -- Kiryl Shutsemau / Kirill A. Shutemov
* Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none 2024-02-22 14:04 ` Kirill A. Shutemov @ 2024-02-22 15:37 ` Baoquan He 0 siblings, 0 replies; 39+ messages in thread From: Baoquan He @ 2024-02-22 15:37 UTC (permalink / raw) To: Kirill A. Shutemov Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, kexec, linux-coco, linux-kernel On 02/22/24 at 04:04pm, Kirill A. Shutemov wrote: > On Thu, Feb 22, 2024 at 07:01:41PM +0800, Baoquan He wrote: > > On 02/21/24 at 04:15pm, Kirill A. Shutemov wrote: > > > On Wed, Feb 21, 2024 at 10:37:29AM +0800, Baoquan He wrote: > > > > > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c > > > > > index 3612e3167147..425ff6e192e6 100644 > > > > > --- a/arch/x86/mm/pat/set_memory.c > > > > > +++ b/arch/x86/mm/pat/set_memory.c > > > > > @@ -657,7 +657,8 @@ static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long star > > > > > > > > > > /* > > > > > * Lookup the page table entry for a virtual address in a specific pgd. > > > > > - * Return a pointer to the entry and the level of the mapping. > > > > > + * Return a pointer to the entry and the level of the mapping (or NULL if > > > > > + * the entry is none) and level of the entry. > > > > ^ this right parenthesis may need be moved to the end. > > > > > > > > > > > > ======= > > > > * Return a pointer to the entry and the level of the mapping (or NULL if > > > > * the entry is none and level of the entry). > > > > ======= > > > > > > Emm.. I like my variant more. We return level regardless if the entry none > > > or not. I don't see a reason to repeat it twice. > > > > > > * Lookup the page table entry for a virtual address in a specific pgd. 
> > * Return a pointer to the entry and the level of the mapping (or NULL if
> > * the entry is none) and level of the entry.
> >
> > Hmm, I am confused. Why do we need to stress the level of the mapping
> > and level of the entry? Wondering what is the difference. I must miss
> > something.
>
> My bad. This is way I meant to write:
>
> * Lookup the page table entry for a virtual address in a specific pgd.
> * Return a pointer to the entry (or NULL if the entry does not exist) and
> * the level of the entry.

ACK. Thanks.
* Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none
       [not found] ` <20240212104448.2589568-8-kirill.shutemov@linux.intel.com>
  2024-02-19  5:12   ` [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none Baoquan He
@ 2024-02-23 18:45   ` Dave Hansen
  2024-02-23 18:58     ` Dave Hansen
  1 sibling, 1 reply; 39+ messages in thread
From: Dave Hansen @ 2024-02-23 18:45 UTC (permalink / raw)
To: Kirill A. Shutemov, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86
Cc: Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter,
	Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima,
	Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson,
	Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel

On 2/12/24 02:44, Kirill A. Shutemov wrote:
> lookup_address() only returns correct page table level for the entry if
> the entry is not none.

Currently, lookup_address() returns two things:

 1. A "pte_t" (which might be a p[g4um]d_t)
 2. The 'level' of the page tables where the "pte_t" was found
    (returned via a pointer)

If no pte_t is found, 'level' is essentially garbage.

> Make the helper to always return correct 'level'. It allows to implement
> iterator over kernel page tables using lookup_address().

One nit with this description: What's "correct" isn't immediately
obvious to me. It wasn't exactly incorrect before. I think it would be
better to say:

	Always fill out the level. For NULL "pte_t"s, fill in the level
	where the p*d_none() entry was found mirroring the "found"
	behavior.

	Always filling out the level allows using lookup_address() to
	iterate over kernel page tables.

> Add one more entry into enum pg_level to indicate size of VA covered by
> one PGD entry in 5-level paging mode.

Needs some 'the's:

	Add one more entry into enum pg_level to indicate the size of
	the VA covered by one PGD entry in 5-level paging mode.

With that fixed:

Reviewed-by: Dave Hansen <dave.hansen@intel.com>
* Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none
  2024-02-23 18:45   ` Dave Hansen
@ 2024-02-23 18:58     ` Dave Hansen
  0 siblings, 0 replies; 39+ messages in thread
From: Dave Hansen @ 2024-02-23 18:58 UTC (permalink / raw)
To: Kirill A. Shutemov, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86
Cc: Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter,
	Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima,
	Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson,
	Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel

On 2/23/24 10:45, Dave Hansen wrote:
> Always filling out the level allows using lookup_address() to
> iterate over kernel page tables.

This doesn't parse very well. How about this instead:

	Always filling out the level allows using lookup_address() to
	precisely skip over holes when walking kernel page tables.

I think that more accurately captures what you're doing with it in the
next patch.
[parent not found: <20240212104448.2589568-9-kirill.shutemov@linux.intel.com>]
* Re: [PATCHv7 08/16] x86/tdx: Account shared memory [not found] ` <20240212104448.2589568-9-kirill.shutemov@linux.intel.com> @ 2024-02-23 19:08 ` Dave Hansen 2024-02-25 15:54 ` Kirill A. Shutemov 0 siblings, 1 reply; 39+ messages in thread From: Dave Hansen @ 2024-02-23 19:08 UTC (permalink / raw) To: Kirill A. Shutemov, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86 Cc: Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel On 2/12/24 02:44, Kirill A. Shutemov wrote: > The kernel will convert all shared memory back to private during kexec. > The direct mapping page tables will provide information on which memory > is shared. > > It is extremely important to convert all shared memory. If a page is > missed, it will cause the second kernel to crash when it accesses it. > > Keep track of the number of shared pages. This will allow for > cross-checking against the shared information in the direct mapping and > reporting if the shared bit is lost. > > Include a debugfs interface that allows for the check to be performed at > any point. When I read this, I thought you were going to do some automatic checking. Could you make it more clear here that it's 100% up to the user to figure out if the numbers in debugfs match and whether there's a problem? This would also be a great place to mention that the whole thing is racy. > +static atomic_long_t nr_shared; > + > +static inline bool pte_decrypted(pte_t pte) > +{ > + return cc_mkdec(pte_val(pte)) == pte_val(pte); > +} Name this pte_is_decrypted(), please. 
> /* Called from __tdx_hypercall() for unrecoverable failure */
> noinstr void __noreturn __tdx_hypercall_failed(void)
> {
> @@ -821,6 +829,11 @@ static int tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
>  	if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc))
>  		return -EIO;
>  
> +	if (enc)
> +		atomic_long_sub(numpages, &nr_shared);
> +	else
> +		atomic_long_add(numpages, &nr_shared);
> +
>  	return 0;
>  }
>  
> @@ -896,3 +909,59 @@ void __init tdx_early_init(void)
>  
>  	pr_info("Guest detected\n");
>  }
> +
> +#ifdef CONFIG_DEBUG_FS
> +static int tdx_shared_memory_show(struct seq_file *m, void *p)
> +{
> +	unsigned long addr, end;
> +	unsigned long found = 0;
> +
> +	addr = PAGE_OFFSET;
> +	end = PAGE_OFFSET + get_max_mapped();
> +
> +	while (addr < end) {
> +		unsigned long size;
> +		unsigned int level;
> +		pte_t *pte;
> +
> +		pte = lookup_address(addr, &level);
> +		size = page_level_size(level);
> +
> +		if (pte && pte_decrypted(*pte))
> +			found += size / PAGE_SIZE;
> +
> +		addr += size;
> +
> +		cond_resched();
> +	}

This is totally racy, right? Nothing prevents the PTE from
flip-flopping all over the place.

> +	seq_printf(m, "Number of shared pages in kernel page tables: %16lu\n",
> +		   found);
> +	seq_printf(m, "Number of pages accounted as shared: %16ld\n",
> +		   atomic_long_read(&nr_shared));
> +	return 0;
> +}

Ditto with 'nr_shared'. There's nothing to say that the page table walk
has anything to do with 'nr_shared' by the time we get down here.

That's not _fatal_ for a debug interface, but the pitfalls need to at
least be discussed. Better yet would be to make sure this and the cpa
code don't stomp on each other.
* Re: [PATCHv7 08/16] x86/tdx: Account shared memory 2024-02-23 19:08 ` [PATCHv7 08/16] x86/tdx: Account shared memory Dave Hansen @ 2024-02-25 15:54 ` Kirill A. Shutemov 2024-02-25 17:34 ` Dave Hansen 0 siblings, 1 reply; 39+ messages in thread From: Kirill A. Shutemov @ 2024-02-25 15:54 UTC (permalink / raw) To: Dave Hansen Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel On Fri, Feb 23, 2024 at 11:08:18AM -0800, Dave Hansen wrote: > On 2/12/24 02:44, Kirill A. Shutemov wrote: > > The kernel will convert all shared memory back to private during kexec. > > The direct mapping page tables will provide information on which memory > > is shared. > > > > It is extremely important to convert all shared memory. If a page is > > missed, it will cause the second kernel to crash when it accesses it. > > > > Keep track of the number of shared pages. This will allow for > > cross-checking against the shared information in the direct mapping and > > reporting if the shared bit is lost. > > > > Include a debugfs interface that allows for the check to be performed at > > any point. > > When I read this, I thought you were going to do some automatic > checking. Could you make it more clear here that it's 100% up to the > user to figure out if the numbers in debugfs match and whether there's a > problem? This would also be a great place to mention that the whole > thing is racy. What about this: Include a debugfs interface to dump the number of shared pages in the direct mapping and the expected number. There is no serialization against memory conversion. The numbers might not match if access to the debugfs interface races with the conversion. 
> > +static atomic_long_t nr_shared; > > + > > +static inline bool pte_decrypted(pte_t pte) > > +{ > > + return cc_mkdec(pte_val(pte)) == pte_val(pte); > > +} > > Name this pte_is_decrypted(), please. But why? pte_decrypted() is consistent with other pte helpers pte_none(), pte_present, pte_dirty(), ... > > /* Called from __tdx_hypercall() for unrecoverable failure */ > > noinstr void __noreturn __tdx_hypercall_failed(void) > > { > > @@ -821,6 +829,11 @@ static int tdx_enc_status_change_finish(unsigned long vaddr, int numpages, > > if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc)) > > return -EIO; > > > > + if (enc) > > + atomic_long_sub(numpages, &nr_shared); > > + else > > + atomic_long_add(numpages, &nr_shared); > > + > > return 0; > > } > > > > @@ -896,3 +909,59 @@ void __init tdx_early_init(void) > > > > pr_info("Guest detected\n"); > > } > > + > > +#ifdef CONFIG_DEBUG_FS > > +static int tdx_shared_memory_show(struct seq_file *m, void *p) > > +{ > > + unsigned long addr, end; > > + unsigned long found = 0; > > + > > + addr = PAGE_OFFSET; > > + end = PAGE_OFFSET + get_max_mapped(); > > + > > + while (addr < end) { > > + unsigned long size; > > + unsigned int level; > > + pte_t *pte; > > + > > + pte = lookup_address(addr, &level); > > + size = page_level_size(level); > > + > > + if (pte && pte_decrypted(*pte)) > > + found += size / PAGE_SIZE; > > + > > + addr += size; > > + > > + cond_resched(); > > + } > > This is totally racy, right? Nothing prevents the PTE from > flip-flopping all over the place. Yes. > > + seq_printf(m, "Number of shared pages in kernel page tables: %16lu\n", > > + found); > > + seq_printf(m, "Number of pages accounted as shared: %16ld\n", > > + atomic_long_read(&nr_shared)); > > + return 0; > > +} > > Ditto with 'nr_shared'. There's nothing to say that the page table walk > has anything to do with 'nr_shared' by the time we get down here. 
>
> That's not _fatal_ for a debug interface, but the pitfalls need to at
> least be discussed. Better yet would be to make sure this and the cpa
> code don't stomp on each other.

Serializing is cumbersome here. I can also just drop the interface.

-- 
Kiryl Shutsemau / Kirill A. Shutemov
* Re: [PATCHv7 08/16] x86/tdx: Account shared memory
  2024-02-25 15:54   ` Kirill A. Shutemov
@ 2024-02-25 17:34     ` Dave Hansen
  0 siblings, 0 replies; 39+ messages in thread
From: Dave Hansen @ 2024-02-25 17:34 UTC (permalink / raw)
To: Kirill A. Shutemov
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter,
	Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima,
	Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson,
	Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel

On 2/25/24 07:54, Kirill A. Shutemov wrote:
> Serializing is cumbersome here. I can also just drop the interface.

Just drop it for now. We can come back after the fact and debate how to
do the debugging.
[parent not found: <20240212104448.2589568-11-kirill.shutemov@linux.intel.com>]
* Re: [PATCHv7 10/16] x86/tdx: Convert shared memory back to private on kexec
       [not found] ` <20240212104448.2589568-11-kirill.shutemov@linux.intel.com>
@ 2024-02-23 19:39   ` Dave Hansen
  2024-02-25 14:58     ` Kirill A. Shutemov
  0 siblings, 1 reply; 39+ messages in thread
From: Dave Hansen @ 2024-02-23 19:39 UTC (permalink / raw)
To: Kirill A. Shutemov, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86
Cc: Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter,
	Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima,
	Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson,
	Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel

On 2/12/24 02:44, Kirill A. Shutemov wrote:
> +static void tdx_kexec_stop_conversion(bool crash)
> +{
> +	/* Stop new private<->shared conversions */
> +	conversion_allowed = false;
> +
> +	/*
> +	 * Make sure conversion_allowed is cleared before checking
> +	 * conversions_in_progress.
> +	 */
> +	barrier();
> +
> +	/*
> +	 * Crash kernel reaches here with interrupts disabled: can't wait for
> +	 * conversions to finish.
> +	 *
> +	 * If race happened, just report and proceed.
> +	 */
> +	if (!crash) {
> +		unsigned long timeout;
> +
> +		/*
> +		 * Wait for in-flight conversions to complete.
> +		 *
> +		 * Do not wait more than 30 seconds.
> +		 */
> +		timeout = 30 * USEC_PER_SEC;
> +		while (atomic_read(&conversions_in_progress) && timeout--)
> +			udelay(1);
> +	}
> +
> +	if (atomic_read(&conversions_in_progress))
> +		pr_warn("Failed to finish shared<->private conversions\n");
> +}

I'd really prefer we find a way to do this with actual locks, especially
'conversion_allowed'.

This is _awfully_ close to being able to be handled by a rwsem where the
readers are the converters and tdx_kexec_stop_conversion() takes a write.
* Re: [PATCHv7 10/16] x86/tdx: Convert shared memory back to private on kexec 2024-02-23 19:39 ` [PATCHv7 10/16] x86/tdx: Convert shared memory back to private on kexec Dave Hansen @ 2024-02-25 14:58 ` Kirill A. Shutemov 2024-02-26 13:10 ` Kirill A. Shutemov 2024-02-26 13:58 ` Dave Hansen 0 siblings, 2 replies; 39+ messages in thread From: Kirill A. Shutemov @ 2024-02-25 14:58 UTC (permalink / raw) To: Dave Hansen Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel On Fri, Feb 23, 2024 at 11:39:07AM -0800, Dave Hansen wrote: > On 2/12/24 02:44, Kirill A. Shutemov wrote: > > +static void tdx_kexec_stop_conversion(bool crash) > > +{ > > + /* Stop new private<->shared conversions */ > > + conversion_allowed = false; > > + > > + /* > > + * Make sure conversion_allowed is cleared before checking > > + * conversions_in_progress. > > + */ > > + barrier(); > > + > > + /* > > + * Crash kernel reaches here with interrupts disabled: can't wait for > > + * conversions to finish. > > + * > > + * If race happened, just report and proceed. > > + */ > > + if (!crash) { > > + unsigned long timeout; > > + > > + /* > > + * Wait for in-flight conversions to complete. > > + * > > + * Do not wait more than 30 seconds. > > + */ > > + timeout = 30 * USEC_PER_SEC; > > + while (atomic_read(&conversions_in_progress) && timeout--) > > + udelay(1); > > + } > > + > > + if (atomic_read(&conversions_in_progress)) > > + pr_warn("Failed to finish shared<->private conversions\n"); > > +} > > I'd really prefer we find a way to do this with actual locks, especially > 'conversion_allowed'. > > This is _awfully_ close to being able to be handled by a rwsem where the > readers are the converters and tdx_kexec_stop_conversion() takes a write. 
Okay, here's what I come up with. It needs more testing.

Any comments?

diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index fd212c9bad89..5eb0dac33f37 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -6,8 +6,10 @@
 #include <linux/cpufeature.h>
 #include <linux/debugfs.h>
+#include <linux/delay.h>
 #include <linux/export.h>
 #include <linux/io.h>
+#include <linux/kexec.h>
 #include <asm/coco.h>
 #include <asm/tdx.h>
 #include <asm/vmx.h>
@@ -15,6 +17,7 @@
 #include <asm/insn.h>
 #include <asm/insn-eval.h>
 #include <asm/pgtable.h>
+#include <asm/set_memory.h>
 
 /* MMIO direction */
 #define EPT_READ	0
@@ -837,6 +840,65 @@ static int tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
 	return 0;
 }
 
+static void tdx_kexec_stop_conversion(bool crash)
+{
+	/* Stop new private<->shared conversions */
+	if (!stop_memory_enc_conversion(!crash))
+		pr_warn("Failed to finish shared<->private conversions\n");
+}
+
+static void tdx_kexec_unshare_mem(void)
+{
+	unsigned long addr, end;
+	long found = 0, shared;
+
+	/*
+	 * Walk direct mapping and convert all shared memory back to private,
+	 */
+
+	addr = PAGE_OFFSET;
+	end = PAGE_OFFSET + get_max_mapped();
+
+	while (addr < end) {
+		unsigned long size;
+		unsigned int level;
+		pte_t *pte;
+
+		pte = lookup_address(addr, &level);
+		size = page_level_size(level);
+
+		if (pte && pte_decrypted(*pte)) {
+			int pages = size / PAGE_SIZE;
+
+			/*
+			 * Touching memory with shared bit set triggers implicit
+			 * conversion to shared.
+			 *
+			 * Make sure nobody touches the shared range from
+			 * now on.
+			 */
+			set_pte(pte, __pte(0));
+
+			if (!tdx_enc_status_changed(addr, pages, true)) {
+				pr_err("Failed to unshare range %#lx-%#lx\n",
+				       addr, addr + size);
+			}
+
+			found += pages;
+		}
+
+		addr += size;
+	}
+
+	__flush_tlb_all();
+
+	shared = atomic_long_read(&nr_shared);
+	if (shared != found) {
+		pr_err("shared page accounting is off\n");
+		pr_err("nr_shared = %ld, nr_found = %ld\n", shared, found);
+	}
+}
+
 void __init tdx_early_init(void)
 {
 	struct tdx_module_args args = {
@@ -896,6 +958,9 @@ void __init tdx_early_init(void)
 	x86_platform.guest.enc_cache_flush_required = tdx_cache_flush_required;
 	x86_platform.guest.enc_tlb_flush_required = tdx_tlb_flush_required;
 
+	x86_platform.guest.enc_kexec_stop_conversion = tdx_kexec_stop_conversion;
+	x86_platform.guest.enc_kexec_unshare_mem = tdx_kexec_unshare_mem;
+
 	/*
 	 * TDX intercepts the RDMSR to read the X2APIC ID in the parallel
 	 * bringup low level code. That raises #VE which cannot be handled
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index a5e89641bd2d..9d4a8e548820 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -48,8 +48,11 @@ int set_memory_wc(unsigned long addr, int numpages);
 int set_memory_wb(unsigned long addr, int numpages);
 int set_memory_np(unsigned long addr, int numpages);
 int set_memory_4k(unsigned long addr, int numpages);
+
+bool stop_memory_enc_conversion(bool wait);
 int set_memory_encrypted(unsigned long addr, int numpages);
 int set_memory_decrypted(unsigned long addr, int numpages);
+
 int set_memory_np_noalias(unsigned long addr, int numpages);
 int set_memory_nonglobal(unsigned long addr, int numpages);
 int set_memory_global(unsigned long addr, int numpages);
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 0d2267ad4e0e..e074b2aca970 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2176,12 +2176,32 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
 	return ret;
 }
 
+static DECLARE_RWSEM(mem_enc_lock);
+
+bool stop_memory_enc_conversion(bool wait)
+{
+	if (!wait)
+		return down_write_trylock(&mem_enc_lock);
+
+	down_write(&mem_enc_lock);
+
+	return true;
+}
+
 static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
 {
-	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
-		return __set_memory_enc_pgtable(addr, numpages, enc);
+	int ret = 0;
 
-	return 0;
+	if (cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
+		if (!down_read_trylock(&mem_enc_lock))
+			return -EBUSY;
+
+		ret = __set_memory_enc_pgtable(addr, numpages, enc);
+
+		up_read(&mem_enc_lock);
+	}
+
+	return ret;
 }
 
 int set_memory_encrypted(unsigned long addr, int numpages)

-- 
Kiryl Shutsemau / Kirill A. Shutemov
* Re: [PATCHv7 10/16] x86/tdx: Convert shared memory back to private on kexec 2024-02-25 14:58 ` Kirill A. Shutemov @ 2024-02-26 13:10 ` Kirill A. Shutemov 2024-02-26 13:58 ` Dave Hansen 1 sibling, 0 replies; 39+ messages in thread From: Kirill A. Shutemov @ 2024-02-26 13:10 UTC (permalink / raw) To: Dave Hansen Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter, Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima, Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson, Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel On Sun, Feb 25, 2024 at 04:58:46PM +0200, Kirill A. Shutemov wrote: > On Fri, Feb 23, 2024 at 11:39:07AM -0800, Dave Hansen wrote: > > On 2/12/24 02:44, Kirill A. Shutemov wrote: > > > +static void tdx_kexec_stop_conversion(bool crash) > > > +{ > > > + /* Stop new private<->shared conversions */ > > > + conversion_allowed = false; > > > + > > > + /* > > > + * Make sure conversion_allowed is cleared before checking > > > + * conversions_in_progress. > > > + */ > > > + barrier(); > > > + > > > + /* > > > + * Crash kernel reaches here with interrupts disabled: can't wait for > > > + * conversions to finish. > > > + * > > > + * If race happened, just report and proceed. > > > + */ > > > + if (!crash) { > > > + unsigned long timeout; > > > + > > > + /* > > > + * Wait for in-flight conversions to complete. > > > + * > > > + * Do not wait more than 30 seconds. > > > + */ > > > + timeout = 30 * USEC_PER_SEC; > > > + while (atomic_read(&conversions_in_progress) && timeout--) > > > + udelay(1); > > > + } > > > + > > > + if (atomic_read(&conversions_in_progress)) > > > + pr_warn("Failed to finish shared<->private conversions\n"); > > > +} > > > > I'd really prefer we find a way to do this with actual locks, especially > > 'conversion_allowed'. 
> >
> > This is _awfully_ close to being able to be handled by a rwsem where the
> > readers are the converters and tdx_kexec_stop_conversion() takes a write.
>
> Okay, here's what I come up with. It needs more testing.

I don't see a problem during testing. #include <linux/delay.h> has to be
dropped, but otherwise the patch is fine to me.

Any feedback?

-- 
Kiryl Shutsemau / Kirill A. Shutemov
* Re: [PATCHv7 10/16] x86/tdx: Convert shared memory back to private on kexec
  2024-02-25 14:58     ` Kirill A. Shutemov
  2024-02-26 13:10       ` Kirill A. Shutemov
@ 2024-02-26 13:58       ` Dave Hansen
  1 sibling, 0 replies; 39+ messages in thread
From: Dave Hansen @ 2024-02-26 13:58 UTC (permalink / raw)
To: Kirill A. Shutemov
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter,
	Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima,
	Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson,
	Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel

On 2/25/24 06:58, Kirill A. Shutemov wrote:
> On Fri, Feb 23, 2024 at 11:39:07AM -0800, Dave Hansen wrote:
...
>> I'd really prefer we find a way to do this with actual locks, especially
>> 'conversion_allowed'.
>>
>> This is _awfully_ close to being able to be handled by a rwsem where the
>> readers are the converters and tdx_kexec_stop_conversion() takes a write.
>
> Okay, here's what I come up with. It needs more testing.
>
> Any comments?

Looks a heck of a lot more straightforward to me.

> +static void tdx_kexec_stop_conversion(bool crash)
> +{
> +	/* Stop new private<->shared conversions */
> +	if (!stop_memory_enc_conversion(!crash))
> +		pr_warn("Failed to finish shared<->private conversions\n");
> +}

FWIW, this is one of those places that could use a temporary variable to
help explain what is going on:

	bool wait_for_lock = !crash;

	/*
	 * ... explain why it doesn't or shouldn't take the lock here
	 */
	if (!stop_memory_enc_conversion(wait_for_lock))
		...

This makes it understandable without looking at the called function.

> +bool stop_memory_enc_conversion(bool wait)
> +{
> +	if (!wait)
> +		return down_write_trylock(&mem_enc_lock);
> +
> +	down_write(&mem_enc_lock);
> +
> +	return true;
> +}

This also needs a comment about why the lock isn't being released.
[parent not found: <20240212104448.2589568-12-kirill.shutemov@linux.intel.com>]
* Re: [PATCHv7 11/16] x86/mm: Make e820_end_ram_pfn() cover E820_TYPE_ACPI ranges
       [not found] ` <20240212104448.2589568-12-kirill.shutemov@linux.intel.com>
@ 2024-02-23 19:41   ` Dave Hansen
  0 siblings, 0 replies; 39+ messages in thread
From: Dave Hansen @ 2024-02-23 19:41 UTC (permalink / raw)
To: Kirill A. Shutemov, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86
Cc: Rafael J. Wysocki, Peter Zijlstra, Adrian Hunter,
	Kuppuswamy Sathyanarayanan, Elena Reshetova, Jun Nakajima,
	Rick Edgecombe, Tom Lendacky, Kalra, Ashish, Sean Christopherson,
	Huang, Kai, Baoquan He, kexec, linux-coco, linux-kernel

On 2/12/24 02:44, Kirill A. Shutemov wrote:
> Despite the name, E820_TYPE_ACPI covers not only ACPI data, but also EFI
> tables and might be required by kernel to function properly.

Lovely. You learn something new every day.

Reviewed-by: Dave Hansen <dave.hansen@intel.com>
end of thread, other threads:[~2024-02-26 13:58 UTC | newest]
Thread overview: 39+ messages
[not found] <20240212104448.2589568-1-kirill.shutemov@linux.intel.com>
[not found] ` <20240212104448.2589568-2-kirill.shutemov@linux.intel.com>
2024-02-19 4:45 ` [PATCHv7 01/16] x86/acpi: Extract ACPI MADT wakeup code into a separate file Baoquan He
2024-02-19 10:08 ` Kirill A. Shutemov
2024-02-19 11:36 ` Baoquan He
2024-02-23 10:32 ` Thomas Gleixner
2024-02-20 1:18 ` [PATCH 0/2] x86/snp: Add kexec support Ashish Kalra
2024-02-20 1:18 ` [PATCH 1/2] x86/mm: Do not zap PMD entry mapping unaccepted memory table during kdump Ashish Kalra
2024-02-20 12:42 ` Kirill A. Shutemov
2024-02-20 19:09 ` Kalra, Ashish
2024-02-20 1:18 ` [PATCH 2/2] x86/snp: Convert shared memory back to private on kexec Ashish Kalra
2024-02-21 20:35 ` Tom Lendacky
2024-02-22 10:50 ` Kirill A. Shutemov
2024-02-22 13:58 ` Tom Lendacky
[not found] ` <20240212104448.2589568-6-kirill.shutemov@linux.intel.com>
2024-02-22 22:04 ` [PATCHv7 05/16] x86/kexec: Keep CR4.MCE set during kexec for TDX guest Thomas Gleixner
[not found] ` <20240212104448.2589568-15-kirill.shutemov@linux.intel.com>
2024-02-23 10:26 ` [PATCHv7 14/16] x86/smp: Add smp_ops.stop_this_cpu() callback Thomas Gleixner
[not found] ` <20240212104448.2589568-13-kirill.shutemov@linux.intel.com>
2024-02-23 10:27 ` [PATCHv7 12/16] x86/acpi: Rename fields in acpi_madt_multiproc_wakeup structure Thomas Gleixner
[not found] ` <20240212104448.2589568-14-kirill.shutemov@linux.intel.com>
2024-02-23 10:28 ` [PATCHv7 13/16] x86/acpi: Do not attempt to bring up secondary CPUs in kexec case Thomas Gleixner
[not found] ` <20240212104448.2589568-17-kirill.shutemov@linux.intel.com>
2024-02-23 10:31 ` [PATCHv7 16/16] x86/acpi: Add support for CPU offlining for ACPI MADT wakeup method Thomas Gleixner
[not found] ` <20240212104448.2589568-3-kirill.shutemov@linux.intel.com>
2024-02-19 4:46 ` [PATCHv7 02/16] x86/apic: Mark acpi_mp_wake_* variables as __ro_after_init Baoquan He
2024-02-23 10:31 ` Thomas Gleixner
[not found] ` <20240212104448.2589568-7-kirill.shutemov@linux.intel.com>
2024-02-23 18:26 ` [PATCHv7 06/16] x86/mm: Make x86_platform.guest.enc_status_change_*() return errno Dave Hansen
[not found] ` <20240212104448.2589568-8-kirill.shutemov@linux.intel.com>
2024-02-19 5:12 ` [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none Baoquan He
2024-02-19 13:52 ` Kirill A. Shutemov
2024-02-20 10:25 ` Baoquan He
2024-02-20 12:36 ` Kirill A. Shutemov
2024-02-21 2:37 ` Baoquan He
2024-02-21 14:15 ` Kirill A. Shutemov
2024-02-22 11:01 ` Baoquan He
2024-02-22 14:04 ` Kirill A. Shutemov
2024-02-22 15:37 ` Baoquan He
2024-02-23 18:45 ` Dave Hansen
2024-02-23 18:58 ` Dave Hansen
[not found] ` <20240212104448.2589568-9-kirill.shutemov@linux.intel.com>
2024-02-23 19:08 ` [PATCHv7 08/16] x86/tdx: Account shared memory Dave Hansen
2024-02-25 15:54 ` Kirill A. Shutemov
2024-02-25 17:34 ` Dave Hansen
[not found] ` <20240212104448.2589568-11-kirill.shutemov@linux.intel.com>
2024-02-23 19:39 ` [PATCHv7 10/16] x86/tdx: Convert shared memory back to private on kexec Dave Hansen
2024-02-25 14:58 ` Kirill A. Shutemov
2024-02-26 13:10 ` Kirill A. Shutemov
2024-02-26 13:58 ` Dave Hansen
[not found] ` <20240212104448.2589568-12-kirill.shutemov@linux.intel.com>
2024-02-23 19:41 ` [PATCHv7 11/16] x86/mm: Make e820_end_ram_pfn() cover E820_TYPE_ACPI ranges Dave Hansen