From: Nikolay Borisov <nik.borisov@suse.com>
To: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
Dave Hansen <dave.hansen@linux.intel.com>,
x86@kernel.org
Cc: "Rafael J. Wysocki" <rafael@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Adrian Hunter <adrian.hunter@intel.com>,
Kuppuswamy Sathyanarayanan
<sathyanarayanan.kuppuswamy@linux.intel.com>,
Elena Reshetova <elena.reshetova@intel.com>,
Jun Nakajima <jun.nakajima@intel.com>,
Rick Edgecombe <rick.p.edgecombe@intel.com>,
Tom Lendacky <thomas.lendacky@amd.com>,
"Kalra, Ashish" <ashish.kalra@amd.com>,
Sean Christopherson <seanjc@google.com>,
"Huang, Kai" <kai.huang@intel.com>, Baoquan He <bhe@redhat.com>,
kexec@lists.infradead.org, linux-coco@lists.linux.dev,
linux-kernel@vger.kernel.org
Subject: Re: [PATCHv5 10/16] x86/tdx: Convert shared memory back to private on kexec
Date: Mon, 15 Jan 2024 12:53:42 +0200 [thread overview]
Message-ID: <89e8722b-661b-4319-8018-06705b366c62@suse.com> (raw)
In-Reply-To: <20231222235209.32143-11-kirill.shutemov@linux.intel.com>
On 23.12.23 at 1:52, Kirill A. Shutemov wrote:
> TDX guests allocate shared buffers to perform I/O. It is done by
> allocating pages normally from the buddy allocator and converting them
> to shared with set_memory_decrypted().
>
> The second kernel has no idea what memory is converted this way. It only
> sees E820_TYPE_RAM.
>
> Accessing shared memory via private mapping is fatal. It leads to
> unrecoverable TD exit.
>
> On kexec walk direct mapping and convert all shared memory back to
> private. It makes all RAM private again and second kernel may use it
> normally.
>
> The conversion occurs in two steps: stopping new conversions and
> unsharing all memory. In the case of normal kexec, the stopping of
> conversions takes place while scheduling is still functioning. This
> allows for waiting until any ongoing conversions are finished. The
> second step is carried out when all CPUs except one are inactive and
> interrupts are disabled. This prevents any conflicts with code that may
> access shared memory.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> ---
> arch/x86/coco/tdx/tdx.c | 119 +++++++++++++++++++++++++++++++-
> arch/x86/include/asm/x86_init.h | 2 +
> arch/x86/kernel/crash.c | 6 ++
> arch/x86/kernel/reboot.c | 13 ++++
> 4 files changed, 138 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
> index 8a49484a2917..5c64db168edd 100644
> --- a/arch/x86/coco/tdx/tdx.c
> +++ b/arch/x86/coco/tdx/tdx.c
> @@ -6,8 +6,10 @@
>
> #include <linux/cpufeature.h>
> #include <linux/debugfs.h>
> +#include <linux/delay.h>
> #include <linux/export.h>
> #include <linux/io.h>
> +#include <linux/kexec.h>
> #include <asm/coco.h>
> #include <asm/tdx.h>
> #include <asm/vmx.h>
> @@ -15,6 +17,7 @@
> #include <asm/insn.h>
> #include <asm/insn-eval.h>
> #include <asm/pgtable.h>
> +#include <asm/set_memory.h>
>
> /* MMIO direction */
> #define EPT_READ 0
> @@ -41,6 +44,9 @@
>
> static atomic_long_t nr_shared;
>
> +static atomic_t conversions_in_progress;
> +static bool conversion_allowed = true;
Given the usage model of this variable, shouldn't it simply be accessed
via the READ_ONCE()/WRITE_ONCE() macros?
> +
> static inline bool pte_decrypted(pte_t pte)
> {
> return cc_mkdec(pte_val(pte)) == pte_val(pte);
> @@ -726,6 +732,14 @@ static bool tdx_tlb_flush_required(bool private)
>
> static bool tdx_cache_flush_required(void)
> {
> + /*
> + * Avoid issuing CLFLUSH on set_memory_decrypted() if conversions
> + * stopped. Otherwise it can race with unshare_all_memory() and trigger
> + * implicit conversion to shared.
> + */
> + if (!conversion_allowed)
> + return false;
> +
> /*
> * AMD SME/SEV can avoid cache flushing if HW enforces cache coherence.
> * TDX doesn't have such capability.
> @@ -809,12 +823,25 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
> static int tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
> bool enc)
> {
> + atomic_inc(&conversions_in_progress);
> +
> + /*
> + * Check after bumping conversions_in_progress to serialize
> + * against tdx_shutdown().
> + */
> + if (!conversion_allowed) {
> + atomic_dec(&conversions_in_progress);
> + return -EBUSY;
> + }
nit: Can the atomic_inc of conversions_in_progress be moved here,
after the check? That would eliminate the atomic_dec on the error path
when conversions aren't allowed and somewhat simplify the logic.
> +
> /*
> * Only handle shared->private conversion here.
> * See the comment in tdx_early_init().
> */
> - if (enc && !tdx_enc_status_changed(vaddr, numpages, enc))
> + if (enc && !tdx_enc_status_changed(vaddr, numpages, enc)) {
> + atomic_dec(&conversions_in_progress);
> return -EIO;
> + }
>
> return 0;
> }
> @@ -826,17 +853,102 @@ static int tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
> * Only handle private->shared conversion here.
> * See the comment in tdx_early_init().
> */
> - if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc))
> + if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc)) {
> + atomic_dec(&conversions_in_progress);
> return -EIO;
> + }
>
> if (enc)
> atomic_long_sub(numpages, &nr_shared);
> else
> atomic_long_add(numpages, &nr_shared);
>
> + atomic_dec(&conversions_in_progress);
> +
> return 0;
> }
>
> +static void tdx_kexec_stop_conversion(bool crash)
> +{
> + /* Stop new private<->shared conversions */
> + conversion_allowed = false;
What's the logic behind this compiler barrier?
> + barrier();
> +
> + /*
> + * Crash kernel reaches here with interrupts disabled: can't wait for
> + * conversions to finish.
> + *
> + * If race happened, just report and proceed.
> + */
> + if (!crash) {
> + unsigned long timeout;
> +
> + /*
> + * Wait for in-flight conversions to complete.
> + *
> + * Do not wait more than 30 seconds.
> + */
> + timeout = 30 * USEC_PER_SEC;
> + while (atomic_read(&conversions_in_progress) && timeout--)
> + udelay(1);
> + }
> +
> + if (atomic_read(&conversions_in_progress))
> + pr_warn("Failed to finish shared<->private conversions\n");
> +}
> +
<snip>
> diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
> index c9503fe2d13a..3196ff20a29e 100644
> --- a/arch/x86/include/asm/x86_init.h
> +++ b/arch/x86/include/asm/x86_init.h
> @@ -154,6 +154,8 @@ struct x86_guest {
> int (*enc_status_change_finish)(unsigned long vaddr, int npages, bool enc);
> bool (*enc_tlb_flush_required)(bool enc);
> bool (*enc_cache_flush_required)(void);
> + void (*enc_kexec_stop_conversion)(bool crash);
> + void (*enc_kexec_unshare_mem)(void);
These are only initialized in the TDX case, but they are called
whenever CC_ATTR_GUEST_MEM_ENCRYPT is true, which includes AMD SEV.
Wouldn't that be a NULL-pointer dereference and crash? Shouldn't you
also introduce noop handlers, initialized in the default x86_platform
struct in arch/x86/kernel/x86_init.c?
> };
>
> /**
> diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
> index c92d88680dbf..b99bd28ad22f 100644
> --- a/arch/x86/kernel/crash.c
> +++ b/arch/x86/kernel/crash.c
> @@ -40,6 +40,7 @@
> #include <asm/intel_pt.h>
> #include <asm/crash.h>
> #include <asm/cmdline.h>
> +#include <asm/tdx.h>
>
> /* Used while preparing memory map entries for second kernel */
> struct crash_memmap_data {
> @@ -107,6 +108,11 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
>
> crash_smp_send_stop();
>
> + if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
> + x86_platform.guest.enc_kexec_stop_conversion(true);
> + x86_platform.guest.enc_kexec_unshare_mem();
> + }
> +
> cpu_emergency_disable_virtualization();
>
> /*
> diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
> index 830425e6d38e..16dde83df49a 100644
> --- a/arch/x86/kernel/reboot.c
> +++ b/arch/x86/kernel/reboot.c
> @@ -12,6 +12,7 @@
> #include <linux/delay.h>
> #include <linux/objtool.h>
> #include <linux/pgtable.h>
> +#include <linux/kexec.h>
> #include <acpi/reboot.h>
> #include <asm/io.h>
> #include <asm/apic.h>
> @@ -31,6 +32,7 @@
> #include <asm/realmode.h>
> #include <asm/x86_init.h>
> #include <asm/efi.h>
> +#include <asm/tdx.h>
>
> /*
> * Power off function, if any
> @@ -716,6 +718,14 @@ static void native_machine_emergency_restart(void)
>
> void native_machine_shutdown(void)
> {
> + /*
> + * Call enc_kexec_stop_conversion() while all CPUs are still active and
> + * interrupts are enabled. This will allow all in-flight memory
> + * conversions to finish cleanly.
> + */
> + if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) && kexec_in_progress)
> + x86_platform.guest.enc_kexec_stop_conversion(false);
> +
> /* Stop the cpus and apics */
> #ifdef CONFIG_X86_IO_APIC
> /*
> @@ -752,6 +762,9 @@ void native_machine_shutdown(void)
> #ifdef CONFIG_X86_64
> x86_platform.iommu_shutdown();
> #endif
> +
> + if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) && kexec_in_progress)
> + x86_platform.guest.enc_kexec_unshare_mem();
> }
>
> static void __machine_emergency_restart(int emergency)