Kexec Archive on lore.kernel.org
* [PATCH v10 00/38] x86: Secure Memory Encryption (AMD)
@ 2017-07-17 21:09 Tom Lendacky
  2017-07-17 21:10 ` [PATCH v10 31/38] x86/mm, kexec: Allow kexec to be used with SME Tom Lendacky
  2017-07-18 12:03 ` [PATCH v10 00/38] x86: Secure Memory Encryption (AMD) Thomas Gleixner
  0 siblings, 2 replies; 5+ messages in thread
From: Tom Lendacky @ 2017-07-17 21:09 UTC (permalink / raw)
  To: x86, linux-kernel, linux-arch, linux-efi, linux-doc, linux-mm,
	kvm, kasan-dev
  Cc: Brijesh Singh, Toshimitsu Kani, Radim Krčmář,
	Matt Fleming, Alexander Potapenko, H. Peter Anvin, Larry Woodman,
	Jonathan Corbet, Joerg Roedel, Michael S. Tsirkin, Ingo Molnar,
	Andrey Ryabinin, Dave Young, Juergen Gross, Rik van Riel,
	Arnd Bergmann, Konrad Rzeszutek Wilk, Borislav Petkov,
	Andy Lutomirski, Thomas Gleixner, Dmitry Vyukov, Boris Ostrovsky,
	kexec, xen-devel, iommu, Paolo Bonzini

This patch series provides support for AMD's new Secure Memory Encryption (SME)
feature.

SME can be used to mark individual pages of memory as encrypted through the
page tables. A page of memory that is marked encrypted will be automatically
decrypted when read from DRAM and automatically encrypted when written to
DRAM. Details on SME can be found in the links below.

The SME feature is identified through a CPUID function and enabled through
the SYSCFG MSR. Once enabled, page table entries will determine how the
memory is accessed. If a page table entry has the memory encryption mask set,
then that memory will be accessed as encrypted memory. The memory encryption
mask (as well as other related information) is determined from settings
returned through the same CPUID function that identifies the presence of the
feature.

The approach this patch series takes is to encrypt everything possible,
starting early in boot when the kernel itself is encrypted. Using the page
table macros, the encryption mask can be incorporated into all page table
entries and page allocations. By updating the protection map, userspace
allocations are also marked encrypted. Certain data (EFI tables, the initrd,
etc.) must be accounted for as having been placed in memory before SME was
enabled and accessed accordingly.

This patch series is a precursor to another AMD processor feature called
Secure Encrypted Virtualization (SEV). SEV support will build upon the SME
support and will be submitted later. Details on SEV can be found in the
links below.

The following links provide additional detail:

AMD Memory Encryption whitepaper:
   http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf

AMD64 Architecture Programmer's Manual:
   http://support.amd.com/TechDocs/24593.pdf
   SME is section 7.10
   SEV is section 15.34

---

This patch series is based off of the master branch of tip:
  https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master

  Commit 5fcfb42b132c ("Merge branch 'linus'")

Source code is also available at https://github.com/codomania/tip/tree/sme-v10

Cc: <iommu@lists.linux-foundation.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: <kexec@lists.infradead.org>
Cc: <xen-devel@lists.xen.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>

Still to do:
- Kdump support, including using memremap() instead of ioremap_cache()

Changes since v9:
- Cleared SME feature capability for 32-bit builds
- Added a WARNing to the iounmap() path for ISA ranges to catch callers
  which did not use ioremap()

Changes since v8:
- Changed AMD IOMMU SME-related function name
- Updated the sme_encrypt_kernel() entry/exit code to address new warnings
  issued by objtool

Changes since v7:
- Fixed kbuild test robot failure related to pgprot_decrypted() macro
  usage for some non-x86 archs
- Moved calls to encrypt the kernel and retrieve the encryption mask
  from assembler (head_64.S) into C (head64.c)
- Removed use of phys_to_virt() in __ioremap_caller() when address is in
  the ISA range. Now regular ioremap() processing occurs.
- Two new, small patches:
  - Introduced a native_make_p4d() for use when CONFIG_PGTABLE_LEVELS is
    not greater than 4
  - Introduced the __nostackp function attribute (GCC) to turn off stack
    protection on a per-function basis
- General code cleanup based on feedback

Changes since v6:
- Fixed the asm include file issue that caused build errors on other archs
- Rebased the CR3 register changes on top of Andy Lutomirski's patch
- Added a patch to clear the SME cpu feature if running as a PV guest under
  Xen
- Added a patch to obtain the AMD microcode level earlier in the boot
  instead of directly reading the MSR
- Refactor patch #8 ("x86/mm: Add support to enable SME in early boot
  processing") because the 5-level paging support moved the code into the
  new C-function __startup_64()
- Removed need to decrypt trampoline area in-place (set memory attributes
  before copying the trampoline code)
- General code cleanup based on feedback

Changes since v5:
- Added support for 5-level paging
- Added IOMMU support
- Created a generic asm/mem_encrypt.h in order to remove a bunch of
  #ifndef/#define entries
- Removed changes to the __va() macro and defined a function to return
  the true physical address in cr3
- Removed sysfs support as it was determined not to be needed
- General code cleanup based on feedback
- General cleanup of patch subjects and descriptions

Changes since v4:
- Re-worked mapping of setup data to not use a fixed list. Rather, check
  dynamically whether the requested early_memremap()/memremap() call
  needs to be mapped decrypted.
- Moved SME cpu feature into scattered features
- Moved some declarations into header files
- Cleared the encryption mask from the __PHYSICAL_MASK so that users
  of macros such as pmd_pfn_mask() don't have to worry/know about the
  encryption mask
- Updated some return types and values related to EFI and e820 functions
  so that an error could be returned
- During cpu shutdown, removed cache disabling and added a check for kexec
  in progress to use wbinvd followed immediately by halt in order to avoid
  any memory corruption
- Update how persistent memory is identified
- Added a function to find command line arguments and their values
- Added sysfs support
- General code cleanup based on feedback
- General cleanup of patch subjects and descriptions


Changes since v3:
- Broke out some of the patches into smaller individual patches
- Updated Documentation
- Added a message to indicate why the IOMMU was disabled
- Updated CPU feature support for SME by taking into account whether
  BIOS has enabled SME
- Eliminated redundant functions
- Added some warning messages for DMA usage of bounce buffers when SME
  is active
- Added support for persistent memory
- Added support to determine when setup data is being mapped and be sure
  to map it un-encrypted
- Added CONFIG support to set the default action of whether to activate
  SME if it is supported/enabled
- Added support for (re)booting with kexec

Changes since v2:
- Updated Documentation
- Make the encryption mask available outside of arch/x86 through a
  standard include file
- Conversion of assembler routines to C where possible (not everything
  could be converted, e.g. the routine that does the actual encryption
  needs to be copied into a safe location and it is difficult to
  determine the actual length of the function in order to copy it)
- Fix SME feature use of scattered CPUID feature
- Creation of SME specific functions for things like encrypting
  the setup data, ramdisk, etc.
- New take on early_memremap / memremap encryption support
- Additional support for accessing video buffers (fbdev/gpu) as
  un-encrypted
- Disable IOMMU for now - need to investigate further in relation to
  how it needs to be programmed relative to accessing physical memory

Changes since v1:
- Added Documentation.
- Removed AMD vendor check for setting the PAT write protect mode
- Updated naming of trampoline flag for SME as well as moving of the
  SME check to before paging is enabled.
- Changed early_memremap to identify the data being mapped as either
  boot data or kernel data, the idea being that boot data will have
  been placed in memory as un-encrypted data and must be accessed
  as such.
- Updated debugfs support for the bootparams to access the data properly.
- Do not set the SYSCFG[MEME] bit, only check it.  Setting the
  MemEncryptionModeEn bit results in a reduction of the physical address size
  of the processor.  It is possible that BIOS could have configured resources
  into a range that would then no longer be addressable.  To prevent this,
  rely on BIOS to set the SYSCFG[MEME] bit and only then enable memory
  encryption support in the kernel.

Tom Lendacky (38):
  x86: Document AMD Secure Memory Encryption (SME)
  x86/mm/pat: Set write-protect cache mode for full PAT support
  x86, mpparse, x86/acpi, x86/PCI, x86/dmi, SFI: Use memremap for RAM
    mappings
  x86/CPU/AMD: Add the Secure Memory Encryption CPU feature
  x86/CPU/AMD: Handle SME reduction in physical address size
  x86/mm: Add Secure Memory Encryption (SME) support
  x86/mm: Remove phys_to_virt() usage in ioremap()
  x86/mm: Add support to enable SME in early boot processing
  x86/mm: Simplify p[g4um]d_page() macros
  x86/mm: Provide general kernel support for memory encryption
  x86/mm: Add SME support for read_cr3_pa()
  x86/mm: Extend early_memremap() support with additional attrs
  x86/mm: Add support for early encrypt/decrypt of memory
  x86/mm: Insure that boot memory areas are mapped properly
  x86/boot/e820: Add support to determine the E820 type of an address
  efi: Add an EFI table address match function
  efi: Update efi_mem_type() to return an error rather than 0
  x86/efi: Update EFI pagetable creation to work with SME
  x86/mm: Add support to access boot related data in the clear
  x86, mpparse: Use memremap to map the mpf and mpc data
  x86/mm: Add support to access persistent memory in the clear
  x86/mm: Add support for changing the memory encryption attribute
  x86/realmode: Decrypt trampoline area if memory encryption is active
  x86, swiotlb: Add memory encryption support
  swiotlb: Add warnings for use of bounce buffers with SME
  x86/CPU/AMD: Make the microcode level available earlier in the boot
  iommu/amd: Allow the AMD IOMMU to work with memory encryption
  x86, realmode: Check for memory encryption on the APs
  x86, drm, fbdev: Do not specify encrypted memory for video mappings
  kvm: x86: svm: Support Secure Memory Encryption within KVM
  x86/mm, kexec: Allow kexec to be used with SME
  xen/x86: Remove SME feature in PV guests
  x86/mm: Use proper encryption attributes with /dev/mem
  x86/mm: Create native_make_p4d() for PGTABLE_LEVELS <= 4
  x86/mm: Add support to encrypt the kernel in-place
  x86/boot: Add early cmdline parsing for options with arguments
  compiler-gcc.h: Introduce __nostackp function attribute
  x86/mm: Add support to make use of Secure Memory Encryption

 Documentation/admin-guide/kernel-parameters.txt |  11 +
 Documentation/x86/amd-memory-encryption.txt     |  68 +++
 arch/ia64/kernel/efi.c                          |   4 +-
 arch/x86/Kconfig                                |  29 ++
 arch/x86/boot/compressed/pagetable.c            |   7 +
 arch/x86/include/asm/cmdline.h                  |   2 +
 arch/x86/include/asm/cpufeatures.h              |   1 +
 arch/x86/include/asm/dma-mapping.h              |   5 +-
 arch/x86/include/asm/dmi.h                      |   8 +-
 arch/x86/include/asm/e820/api.h                 |   2 +
 arch/x86/include/asm/fixmap.h                   |  20 +
 arch/x86/include/asm/init.h                     |   1 +
 arch/x86/include/asm/io.h                       |   8 +
 arch/x86/include/asm/kexec.h                    |   8 +
 arch/x86/include/asm/kvm_host.h                 |   2 +-
 arch/x86/include/asm/mem_encrypt.h              |  80 ++++
 arch/x86/include/asm/msr-index.h                |   2 +
 arch/x86/include/asm/page_types.h               |   3 +-
 arch/x86/include/asm/pgtable.h                  |  28 +-
 arch/x86/include/asm/pgtable_types.h            |  57 ++-
 arch/x86/include/asm/processor-flags.h          |   5 +-
 arch/x86/include/asm/processor.h                |   8 +-
 arch/x86/include/asm/realmode.h                 |  12 +
 arch/x86/include/asm/set_memory.h               |   3 +
 arch/x86/include/asm/vga.h                      |  14 +-
 arch/x86/kernel/acpi/boot.c                     |   6 +-
 arch/x86/kernel/cpu/amd.c                       |  29 +-
 arch/x86/kernel/cpu/scattered.c                 |   1 +
 arch/x86/kernel/e820.c                          |  26 +-
 arch/x86/kernel/espfix_64.c                     |   2 +-
 arch/x86/kernel/head64.c                        |  93 +++-
 arch/x86/kernel/head_64.S                       |  40 +-
 arch/x86/kernel/kdebugfs.c                      |  34 +-
 arch/x86/kernel/ksysfs.c                        |  28 +-
 arch/x86/kernel/machine_kexec_64.c              |  22 +-
 arch/x86/kernel/mpparse.c                       | 108 +++--
 arch/x86/kernel/pci-dma.c                       |  11 +-
 arch/x86/kernel/pci-nommu.c                     |   2 +-
 arch/x86/kernel/pci-swiotlb.c                   |  15 +-
 arch/x86/kernel/process.c                       |  17 +-
 arch/x86/kernel/setup.c                         |   9 +
 arch/x86/kvm/mmu.c                              |  11 +-
 arch/x86/kvm/mmu.h                              |   2 +-
 arch/x86/kvm/svm.c                              |  35 +-
 arch/x86/kvm/vmx.c                              |   2 +-
 arch/x86/kvm/x86.c                              |   3 +-
 arch/x86/lib/cmdline.c                          | 105 +++++
 arch/x86/mm/Makefile                            |   2 +
 arch/x86/mm/ident_map.c                         |  12 +-
 arch/x86/mm/ioremap.c                           | 287 +++++++++++-
 arch/x86/mm/kasan_init_64.c                     |   6 +-
 arch/x86/mm/mem_encrypt.c                       | 593 ++++++++++++++++++++++++
 arch/x86/mm/mem_encrypt_boot.S                  | 149 ++++++
 arch/x86/mm/pageattr.c                          |  67 +++
 arch/x86/mm/pat.c                               |   9 +-
 arch/x86/mm/tlb.c                               |   4 +-
 arch/x86/pci/common.c                           |   4 +-
 arch/x86/platform/efi/efi.c                     |   6 +-
 arch/x86/platform/efi/efi_64.c                  |  15 +-
 arch/x86/realmode/init.c                        |  12 +
 arch/x86/realmode/rm/trampoline_64.S            |  24 +
 arch/x86/xen/enlighten_pv.c                     |   1 +
 drivers/firmware/dmi-sysfs.c                    |   5 +-
 drivers/firmware/efi/efi.c                      |  33 ++
 drivers/firmware/pcdp.c                         |   4 +-
 drivers/gpu/drm/drm_gem.c                       |   2 +
 drivers/gpu/drm/drm_vm.c                        |   4 +
 drivers/gpu/drm/ttm/ttm_bo_vm.c                 |   7 +-
 drivers/gpu/drm/udl/udl_fb.c                    |   4 +
 drivers/iommu/amd_iommu.c                       |  30 +-
 drivers/iommu/amd_iommu_init.c                  |  34 +-
 drivers/iommu/amd_iommu_proto.h                 |  10 +
 drivers/iommu/amd_iommu_types.h                 |   2 +-
 drivers/sfi/sfi_core.c                          |  22 +-
 drivers/video/fbdev/core/fbmem.c                |  12 +
 include/asm-generic/early_ioremap.h             |   2 +
 include/asm-generic/pgtable.h                   |  12 +
 include/linux/compiler-gcc.h                    |   2 +
 include/linux/compiler.h                        |   4 +
 include/linux/dma-mapping.h                     |  13 +
 include/linux/efi.h                             |   9 +-
 include/linux/io.h                              |   2 +
 include/linux/kexec.h                           |   8 +
 include/linux/mem_encrypt.h                     |  48 ++
 include/linux/swiotlb.h                         |   1 +
 init/main.c                                     |  10 +
 kernel/kexec_core.c                             |  12 +-
 kernel/memremap.c                               |  20 +-
 lib/swiotlb.c                                   |  57 ++-
 mm/early_ioremap.c                              |  28 +-
 90 files changed, 2304 insertions(+), 273 deletions(-)
 create mode 100644 Documentation/x86/amd-memory-encryption.txt
 create mode 100644 arch/x86/include/asm/mem_encrypt.h
 create mode 100644 arch/x86/mm/mem_encrypt.c
 create mode 100644 arch/x86/mm/mem_encrypt_boot.S
 create mode 100644 include/linux/mem_encrypt.h

-- 
1.9.1


_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec


* [PATCH v10 31/38] x86/mm, kexec: Allow kexec to be used with SME
  2017-07-17 21:09 [PATCH v10 00/38] x86: Secure Memory Encryption (AMD) Tom Lendacky
@ 2017-07-17 21:10 ` Tom Lendacky
  2017-07-18 10:58   ` [tip:x86/mm] " tip-bot for Tom Lendacky
  2017-07-18 12:03 ` [PATCH v10 00/38] x86: Secure Memory Encryption (AMD) Thomas Gleixner
  1 sibling, 1 reply; 5+ messages in thread
From: Tom Lendacky @ 2017-07-17 21:10 UTC (permalink / raw)
  To: x86, linux-kernel, linux-arch, linux-efi, linux-doc, linux-mm,
	kvm, kasan-dev
  Cc: Rik van Riel, Radim Krčmář, Toshimitsu Kani,
	Arnd Bergmann, Konrad Rzeszutek Wilk, Matt Fleming,
	Michael S. Tsirkin, Jonathan Corbet, kexec, Paolo Bonzini,
	Larry Woodman, Brijesh Singh, Ingo Molnar, Borislav Petkov,
	Andy Lutomirski, H. Peter Anvin, Andrey Ryabinin,
	Alexander Potapenko, Dave Young, Thomas Gleixner, Dmitry Vyukov

Provide support so that kexec can be used to boot a kernel when SME is
enabled.

Support is needed to allocate pages for kexec without encryption.  This
is needed in order to be able to reboot into the kernel in the same manner
as it was originally booted.

Additionally, when shutting down all of the CPUs we need to be sure to
flush the caches and then halt. This is needed when booting from a state
where SME was not active into a state where SME is active (or vice-versa).
Without these steps, it is possible for cache lines to exist for the same
physical location but tagged both with and without the encryption bit. This
can cause random memory corruption when caches are flushed depending on
which cacheline is written last.

Cc: <kexec@lists.infradead.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/init.h          |  1 +
 arch/x86/include/asm/kexec.h         |  8 ++++++++
 arch/x86/include/asm/pgtable_types.h |  1 +
 arch/x86/kernel/machine_kexec_64.c   | 22 +++++++++++++++++++++-
 arch/x86/kernel/process.c            | 17 +++++++++++++++--
 arch/x86/mm/ident_map.c              | 12 ++++++++----
 include/linux/kexec.h                |  8 ++++++++
 kernel/kexec_core.c                  | 12 +++++++++++-
 8 files changed, 73 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/init.h b/arch/x86/include/asm/init.h
index 474eb8c..05c4aa0 100644
--- a/arch/x86/include/asm/init.h
+++ b/arch/x86/include/asm/init.h
@@ -7,6 +7,7 @@ struct x86_mapping_info {
 	unsigned long page_flag;	 /* page flag for PMD or PUD entry */
 	unsigned long offset;		 /* ident mapping offset */
 	bool direct_gbpages;		 /* PUD level 1GB page support */
+	unsigned long kernpg_flag;	 /* kernel pagetable flag override */
 };
 
 int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 70ef205..e8183ac 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -207,6 +207,14 @@ struct kexec_entry64_regs {
 	uint64_t r15;
 	uint64_t rip;
 };
+
+extern int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages,
+				       gfp_t gfp);
+#define arch_kexec_post_alloc_pages arch_kexec_post_alloc_pages
+
+extern void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages);
+#define arch_kexec_pre_free_pages arch_kexec_pre_free_pages
+
 #endif
 
 typedef void crash_vmclear_fn(void);
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 32095af..830992f 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -213,6 +213,7 @@ enum page_cache_mode {
 #define PAGE_KERNEL		__pgprot(__PAGE_KERNEL | _PAGE_ENC)
 #define PAGE_KERNEL_RO		__pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
 #define PAGE_KERNEL_EXEC	__pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_EXEC_NOENC	__pgprot(__PAGE_KERNEL_EXEC)
 #define PAGE_KERNEL_RX		__pgprot(__PAGE_KERNEL_RX | _PAGE_ENC)
 #define PAGE_KERNEL_NOCACHE	__pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC)
 #define PAGE_KERNEL_LARGE	__pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC)
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index cb0a304..9cf8daa 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -87,7 +87,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
 		set_pmd(pmd, __pmd(__pa(pte) | _KERNPG_TABLE));
 	}
 	pte = pte_offset_kernel(pmd, vaddr);
-	set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, PAGE_KERNEL_EXEC));
+	set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, PAGE_KERNEL_EXEC_NOENC));
 	return 0;
 err:
 	free_transition_pgtable(image);
@@ -115,6 +115,7 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
 		.alloc_pgt_page	= alloc_pgt_page,
 		.context	= image,
 		.page_flag	= __PAGE_KERNEL_LARGE_EXEC,
+		.kernpg_flag	= _KERNPG_TABLE_NOENC,
 	};
 	unsigned long mstart, mend;
 	pgd_t *level4p;
@@ -602,3 +603,22 @@ void arch_kexec_unprotect_crashkres(void)
 {
 	kexec_mark_crashkres(false);
 }
+
+int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
+{
+	/*
+	 * If SME is active we need to be sure that kexec pages are
+	 * not encrypted because when we boot to the new kernel the
+	 * pages won't be accessed encrypted (initially).
+	 */
+	return set_memory_decrypted((unsigned long)vaddr, pages);
+}
+
+void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
+{
+	/*
+	 * If SME is active we need to reset the pages back to being
+	 * an encrypted mapping before freeing them.
+	 */
+	set_memory_encrypted((unsigned long)vaddr, pages);
+}
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 3ca1980..bd6b85f 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -355,6 +355,7 @@ bool xen_set_default_idle(void)
 	return ret;
 }
 #endif
+
 void stop_this_cpu(void *dummy)
 {
 	local_irq_disable();
@@ -365,8 +366,20 @@ void stop_this_cpu(void *dummy)
 	disable_local_APIC();
 	mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
 
-	for (;;)
-		halt();
+	for (;;) {
+		/*
+		 * Use wbinvd followed by hlt to stop the processor. This
+		 * provides support for kexec on a processor that supports
+		 * SME. With kexec, going from SME inactive to SME active
+		 * requires clearing cache entries so that addresses without
+		 * the encryption bit set don't corrupt the same physical
+		 * address that has the encryption bit set when caches are
+		 * flushed. To achieve this a wbinvd is performed followed by
+		 * a hlt. Even if the processor is not in the kexec/SME
+		 * scenario this only adds a wbinvd to a halting processor.
+		 */
+		asm volatile("wbinvd; hlt" : : : "memory");
+	}
 }
 
 /*
diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
index adab159..31cea98 100644
--- a/arch/x86/mm/ident_map.c
+++ b/arch/x86/mm/ident_map.c
@@ -51,7 +51,7 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
 		if (!pmd)
 			return -ENOMEM;
 		ident_pmd_init(info, pmd, addr, next);
-		set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
+		set_pud(pud, __pud(__pa(pmd) | info->kernpg_flag));
 	}
 
 	return 0;
@@ -79,7 +79,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
 		if (!pud)
 			return -ENOMEM;
 		ident_pud_init(info, pud, addr, next);
-		set_p4d(p4d, __p4d(__pa(pud) | _KERNPG_TABLE));
+		set_p4d(p4d, __p4d(__pa(pud) | info->kernpg_flag));
 	}
 
 	return 0;
@@ -93,6 +93,10 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
 	unsigned long next;
 	int result;
 
+	/* Set the default pagetable flags if not supplied */
+	if (!info->kernpg_flag)
+		info->kernpg_flag = _KERNPG_TABLE;
+
 	for (; addr < end; addr = next) {
 		pgd_t *pgd = pgd_page + pgd_index(addr);
 		p4d_t *p4d;
@@ -116,14 +120,14 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
 		if (result)
 			return result;
 		if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
-			set_pgd(pgd, __pgd(__pa(p4d) | _KERNPG_TABLE));
+			set_pgd(pgd, __pgd(__pa(p4d) | info->kernpg_flag));
 		} else {
 			/*
 			 * With p4d folded, pgd is equal to p4d.
 			 * The pgd entry has to point to the pud page table in this case.
 			 */
 			pud_t *pud = pud_offset(p4d, 0);
-			set_pgd(pgd, __pgd(__pa(pud) | _KERNPG_TABLE));
+			set_pgd(pgd, __pgd(__pa(pud) | info->kernpg_flag));
 		}
 	}
 
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index dd056fa..2b7590f 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -327,6 +327,14 @@ static inline void *boot_phys_to_virt(unsigned long entry)
 	return phys_to_virt(boot_phys_to_phys(entry));
 }
 
+#ifndef arch_kexec_post_alloc_pages
+static inline int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp) { return 0; }
+#endif
+
+#ifndef arch_kexec_pre_free_pages
+static inline void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages) { }
+#endif
+
 #else /* !CONFIG_KEXEC_CORE */
 struct pt_regs;
 struct task_struct;
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 1ae7c41..20fef1a 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -301,7 +301,7 @@ static struct page *kimage_alloc_pages(gfp_t gfp_mask, unsigned int order)
 {
 	struct page *pages;
 
-	pages = alloc_pages(gfp_mask, order);
+	pages = alloc_pages(gfp_mask & ~__GFP_ZERO, order);
 	if (pages) {
 		unsigned int count, i;
 
@@ -310,6 +310,13 @@ static struct page *kimage_alloc_pages(gfp_t gfp_mask, unsigned int order)
 		count = 1 << order;
 		for (i = 0; i < count; i++)
 			SetPageReserved(pages + i);
+
+		arch_kexec_post_alloc_pages(page_address(pages), count,
+					    gfp_mask);
+
+		if (gfp_mask & __GFP_ZERO)
+			for (i = 0; i < count; i++)
+				clear_highpage(pages + i);
 	}
 
 	return pages;
@@ -321,6 +328,9 @@ static void kimage_free_pages(struct page *page)
 
 	order = page_private(page);
 	count = 1 << order;
+
+	arch_kexec_pre_free_pages(page_address(page), count);
+
 	for (i = 0; i < count; i++)
 		ClearPageReserved(page + i);
 	__free_pages(page, order);
-- 
1.9.1




* [tip:x86/mm] x86/mm, kexec: Allow kexec to be used with SME
  2017-07-17 21:10 ` [PATCH v10 31/38] x86/mm, kexec: Allow kexec to be used with SME Tom Lendacky
@ 2017-07-18 10:58   ` tip-bot for Tom Lendacky
  0 siblings, 0 replies; 5+ messages in thread
From: tip-bot for Tom Lendacky @ 2017-07-18 10:58 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: brijesh.singh, toshi.kani, rkrcmar, matt, glider, hpa, mingo,
	corbet, mst, lwoodman, peterz, aryabinin, bp, dyoung,
	thomas.lendacky, riel, arnd, konrad.wilk, bp, luto, tglx, dvyukov,
	kexec, linux-kernel, pbonzini, torvalds

Commit-ID:  bba4ed011a52d494aa7ef5e08cf226709bbf3f60
Gitweb:     http://git.kernel.org/tip/bba4ed011a52d494aa7ef5e08cf226709bbf3f60
Author:     Tom Lendacky <thomas.lendacky@amd.com>
AuthorDate: Mon, 17 Jul 2017 16:10:28 -0500
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 18 Jul 2017 11:38:04 +0200

x86/mm, kexec: Allow kexec to be used with SME

Provide support so that kexec can be used to boot a kernel when SME is
enabled.

Support is needed to allocate pages for kexec without encryption.  This
is needed in order to be able to reboot into the kernel in the same manner
as it was originally booted.

Additionally, when shutting down all of the CPUs we need to be sure to
flush the caches and then halt. This is needed when booting from a state
where SME was not active into a state where SME is active (or vice-versa).
Without these steps, it is possible for cache lines to exist for the same
physical location but tagged both with and without the encryption bit. This
can cause random memory corruption when caches are flushed depending on
which cacheline is written last.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Cc: <kexec@lists.infradead.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Larry Woodman <lwoodman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Toshimitsu Kani <toshi.kani@hpe.com>
Cc: kasan-dev@googlegroups.com
Cc: kvm@vger.kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-doc@vger.kernel.org
Cc: linux-efi@vger.kernel.org
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/b95ff075db3e7cd545313f2fb609a49619a09625.1500319216.git.thomas.lendacky@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/init.h          |  1 +
 arch/x86/include/asm/kexec.h         |  8 ++++++++
 arch/x86/include/asm/pgtable_types.h |  1 +
 arch/x86/kernel/machine_kexec_64.c   | 22 +++++++++++++++++++++-
 arch/x86/kernel/process.c            | 17 +++++++++++++++--
 arch/x86/mm/ident_map.c              | 12 ++++++++----
 include/linux/kexec.h                |  8 ++++++++
 kernel/kexec_core.c                  | 12 +++++++++++-
 8 files changed, 73 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/init.h b/arch/x86/include/asm/init.h
index 474eb8c..05c4aa0 100644
--- a/arch/x86/include/asm/init.h
+++ b/arch/x86/include/asm/init.h
@@ -7,6 +7,7 @@ struct x86_mapping_info {
 	unsigned long page_flag;	 /* page flag for PMD or PUD entry */
 	unsigned long offset;		 /* ident mapping offset */
 	bool direct_gbpages;		 /* PUD level 1GB page support */
+	unsigned long kernpg_flag;	 /* kernel pagetable flag override */
 };
 
 int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 70ef205..e8183ac 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -207,6 +207,14 @@ struct kexec_entry64_regs {
 	uint64_t r15;
 	uint64_t rip;
 };
+
+extern int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages,
+				       gfp_t gfp);
+#define arch_kexec_post_alloc_pages arch_kexec_post_alloc_pages
+
+extern void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages);
+#define arch_kexec_pre_free_pages arch_kexec_pre_free_pages
+
 #endif
 
 typedef void crash_vmclear_fn(void);
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 32095af..830992f 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -213,6 +213,7 @@ enum page_cache_mode {
 #define PAGE_KERNEL		__pgprot(__PAGE_KERNEL | _PAGE_ENC)
 #define PAGE_KERNEL_RO		__pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
 #define PAGE_KERNEL_EXEC	__pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_EXEC_NOENC	__pgprot(__PAGE_KERNEL_EXEC)
 #define PAGE_KERNEL_RX		__pgprot(__PAGE_KERNEL_RX | _PAGE_ENC)
 #define PAGE_KERNEL_NOCACHE	__pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC)
 #define PAGE_KERNEL_LARGE	__pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC)
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index cb0a304..9cf8daa 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -87,7 +87,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
 		set_pmd(pmd, __pmd(__pa(pte) | _KERNPG_TABLE));
 	}
 	pte = pte_offset_kernel(pmd, vaddr);
-	set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, PAGE_KERNEL_EXEC));
+	set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, PAGE_KERNEL_EXEC_NOENC));
 	return 0;
 err:
 	free_transition_pgtable(image);
@@ -115,6 +115,7 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
 		.alloc_pgt_page	= alloc_pgt_page,
 		.context	= image,
 		.page_flag	= __PAGE_KERNEL_LARGE_EXEC,
+		.kernpg_flag	= _KERNPG_TABLE_NOENC,
 	};
 	unsigned long mstart, mend;
 	pgd_t *level4p;
@@ -602,3 +603,22 @@ void arch_kexec_unprotect_crashkres(void)
 {
 	kexec_mark_crashkres(false);
 }
+
+int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
+{
+	/*
+	 * If SME is active we need to be sure that kexec pages are
+	 * not encrypted because when we boot to the new kernel the
+	 * pages won't be accessed encrypted (initially).
+	 */
+	return set_memory_decrypted((unsigned long)vaddr, pages);
+}
+
+void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
+{
+	/*
+	 * If SME is active we need to reset the pages back to being
+	 * an encrypted mapping before freeing them.
+	 */
+	set_memory_encrypted((unsigned long)vaddr, pages);
+}
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 3ca1980..bd6b85f 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -355,6 +355,7 @@ bool xen_set_default_idle(void)
 	return ret;
 }
 #endif
+
 void stop_this_cpu(void *dummy)
 {
 	local_irq_disable();
@@ -365,8 +366,20 @@ void stop_this_cpu(void *dummy)
 	disable_local_APIC();
 	mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
 
-	for (;;)
-		halt();
+	for (;;) {
+		/*
+		 * Use wbinvd followed by hlt to stop the processor. This
+		 * provides support for kexec on a processor that supports
+		 * SME. With kexec, going from SME inactive to SME active
+		 * requires clearing cache entries so that addresses without
+		 * the encryption bit set don't corrupt the same physical
+		 * address that has the encryption bit set when caches are
+		 * flushed. To achieve this a wbinvd is performed followed by
+		 * a hlt. Even if the processor is not in the kexec/SME
+		 * scenario this only adds a wbinvd to a halting processor.
+		 */
+		asm volatile("wbinvd; hlt" : : : "memory");
+	}
 }
 
 /*
diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
index adab159..31cea98 100644
--- a/arch/x86/mm/ident_map.c
+++ b/arch/x86/mm/ident_map.c
@@ -51,7 +51,7 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
 		if (!pmd)
 			return -ENOMEM;
 		ident_pmd_init(info, pmd, addr, next);
-		set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
+		set_pud(pud, __pud(__pa(pmd) | info->kernpg_flag));
 	}
 
 	return 0;
@@ -79,7 +79,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
 		if (!pud)
 			return -ENOMEM;
 		ident_pud_init(info, pud, addr, next);
-		set_p4d(p4d, __p4d(__pa(pud) | _KERNPG_TABLE));
+		set_p4d(p4d, __p4d(__pa(pud) | info->kernpg_flag));
 	}
 
 	return 0;
@@ -93,6 +93,10 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
 	unsigned long next;
 	int result;
 
+	/* Set the default pagetable flags if not supplied */
+	if (!info->kernpg_flag)
+		info->kernpg_flag = _KERNPG_TABLE;
+
 	for (; addr < end; addr = next) {
 		pgd_t *pgd = pgd_page + pgd_index(addr);
 		p4d_t *p4d;
@@ -116,14 +120,14 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
 		if (result)
 			return result;
 		if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
-			set_pgd(pgd, __pgd(__pa(p4d) | _KERNPG_TABLE));
+			set_pgd(pgd, __pgd(__pa(p4d) | info->kernpg_flag));
 		} else {
 			/*
 			 * With p4d folded, pgd is equal to p4d.
 			 * The pgd entry has to point to the pud page table in this case.
 			 */
 			pud_t *pud = pud_offset(p4d, 0);
-			set_pgd(pgd, __pgd(__pa(pud) | _KERNPG_TABLE));
+			set_pgd(pgd, __pgd(__pa(pud) | info->kernpg_flag));
 		}
 	}
 
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index dd056fa..2b7590f 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -327,6 +327,14 @@ static inline void *boot_phys_to_virt(unsigned long entry)
 	return phys_to_virt(boot_phys_to_phys(entry));
 }
 
+#ifndef arch_kexec_post_alloc_pages
+static inline int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp) { return 0; }
+#endif
+
+#ifndef arch_kexec_pre_free_pages
+static inline void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages) { }
+#endif
+
 #else /* !CONFIG_KEXEC_CORE */
 struct pt_regs;
 struct task_struct;
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 1ae7c41..20fef1a 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -301,7 +301,7 @@ static struct page *kimage_alloc_pages(gfp_t gfp_mask, unsigned int order)
 {
 	struct page *pages;
 
-	pages = alloc_pages(gfp_mask, order);
+	pages = alloc_pages(gfp_mask & ~__GFP_ZERO, order);
 	if (pages) {
 		unsigned int count, i;
 
@@ -310,6 +310,13 @@ static struct page *kimage_alloc_pages(gfp_t gfp_mask, unsigned int order)
 		count = 1 << order;
 		for (i = 0; i < count; i++)
 			SetPageReserved(pages + i);
+
+		arch_kexec_post_alloc_pages(page_address(pages), count,
+					    gfp_mask);
+
+		if (gfp_mask & __GFP_ZERO)
+			for (i = 0; i < count; i++)
+				clear_highpage(pages + i);
 	}
 
 	return pages;
@@ -321,6 +328,9 @@ static void kimage_free_pages(struct page *page)
 
 	order = page_private(page);
 	count = 1 << order;
+
+	arch_kexec_pre_free_pages(page_address(page), count);
+
 	for (i = 0; i < count; i++)
 		ClearPageReserved(page + i);
 	__free_pages(page, order);
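
[Editor's note: the kimage_alloc_pages() hunk above reorders the allocation path: __GFP_ZERO is masked off so the pages are never written through the still-encrypted mapping, the arch hook remaps them decrypted, and only then is the deferred zeroing done by hand. A minimal user-space sketch of that ordering follows; every name here is a hypothetical stand-in, not a kernel API.]

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define GFP_ZERO 0x1u	/* stand-in for __GFP_ZERO */

/*
 * Stand-in for alloc_pages(): returns memory full of stale bytes to
 * model what a read through a still-encrypted mapping would yield.
 * Note the caller has already masked off GFP_ZERO, because zeroing
 * here would go through the wrong mapping.
 */
static unsigned char *alloc_pages_sim(unsigned int gfp_mask, size_t len)
{
	unsigned char *p;

	(void)gfp_mask;
	p = malloc(len);
	if (p)
		memset(p, 0xAA, len);	/* simulate encrypted garbage */
	return p;
}

/* stand-in for arch_kexec_post_alloc_pages()/set_memory_decrypted() */
static void post_alloc_hook(unsigned char *p, size_t len)
{
	(void)p;
	(void)len;
}

/*
 * Mirrors the kimage_alloc_pages() ordering: allocate without the zero
 * flag, run the arch hook so the mapping is decrypted, and only then
 * honour the caller's request for zeroed pages.
 */
static unsigned char *kimage_alloc_sim(unsigned int gfp_mask, size_t len)
{
	unsigned char *p = alloc_pages_sim(gfp_mask & ~GFP_ZERO, len);

	if (!p)
		return NULL;
	post_alloc_hook(p, len);	/* remap decrypted first... */
	if (gfp_mask & GFP_ZERO)
		memset(p, 0, len);	/* ...then do the deferred zeroing */
	return p;
}
```

The key design point is that the zeroing is a visible write, so it must happen after the mapping attributes are final; doing it inside the allocator would have written zeros through the encrypted mapping and left garbage once the pages are read decrypted.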

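[Editor's note: the include/linux/kexec.h hunk relies on a common kernel idiom for arch-overridable hooks: the arch header declares the function and defines a same-named macro, so the generic header's #ifndef fallback compiles away. A self-contained sketch of the idiom, using a hypothetical hook name rather than the real kexec symbols:]

```c
#include <assert.h>

/*
 * "Arch header" side: provide a real implementation and announce it by
 * defining a macro with the same name as the function.  The macro
 * expands to itself, which is harmless, but its mere existence is what
 * the generic header tests for.
 */
int arch_hook(int pages);
#define arch_hook arch_hook

/*
 * "Generic header" side: fall back to a no-op static inline only when
 * no architecture has claimed the hook.  With the #define above in
 * place, this branch is skipped entirely.
 */
#ifndef arch_hook
static inline int arch_hook(int pages)
{
	(void)pages;
	return 0;	/* default: nothing to do */
}
#endif

/* Arch implementation, standing in for set_memory_decrypted() et al. */
int arch_hook(int pages)
{
	return pages * 2;	/* visibly different from the no-op */
}
```

This keeps the generic code (kernel/kexec_core.c above) free of #ifdef CONFIG_X86 clutter: it calls arch_kexec_post_alloc_pages() unconditionally, and architectures without SME-style requirements pay nothing.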
_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

* Re: [PATCH v10 00/38] x86: Secure Memory Encryption (AMD)
  2017-07-17 21:09 [PATCH v10 00/38] x86: Secure Memory Encryption (AMD) Tom Lendacky
  2017-07-17 21:10 ` [PATCH v10 31/38] x86/mm, kexec: Allow kexec to be used with SME Tom Lendacky
@ 2017-07-18 12:03 ` Thomas Gleixner
  2017-07-18 14:02   ` Tom Lendacky
  1 sibling, 1 reply; 5+ messages in thread
From: Thomas Gleixner @ 2017-07-18 12:03 UTC (permalink / raw)
  To: Tom Lendacky
  Cc: linux-efi, Brijesh Singh, kvm, Radim Krčmář,
	Matt Fleming, x86, linux-mm, Alexander Potapenko, H. Peter Anvin,
	Larry Woodman, linux-arch, Toshimitsu Kani, Jonathan Corbet,
	Joerg Roedel, linux-doc, kasan-dev, Ingo Molnar, Andrey Ryabinin,
	Dave Young, Rik van Riel, Arnd Bergmann, Konrad Rzeszutek Wilk,
	Borislav Petkov, Andy Lutomirski, Boris Ostrovsky, Dmitry Vyukov,
	Juergen Gross, kexec, linux-kernel, xen-devel, iommu,
	Michael S. Tsirkin, Paolo Bonzini

On Mon, 17 Jul 2017, Tom Lendacky wrote:
> This patch series provides support for AMD's new Secure Memory Encryption (SME)
> feature.
> 
> SME can be used to mark individual pages of memory as encrypted through the
> page tables. A page of memory that is marked encrypted will be automatically
> decrypted when read from DRAM and will be automatically encrypted when
> written to DRAM. Details on SME can be found in the links below.
> 
> The SME feature is identified through a CPUID function and enabled through
> the SYSCFG MSR. Once enabled, page table entries will determine how the
> memory is accessed. If a page table entry has the memory encryption mask set,
> then that memory will be accessed as encrypted memory. The memory encryption
> mask (as well as other related information) is determined from settings
> returned through the same CPUID function that identifies the presence of the
> feature.
> 
> The approach that this patch series takes is to encrypt everything possible
> starting early in the boot where the kernel is encrypted. Using the page
> table macros the encryption mask can be incorporated into all page table
> entries and page allocations. By updating the protection map, userspace
> allocations are also marked encrypted. Certain data must be accounted for
> as having been placed in memory before SME was enabled (EFI, initrd, etc.)
> and accessed accordingly.
> 
> This patch series is a precursor to another AMD processor feature called
> Secure Encrypted Virtualization (SEV). The support for SEV will build upon
> the SME support and will be submitted later. Details on SEV can be found
> in the links below.

Well done series. Thanks to all people involved, especially Tom and Boris!
It was a pleasure to review that.

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>

* Re: [PATCH v10 00/38] x86: Secure Memory Encryption (AMD)
  2017-07-18 12:03 ` [PATCH v10 00/38] x86: Secure Memory Encryption (AMD) Thomas Gleixner
@ 2017-07-18 14:02   ` Tom Lendacky
  0 siblings, 0 replies; 5+ messages in thread
From: Tom Lendacky @ 2017-07-18 14:02 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: linux-efi, Brijesh Singh, kvm, Radim Krčmář,
	Matt Fleming, x86, linux-mm, Alexander Potapenko, H. Peter Anvin,
	Larry Woodman, linux-arch, Toshimitsu Kani, Jonathan Corbet,
	Joerg Roedel, linux-doc, kasan-dev, Ingo Molnar, Andrey Ryabinin,
	Dave Young, Rik van Riel, Arnd Bergmann, Konrad Rzeszutek Wilk,
	Borislav Petkov, Andy Lutomirski, Boris Ostrovsky, Dmitry Vyukov,
	Juergen Gross, kexec, linux-kernel, xen-devel, iommu,
	Michael S. Tsirkin, Paolo Bonzini

On 7/18/2017 7:03 AM, Thomas Gleixner wrote:
> On Mon, 17 Jul 2017, Tom Lendacky wrote:
>> [...]
> 
> Well done series. Thanks to all people involved, especially Tom and Boris!
> It was a pleasure to review that.
> 
> Reviewed-by: Thomas Gleixner <tglx@linutronix.de>

A big thanks from me to everyone who helped review this.  I truly
appreciate all the time that everyone put into this - especially Boris,
who helped guide this series from the start.

Thanks,
Tom

> 

Thread overview: 5+ messages
2017-07-17 21:09 [PATCH v10 00/38] x86: Secure Memory Encryption (AMD) Tom Lendacky
2017-07-17 21:10 ` [PATCH v10 31/38] x86/mm, kexec: Allow kexec to be used with SME Tom Lendacky
2017-07-18 10:58   ` [tip:x86/mm] " tip-bot for Tom Lendacky
2017-07-18 12:03 ` [PATCH v10 00/38] x86: Secure Memory Encryption (AMD) Thomas Gleixner
2017-07-18 14:02   ` Tom Lendacky
