From: Borislav Petkov <bp@alien8.de>
To: Michael Kelley <mikelley@microsoft.com>
Cc: hpa@zytor.com, kys@microsoft.com, haiyangz@microsoft.com,
wei.liu@kernel.org, decui@microsoft.com, luto@kernel.org,
peterz@infradead.org, davem@davemloft.net, edumazet@google.com,
kuba@kernel.org, pabeni@redhat.com, lpieralisi@kernel.org,
robh@kernel.org, kw@linux.com, bhelgaas@google.com,
arnd@arndb.de, hch@infradead.org, m.szyprowski@samsung.com,
robin.murphy@arm.com, thomas.lendacky@amd.com,
brijesh.singh@amd.com, tglx@linutronix.de, mingo@redhat.com,
dave.hansen@linux.intel.com, Tianyu.Lan@microsoft.com,
kirill.shutemov@linux.intel.com,
sathyanarayanan.kuppuswamy@linux.intel.com, ak@linux.intel.com,
isaku.yamahata@intel.com, dan.j.williams@intel.com,
jane.chu@oracle.com, seanjc@google.com, tony.luck@intel.com,
x86@kernel.org, linux-kernel@vger.kernel.org,
linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
linux-pci@vger.kernel.org, linux-arch@vger.kernel.org,
iommu@lists.linux.dev
Subject: Re: [Patch v4 06/13] x86/hyperv: Change vTOM handling to use standard coco mechanisms
Date: Mon, 9 Jan 2023 17:38:36 +0100
Message-ID: <Y7xDDNMIDyHKLicG@zn.tnic>
In-Reply-To: <1669951831-4180-7-git-send-email-mikelley@microsoft.com>

On Thu, Dec 01, 2022 at 07:30:24PM -0800, Michael Kelley wrote:
> Hyper-V guests on AMD SEV-SNP hardware have the option of using the
> "virtual Top Of Memory" (vTOM) feature specified by the SEV-SNP
> architecture. With vTOM, shared vs. private memory accesses are
> controlled by splitting the guest physical address space into two
> halves. vTOM is the dividing line where the uppermost bit of the
> physical address space is set; e.g., with 47 bits of guest physical
> address space, vTOM is 0x400000000000 (bit 46 is set). Guest physical
> memory is accessible at two parallel physical addresses -- one below
> vTOM and one above vTOM. Accesses below vTOM are private (encrypted)
> while accesses above vTOM are shared (decrypted). In this sense, vTOM
> is like the GPA.SHARED bit in Intel TDX.
>
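[ As a side note, the address split described above is easy to see in plain C. The bit position and boundary value below come straight from the example in the text (47-bit GPA space, vTOM at bit 46); the helper names are illustrative, not from the patch. ]

```c
#include <assert.h>
#include <stdint.h>

/* 47 bits of guest physical address space: vTOM is bit 46, so the
 * boundary sits at 0x400000000000 as the commit message says. */
#define VTOM_BIT	46
#define VTOM_MASK	(1ULL << VTOM_BIT)

/* The shared (decrypted) alias of a GPA lives above vTOM: same address
 * with the vTOM bit set. */
static uint64_t shared_alias(uint64_t gpa)
{
	return gpa | VTOM_MASK;
}

/* The private (encrypted) alias lives below vTOM: vTOM bit clear. */
static uint64_t private_alias(uint64_t gpa)
{
	return gpa & ~VTOM_MASK;
}
```

So a page at GPA 0x1000 is private at 0x1000 and shared at 0x400000001000; the two mappings reference the same guest memory.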
> Support for Hyper-V guests using vTOM was added to the Linux kernel in
> two patch sets[1][2]. This support treats the vTOM bit as part of
> the physical address. For accessing shared (decrypted) memory, these
> patch sets create a second kernel virtual mapping that maps to physical
> addresses above vTOM.
>
> A better approach is to treat the vTOM bit as a protection flag, not
> as part of the physical address. This new approach is like the approach
> for the GPA.SHARED bit in Intel TDX. Rather than creating a second kernel
> virtual mapping, the existing mapping is updated using recently added
> coco mechanisms. When memory is changed between private and shared using
> set_memory_decrypted() and set_memory_encrypted(), the PTEs for the
> existing kernel mapping are changed to add or remove the vTOM bit
> in the guest physical address, just as with TDX. The hypercalls to
> change the memory status on the host side are made using the existing
> callback mechanism. Everything just works, with a minor tweak to map
> the IO-APIC to use private accesses.
>
> To accomplish the switch in approach, the following must be done in
> this single patch:
s/in this single patch//
> * Update Hyper-V initialization to set the cc_mask based on vTOM
> and do other coco initialization.
>
> * Update physical_mask so the vTOM bit is no longer treated as part
> of the physical address
>
> * Remove CC_VENDOR_HYPERV and merge the associated vTOM functionality
> under CC_VENDOR_AMD. Update cc_mkenc() and cc_mkdec() to set/clear
> the vTOM bit as a protection flag.
>
> * Code already exists to make hypercalls to inform Hyper-V about pages
> changing between shared and private. Update this code to run as a
> callback from __set_memory_enc_pgtable().
>
> * Remove the Hyper-V special case from __set_memory_enc_dec()
>
> * Remove the Hyper-V specific call to swiotlb_update_mem_attributes()
> since mem_encrypt_init() will now do it.
>
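[ The cc_mkenc()/cc_mkdec() change in the third bullet boils down to treating the vTOM bit as a protection flag on the PTE value. A minimal userspace sketch, with illustrative helper names and an arbitrary PTE value rather than the kernel's actual code: ]

```c
#include <assert.h>
#include <stdint.h>

/* With vTOM, cc_mask is the vTOM bit itself. Note the polarity is the
 * inverse of the AMD C-bit convention: "encrypted" (private) means the
 * bit is CLEAR, "decrypted" (shared) means the bit is SET. */
#define CC_MASK		(1ULL << 46)

/* Modeled after cc_mkenc()/cc_mkdec(): flip only the protection flag,
 * leaving the rest of the PTE bits alone. */
static uint64_t mkenc(uint64_t prot)
{
	return prot & ~CC_MASK;
}

static uint64_t mkdec(uint64_t prot)
{
	return prot | CC_MASK;
}
```

This is why no second virtual mapping is needed: set_memory_decrypted()/set_memory_encrypted() just rewrite the existing PTEs with the flag set or cleared.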
> [1] https://lore.kernel.org/all/20211025122116.264793-1-ltykernel@gmail.com/
> [2] https://lore.kernel.org/all/20211213071407.314309-1-ltykernel@gmail.com/
>
> Signed-off-by: Michael Kelley <mikelley@microsoft.com>
> ---
> arch/x86/coco/core.c | 37 ++++++++++++++++++++--------
> arch/x86/hyperv/hv_init.c | 11 ---------
> arch/x86/hyperv/ivm.c | 52 +++++++++++++++++++++++++++++++---------
> arch/x86/include/asm/coco.h | 1 -
> arch/x86/include/asm/mshyperv.h | 8 ++-----
> arch/x86/include/asm/msr-index.h | 1 +
> arch/x86/kernel/cpu/mshyperv.c | 15 ++++++------
> arch/x86/mm/pat/set_memory.c | 3 ---
> 8 files changed, 78 insertions(+), 50 deletions(-)
>
> diff --git a/arch/x86/coco/core.c b/arch/x86/coco/core.c
> index 49b44f8..c361c52 100644
> --- a/arch/x86/coco/core.c
> +++ b/arch/x86/coco/core.c
> @@ -44,6 +44,24 @@ static bool intel_cc_platform_has(enum cc_attr attr)
> static bool amd_cc_platform_has(enum cc_attr attr)
> {
> #ifdef CONFIG_AMD_MEM_ENCRYPT
> +
> + /*
> + * Handle the SEV-SNP vTOM case where sme_me_mask must be zero,
> + * and the other levels of SME/SEV functionality, including C-bit
> + * based SEV-SNP, must not be enabled.
> + */
> + if (sev_status & MSR_AMD64_SNP_VTOM_ENABLED) {
	return amd_cc_platform_vtom(attr);
or so, and stick that switch in there.
As it stands, the check looks kinda grafted in front; with a function call
with a telling name it is obvious that this is a special case...
> + switch (attr) {
> + case CC_ATTR_GUEST_MEM_ENCRYPT:
> + case CC_ATTR_MEM_ENCRYPT:
> + case CC_ATTR_ACCESS_IOAPIC_ENCRYPTED:
> + return true;
> + default:
> + return false;
> + }
> + }
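[ Spelled out, the suggested factoring could look like the sketch below. This is a standalone userspace mock, not the kernel's enum: the three attribute names are copied from the hunk above, and one extra attribute is made up here just to exercise the default path. ]

```c
#include <assert.h>
#include <stdbool.h>

/* Mock of the attributes used in the quoted hunk, plus a hypothetical
 * fourth one to demonstrate the fall-through to false. */
enum cc_attr {
	CC_ATTR_MEM_ENCRYPT,
	CC_ATTR_GUEST_MEM_ENCRYPT,
	CC_ATTR_ACCESS_IOAPIC_ENCRYPTED,
	CC_ATTR_HOST_MEM_ENCRYPT,
};

/* The vTOM special case gets its own helper with a telling name instead
 * of a switch grafted into the front of amd_cc_platform_has(). */
static bool amd_cc_platform_vtom(enum cc_attr attr)
{
	switch (attr) {
	case CC_ATTR_GUEST_MEM_ENCRYPT:
	case CC_ATTR_MEM_ENCRYPT:
	case CC_ATTR_ACCESS_IOAPIC_ENCRYPTED:
		return true;
	default:
		return false;
	}
}
```

amd_cc_platform_has() then reduces to an early `if (sev_status & MSR_AMD64_SNP_VTOM_ENABLED) return amd_cc_platform_vtom(attr);` before the normal SME/SEV handling.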
The rest looks kinda nice, I gotta say. I can't complain. :)
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette