From: Brijesh Singh
To: linux-kernel@vger.kernel.org, x86@kernel.org, linux-efi@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Borislav Petkov,
    Andy Lutomirski, Tony Luck, Piotr Luc, Tom Lendacky, Fenghua Yu,
    Lu Baolu, Reza Arbab, David Howells, Matt Fleming,
    Kirill A. Shutemov, Laura Abbott, Ard Biesheuvel, Andrew Morton,
    Eric Biederman, Benjamin Herrenschmidt, Paul Mackerras,
    Konrad Rzeszutek Wilk, Jonathan Corbet, Dave Airlie, Kees Cook,
    Paolo Bonzini, Radim Krčmář, Arnd Bergmann, Tejun Heo,
    Christoph Lameter, Brijesh Singh
Subject: [RFC Part1 PATCH v3 11/17] x86/mm, resource: Use PAGE_KERNEL protection for ioremap of memory pages
Date: Mon, 24 Jul 2017 14:07:51 -0500
Message-Id: <20170724190757.11278-12-brijesh.singh@amd.com>
In-Reply-To: <20170724190757.11278-1-brijesh.singh@amd.com>
References: <20170724190757.11278-1-brijesh.singh@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: Linux on PowerPC Developers Mail List

From: Tom Lendacky

In order for memory pages to be properly mapped when SEV is active, we
need to use the PAGE_KERNEL protection attribute as the base
protection. This will ensure that the memory mapping of, e.g., ACPI
tables receives the proper mapping attributes.

Signed-off-by: Tom Lendacky
Signed-off-by: Brijesh Singh
---
 arch/x86/mm/ioremap.c  | 28 ++++++++++++++++++++++++++++
 include/linux/ioport.h |  3 +++
 kernel/resource.c      | 17 +++++++++++++++++
 3 files changed, 48 insertions(+)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index c0be7cf..7b27332 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -69,6 +69,26 @@ static int __ioremap_check_ram(unsigned long start_pfn, unsigned long nr_pages,
 	return 0;
 }
 
+static int __ioremap_res_desc_other(struct resource *res, void *arg)
+{
+	return (res->desc != IORES_DESC_NONE);
+}
+
+/*
+ * This function returns true if the target memory is marked as
+ * IORESOURCE_MEM and IORESOURCE_BUSY and described as other than
+ * IORES_DESC_NONE (e.g. IORES_DESC_ACPI_TABLES).
+ */
+static bool __ioremap_check_if_mem(resource_size_t addr, unsigned long size)
+{
+	u64 start, end;
+
+	start = (u64)addr;
+	end = start + size - 1;
+
+	return (walk_mem_res(start, end, NULL, __ioremap_res_desc_other) == 1);
+}
+
 /*
  * Remap an arbitrary physical address space into the kernel virtual
  * address space. It transparently creates kernel huge I/O mapping when
@@ -146,7 +166,15 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 		pcm = new_pcm;
 	}
 
+	/*
+	 * If the page being mapped is in memory and SEV is active then
+	 * make sure the memory encryption attribute is enabled in the
+	 * resulting mapping.
+	 */
 	prot = PAGE_KERNEL_IO;
+	if (sev_active() && __ioremap_check_if_mem(phys_addr, size))
+		prot = pgprot_encrypted(prot);
+
 	switch (pcm) {
 	case _PAGE_CACHE_MODE_UC:
 	default:
diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index 1c66b9c..297f5b8 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -268,6 +268,9 @@ extern int
 walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
 		void *arg, int (*func)(unsigned long, unsigned long, void *));
 extern int
+walk_mem_res(u64 start, u64 end, void *arg,
+	     int (*func)(struct resource *, void *));
+extern int
 walk_system_ram_res(u64 start, u64 end, void *arg,
 		    int (*func)(struct resource *, void *));
 extern int
diff --git a/kernel/resource.c b/kernel/resource.c
index 5f9ee7bb0..ec3fa0c 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -468,6 +468,23 @@ int walk_system_ram_res(u64 start, u64 end, void *arg,
 				     arg, func);
 }
 
+/*
+ * This function calls the @func callback against all memory ranges, which
+ * are ranges marked as IORESOURCE_MEM and IORESOURCE_BUSY.
+ */
+int walk_mem_res(u64 start, u64 end, void *arg,
+		 int (*func)(struct resource *, void *))
+{
+	struct resource res;
+
+	res.start = start;
+	res.end = end;
+	res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+
+	return __walk_iomem_res_desc(&res, IORES_DESC_NONE, true,
+				     arg, func);
+}
+
 #if !defined(CONFIG_ARCH_HAS_WALK_MEMORY)
 
 /*
-- 
2.9.4
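
For readers who want to trace the decision being added to __ioremap_caller()
above, here is a minimal, standalone sketch (not kernel code) of the same
logic: when SEV is active and the target range is a busy memory resource
described as something other than IORES_DESC_NONE (e.g. ACPI tables), the
mapping protection gains the encryption attribute. The names sev_enabled,
range_is_described_mem() and the PROT_* constants are hypothetical stand-ins
for sev_active(), __ioremap_check_if_mem() and the pgprot machinery; they are
illustrative only.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PROT_KERNEL_IO	0x1u	/* stand-in for PAGE_KERNEL_IO         */
#define PROT_ENC	0x2u	/* stand-in for the SEV encryption bit */

/* Stand-in for sev_active(). */
static bool sev_enabled = true;

/*
 * Stand-in for __ioremap_check_if_mem(): pretend the range is busy RAM
 * with a specific descriptor such as IORES_DESC_ACPI_TABLES.
 */
static bool range_is_described_mem(uint64_t addr, uint64_t size)
{
	(void)addr;
	(void)size;
	return true;
}

static uint32_t ioremap_prot(uint64_t phys_addr, uint64_t size)
{
	uint32_t prot = PROT_KERNEL_IO;

	/* Mirrors "prot = pgprot_encrypted(prot)" in the patch above. */
	if (sev_enabled && range_is_described_mem(phys_addr, size))
		prot |= PROT_ENC;

	return prot;
}

int main(void)
{
	printf("prot = %#x\n", (unsigned int)ioremap_prot(0x7f000000u, 0x1000u));
	return 0;
}

In the patch itself the check walks the iomem resource tree via the new
walk_mem_res() helper, so only ranges carrying a specific resource
descriptor are mapped encrypted, while ordinary MMIO keeps the plain
PAGE_KERNEL_IO attributes.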