From mboxrd@z Thu Jan 1 00:00:00 1970
Reply-To: mimu@linux.ibm.com
Subject: Re: [PATCH v5 1/8] s390/mm: force swiotlb for protected virtualization
References: <20190612111236.99538-1-pasic@linux.ibm.com>
 <20190612111236.99538-2-pasic@linux.ibm.com>
From: Michael Mueller
Date: Thu, 13 Jun 2019 10:09:01 +0200
MIME-Version: 1.0
In-Reply-To: <20190612111236.99538-2-pasic@linux.ibm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Message-Id: <38e3b7bf-1882-a12d-073c-04d888ed2432@linux.ibm.com>
Sender: linux-s390-owner@vger.kernel.org
To: Halil Pasic, kvm@vger.kernel.org, linux-s390@vger.kernel.org,
 Cornelia Huck, Sebastian Ott, Heiko Carstens
Cc: virtualization@lists.linux-foundation.org, "Michael S. Tsirkin",
 Christoph Hellwig, Thomas Huth, Christian Borntraeger,
 Viktor Mihajlovski, Vasily Gorbik, Janosch Frank, Claudio Imbrenda,
 Farhan Ali, Eric Farman, "Jason J. Herne"

On 12.06.19 13:12, Halil Pasic wrote:
> On s390, protected virtualization guests have to use bounced I/O
> buffers. That requires some plumbing.
sed 's/  / /'

> 
> Let us make sure, any device that uses DMA API with direct ops correctly
> is spared from the problems, that a hypervisor attempting I/O to a
> non-shared page would bring.

That sentence reads pretty cumbersome.

> 
> Signed-off-by: Halil Pasic
> Reviewed-by: Claudio Imbrenda
> ---
>  arch/s390/Kconfig                   |  4 +++
>  arch/s390/include/asm/mem_encrypt.h | 18 +++++++++++
>  arch/s390/mm/init.c                 | 47 +++++++++++++++++++++++++++++
>  3 files changed, 69 insertions(+)
>  create mode 100644 arch/s390/include/asm/mem_encrypt.h
> 
> diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
> index 109243fdb6ec..88d8355b7bf7 100644
> --- a/arch/s390/Kconfig
> +++ b/arch/s390/Kconfig
> @@ -1,4 +1,7 @@
>  # SPDX-License-Identifier: GPL-2.0
> +config ARCH_HAS_MEM_ENCRYPT
> +	def_bool y
> +
>  config MMU
>  	def_bool y
> 
> @@ -187,6 +190,7 @@ config S390
>  	select VIRT_CPU_ACCOUNTING
>  	select ARCH_HAS_SCALED_CPUTIME
>  	select HAVE_NMI
> +	select SWIOTLB
> 
> 
>  config SCHED_OMIT_FRAME_POINTER
> diff --git a/arch/s390/include/asm/mem_encrypt.h b/arch/s390/include/asm/mem_encrypt.h
> new file mode 100644
> index 000000000000..0898c09a888c
> --- /dev/null
> +++ b/arch/s390/include/asm/mem_encrypt.h
> @@ -0,0 +1,18 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef S390_MEM_ENCRYPT_H__
> +#define S390_MEM_ENCRYPT_H__
> +
> +#ifndef __ASSEMBLY__
> +
> +#define sme_me_mask	0ULL
> +
> +static inline bool sme_active(void) { return false; }
> +extern bool sev_active(void);
> +
> +int set_memory_encrypted(unsigned long addr, int numpages);
> +int set_memory_decrypted(unsigned long addr, int numpages);
> +
> +#endif /* __ASSEMBLY__ */
> +
> +#endif /* S390_MEM_ENCRYPT_H__ */
> +
> diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
> index 14d1eae9fe43..f0bee6af3960 100644
> --- a/arch/s390/mm/init.c
> +++ b/arch/s390/mm/init.c
> @@ -18,6 +18,7 @@
>  #include <...>
>  #include <...>
>  #include <...>
> +#include <...>
>  #include <...>
>  #include <...>
>  #include <...>
> @@ -29,6 +30,7 @@
>  #include <...>
>  #include <...>
>  #include <...>
> +#include <...>
>  #include <...>
>  #include <...>
>  #include <...>
> @@ -42,6 +44,8 @@
>  #include <...>
>  #include <...>
>  #include <...>
> +#include <...>
> +#include <...>
> 
>  pgd_t swapper_pg_dir[PTRS_PER_PGD] __section(.bss..swapper_pg_dir);
> 
> @@ -128,6 +132,47 @@ void mark_rodata_ro(void)
>  	pr_info("Write protected read-only-after-init data: %luk\n", size >> 10);
>  }
> 
> +int set_memory_encrypted(unsigned long addr, int numpages)
> +{
> +	int i;
> +
> +	/* make specified pages unshared, (swiotlb, dma_free) */
> +	for (i = 0; i < numpages; ++i) {
> +		uv_remove_shared(addr);
> +		addr += PAGE_SIZE;
> +	}
> +	return 0;
> +}
> +
> +int set_memory_decrypted(unsigned long addr, int numpages)
> +{
> +	int i;
> +	/* make specified pages shared (swiotlb, dma_alloca) */
> +	for (i = 0; i < numpages; ++i) {
> +		uv_set_shared(addr);
> +		addr += PAGE_SIZE;
> +	}
> +	return 0;
> +}
> +
> +/* are we a protected virtualization guest? */
> +bool sev_active(void)
> +{
> +	return is_prot_virt_guest();
> +}
> +
> +/* protected virtualization */
> +static void pv_init(void)
> +{
> +	if (!is_prot_virt_guest())
> +		return;
> +
> +	/* make sure bounce buffers are shared */
> +	swiotlb_init(1);
> +	swiotlb_update_mem_attributes();
> +	swiotlb_force = SWIOTLB_FORCE;
> +}
> +
>  void __init mem_init(void)
>  {
>  	cpumask_set_cpu(0, &init_mm.context.cpu_attach_mask);
> @@ -136,6 +181,8 @@ void __init mem_init(void)
>  	set_max_mapnr(max_low_pfn);
>  	high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
> 
> +	pv_init();
> +
>  	/* Setup guest page hinting */
>  	cmma_init();
> 

Reviewed-by: Michael Mueller

Michael