From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bharata B Rao
Subject: [RFC PATCH v2 3/4] kvmppc: H_SVM_INIT_START and H_SVM_INIT_DONE hcalls
Date: Wed, 21 Nov 2018 10:58:10 +0530
In-Reply-To: <20181121052811.4819-1-bharata@linux.ibm.com>
References: <20181121052811.4819-1-bharata@linux.ibm.com>
Message-Id: <20181121052811.4819-4-bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com,
	benh@linux.ibm.com, aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com,
	linuxram@us.ibm.com, Bharata B Rao

H_SVM_INIT_START: Initiate securing a VM
H_SVM_INIT_DONE: Conclude securing a VM

During early guest init, these hcalls will be issued by the Ultravisor (UV).
As part of these hcalls, [un]register memslots with UV.
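
For illustration only (not part of the patch): a minimal, self-contained C
model of the state machine these two hcalls implement on the hypervisor side.
The structures, helper signatures and return values below are simplified
stand-ins for the real KVM/Ultravisor types and PAPR codes; the point is the
ordering the real handlers enforce: H_SVM_INIT_START registers every memslot
with the UV before the VM is marked secure, and H_SVM_INIT_DONE succeeds only
once that has happened.

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative return codes only; not the actual PAPR values. */
enum { H_SUCCESS = 0, H_PARAMETER = -4, H_UNSUPPORTED = -2 };

struct mock_memslot { uint64_t base_gfn, npages, id; };
struct mock_vm { bool secure; int nslots; struct mock_memslot slots[2]; };

/* Stand-in for the uv_register_mem_slot() ucall into the Ultravisor. */
static int uv_register_mem_slot(uint64_t start_gpa, uint64_t size, uint64_t slotid)
{
	printf("UV: registered slot %llu: gpa 0x%llx, size 0x%llx\n",
	       (unsigned long long)slotid, (unsigned long long)start_gpa,
	       (unsigned long long)size);
	return 0;
}

static long h_svm_init_start(struct mock_vm *vm)
{
	/* Register every memslot with the UV, as the real handler does. */
	for (int i = 0; i < vm->nslots; i++) {
		struct mock_memslot *m = &vm->slots[i];

		/* Assume 64K pages for the gfn -> gpa / size arithmetic. */
		if (uv_register_mem_slot(m->base_gfn << 16, m->npages << 16,
					 m->id) < 0)
			return H_PARAMETER;
	}
	vm->secure = true;	/* only now may H_SVM_INIT_DONE succeed */
	return H_SUCCESS;
}

static long h_svm_init_done(struct mock_vm *vm)
{
	return vm->secure ? H_SUCCESS : H_UNSUPPORTED;
}

int main(void)
{
	struct mock_vm vm = { .nslots = 2,
			      .slots = { { 0, 0x100, 0 }, { 0x100, 0x100, 1 } } };

	printf("INIT_DONE before INIT_START -> %ld\n", h_svm_init_done(&vm));
	printf("INIT_START -> %ld\n", h_svm_init_start(&vm));
	printf("INIT_DONE -> %ld\n", h_svm_init_done(&vm));
	return 0;
}
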
Signed-off-by: Bharata B Rao
---
 arch/powerpc/include/asm/hvcall.h    |  2 ++
 arch/powerpc/include/asm/kvm_host.h  |  1 +
 arch/powerpc/include/asm/kvm_ppc.h   | 12 ++++++++++
 arch/powerpc/include/asm/ucall-api.h |  6 +++++
 arch/powerpc/kvm/book3s_hv.c         |  6 +++++
 arch/powerpc/kvm/book3s_hv_hmm.c     | 33 ++++++++++++++++++++++++++++
 6 files changed, 60 insertions(+)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 34791c627f87..4872b044cca8 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -342,6 +342,8 @@
 /* Platform-specific hcalls used by the Ultravisor */
 #define H_SVM_PAGE_IN		0xFF00
 #define H_SVM_PAGE_OUT		0xFF04
+#define H_SVM_INIT_START	0xFF08
+#define H_SVM_INIT_DONE		0xFF0C
 
 /* Values for 2nd argument to H_SET_MODE */
 #define H_SET_MODE_RESOURCE_SET_CIABR	1
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 729bdea22250..174aa7e30ff7 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -296,6 +296,7 @@ struct kvm_arch {
 	struct dentry *htab_dentry;
 	struct dentry *radix_dentry;
 	struct kvm_resize_hpt *resize_hpt; /* protected by kvm->lock */
+	bool secure; /* Indicates H_SVM_INIT_START has been called */
 #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
 #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
 	struct mutex hpt_mutex;
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 659c80982497..5f4b6a73789f 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -919,6 +919,8 @@ extern unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
 					   unsigned long gra,
 					   unsigned long flags,
 					   unsigned long page_shift);
+extern unsigned long kvmppc_h_svm_init_start(struct kvm *kvm);
+extern unsigned long kvmppc_h_svm_init_done(struct kvm *kvm);
 #else
 static inline unsigned long
 kvmppc_h_svm_page_in(struct kvm *kvm, unsigned int lpid,
@@ -935,5 +937,15 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned int lpid,
 {
 	return H_UNSUPPORTED;
 }
+
+static inline unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
+{
+	return H_UNSUPPORTED;
+}
+
+static inline unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
+{
+	return H_UNSUPPORTED;
+}
 #endif
 #endif /* __POWERPC_KVM_PPC_H__ */
diff --git a/arch/powerpc/include/asm/ucall-api.h b/arch/powerpc/include/asm/ucall-api.h
index a84dc2abd172..347637995b1b 100644
--- a/arch/powerpc/include/asm/ucall-api.h
+++ b/arch/powerpc/include/asm/ucall-api.h
@@ -19,4 +19,10 @@ static inline int uv_page_out(u64 lpid, u64 dst_ra, u64 src_gpa, u64 flags,
 	return U_SUCCESS;
 }
 
+static inline int uv_register_mem_slot(u64 lpid, u64 start_gpa, u64 size,
+				       u64 flags, u64 slotid)
+{
+	return 0;
+}
+
 #endif /* _ASM_POWERPC_UCALL_API_H */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 7e413605e7c4..d7aa85330016 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1006,6 +1006,12 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 					 kvmppc_get_gpr(vcpu, 6),
 					 kvmppc_get_gpr(vcpu, 7));
 		break;
+	case H_SVM_INIT_START:
+		ret = kvmppc_h_svm_init_start(vcpu->kvm);
+		break;
+	case H_SVM_INIT_DONE:
+		ret = kvmppc_h_svm_init_done(vcpu->kvm);
+		break;
 
 	default:
 		return RESUME_HOST;
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
index 2730ab832330..e138b0edee9f 100644
--- a/arch/powerpc/kvm/book3s_hv_hmm.c
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -55,6 +55,39 @@ struct kvmppc_hmm_migrate_args {
 	unsigned long page_shift;
 };
 
+unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int ret = H_SUCCESS;
+	int srcu_idx;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots) {
+		ret = uv_register_mem_slot(kvm->arch.lpid,
+					   memslot->base_gfn << PAGE_SHIFT,
+					   memslot->npages * PAGE_SIZE,
+					   0, memslot->id);
+		if (ret < 0) {
+			ret = H_PARAMETER; /* TODO: proper retval */
+			goto out;
+		}
+	}
+	kvm->arch.secure = true;
+out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	return ret;
+}
+
+unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
+{
+	if (kvm->arch.secure)
+		return H_SUCCESS;
+	else
+		return H_UNSUPPORTED;
+}
+
 #define KVMPPC_PFN_HMM		(0x1ULL << 61)
 
 static inline bool kvmppc_is_hmm_pfn(unsigned long pfn)
-- 
2.17.1