From mboxrd@z Thu Jan 1 00:00:00 1970
From: Steffen Eiden <seiden@linux.ibm.com>
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org
Cc: Andreas Grapentin, Arnd Bergmann, Catalin Marinas, Christian Borntraeger,
	Claudio Imbrenda, David Hildenbrand, Gautam Gala, Hendrik Brueckner,
	Janosch Frank, Joey Gouly, Marc Zyngier, Nina Schoetterl-Glausch,
	Oliver Upton, Paolo Bonzini, Suzuki K Poulose, Ulrich Weigand,
	Will Deacon, Zenghui Yu
Subject: [PATCH v2 26/28] KVM: s390: arm64: Implement vCPU IOCTLs
Date: Tue, 28 Apr 2026 17:56:18 +0200
Message-ID: <20260428155622.1361364-27-seiden@linux.ibm.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260428155622.1361364-1-seiden@linux.ibm.com>
References: <20260428155622.1361364-1-seiden@linux.ibm.com>
Precedence: bulk
X-Mailing-List: linux-s390@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Implement all vCPU IOCTLs.

Co-developed-by: Andreas Grapentin
Signed-off-by: Andreas Grapentin
Co-developed-by: Nina Schoetterl-Glausch
Signed-off-by: Nina Schoetterl-Glausch
Signed-off-by: Steffen Eiden
---
 arch/s390/kvm/arm64/arm.c   | 354 ++++++++++++++++++++++++++++++++++++
 arch/s390/kvm/arm64/guest.c |  71 +++++++-
 arch/s390/kvm/arm64/guest.h |   5 +
 arch/s390/kvm/arm64/reset.c |  45 +++++
 arch/s390/kvm/arm64/reset.h |  11 ++
 5 files changed, 484 insertions(+), 2 deletions(-)
 create mode 100644 arch/s390/kvm/arm64/reset.c
 create mode 100644 arch/s390/kvm/arm64/reset.h

diff --git a/arch/s390/kvm/arm64/arm.c b/arch/s390/kvm/arm64/arm.c
index 77bc4a8841df..b629bef84eda 100644
--- a/arch/s390/kvm/arm64/arm.c
+++ b/arch/s390/kvm/arm64/arm.c
@@ -8,9 +8,17 @@
 #include
 #include
+#include
+#include
+#include
+
+#include
+#include
+
 #include
 #include "arm.h"
+#include "guest.h"
 #include "reset.h"

 int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
@@ -168,6 +176,22 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 {
 }

+void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	save_access_regs(&vcpu->arch.host_acrs[0]);
+	vcpu->cpu = cpu;
+
+	lasrm(&vcpu->arch.save_area);
+}
+
+void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
+{
+	stiasrm(&vcpu->arch.save_area);
+
+	vcpu->cpu = -1;
+	restore_access_regs(&vcpu->arch.host_acrs[0]);
+}
+
 int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
 				    struct kvm_mp_state *mp_state)
 {
@@ -191,12 +215,342 @@ unsigned long system_supported_vcpu_features(void)
 	return KVM_VCPU_VALID_FEATURES;
 }

+bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
+{
+	return vcpu_mode_priv(vcpu);
+}
+
+int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
+{
+	if (!kvm_vcpu_initialized(vcpu))
+		return -ENOEXEC;
+
+	if (!kvm_arm_vcpu_is_finalized(vcpu))
+		return -EPERM;
+
+	if (likely(READ_ONCE(vcpu->pid)))
+		return 0;
+
+	return 0;
+}
+
+/**
+ * check_vcpu_requests - check and handle pending vCPU requests
+ * @vcpu: the VCPU pointer
+ *
+ * Return: 1 if we should enter the guest
+ *	   0 if we should exit to userspace
+ *	   < 0 if we should exit to userspace, where the return value indicates
+ *	   an error
+ */
+static int check_vcpu_requests(struct kvm_vcpu *vcpu)
+{
+	if (kvm_request_pending(vcpu)) {
+		if (kvm_check_request(KVM_REQ_VCPU_RESET, vcpu))
+			kvm_reset_vcpu(vcpu);
+		/*
+		 * Clear IRQ_PENDING requests that were made to guarantee
+		 * that a VCPU sees new virtual interrupts.
+		 */
+		kvm_check_request(KVM_REQ_IRQ_PENDING, vcpu);
+	}
+
+	return 1;
+}
+
+static int kvm_vcpu_initialize(struct kvm_vcpu *vcpu,
+			       const struct kvm_vcpu_init *init)
+{
+	unsigned long features = init->features[0];
+	struct kvm *kvm = vcpu->kvm;
+	int ret = -EINVAL;
+
+	mutex_lock(&kvm->arch.config_lock);
+
+	if (test_bit(KVM_ARCH_FLAG_VCPU_FEATURES_CONFIGURED, &kvm->arch.flags) &&
+	    kvm_vcpu_init_changed(vcpu, init))
+		goto out_unlock;
+
+	bitmap_copy(kvm->arch.vcpu_features, &features, KVM_VCPU_MAX_FEATURES);
+
+	kvm_reset_vcpu(vcpu);
+
+	set_bit(KVM_ARCH_FLAG_VCPU_FEATURES_CONFIGURED, &kvm->arch.flags);
+	vcpu_set_flag(vcpu, VCPU_INITIALIZED);
+
+	ret = 0;
+out_unlock:
+	mutex_unlock(&kvm->arch.config_lock);
+	return ret;
+}
+
+static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
+			       const struct kvm_vcpu_init *init)
+{
+	int ret;
+
+	if (init->target != KVM_ARM_TARGET_GENERIC_V8)
+		return -EINVAL;
+
+	ret = kvm_vcpu_init_check_features(vcpu, init);
+	if (ret)
+		return ret;
+
+	if (!kvm_vcpu_initialized(vcpu))
+		return kvm_vcpu_initialize(vcpu, init);
+
+	if (kvm_vcpu_init_changed(vcpu, init))
+		return -EINVAL;
+
+	kvm_reset_vcpu(vcpu);
+
+	return 0;
+}
+
+static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
+					 struct kvm_vcpu_init *init)
+{
+	struct kvm_sae_save_area *save_area = &vcpu->arch.save_area;
+	struct kvm_sae_block *sae_block = &vcpu->arch.sae_block;
+	int ret;
+
+	sae_block->save_area = virt_to_phys(save_area);
+	save_area->sdo = virt_to_phys(sae_block);
+
+	vcpu_load(vcpu);
+
+	ret = kvm_vcpu_set_target(vcpu, init);
+	if (ret)
+		goto out_put;
+
+	vcpu_reset_hcr(vcpu);
+
+	spin_lock(&vcpu->arch.mp_state_lock);
+	WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
+	spin_unlock(&vcpu->arch.mp_state_lock);
+
+	ret = 0;
+out_put:
+	vcpu_put(vcpu);
+	return ret;
+}
+
 int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
 			  bool line_status)
 {
 	return 0;
 }

+static void adjust_pc(struct kvm_vcpu *vcpu)
+{
+	if (vcpu_get_flag(vcpu, INCREMENT_PC)) {
+		kvm_skip_instr(vcpu);
+		vcpu_clear_flag(vcpu, INCREMENT_PC);
+	}
+}
+
+static void arm_vcpu_run(struct kvm_vcpu *vcpu)
+{
+	struct kvm_sae_block *sae_block = &vcpu->arch.sae_block;
+
+	adjust_pc(vcpu);
+
+	local_irq_disable();
+	guest_enter_irqoff();
+	local_irq_enable();
+
+	sae_block->icptr = 0;
+
+	sae64a(sae_block);
+
+	local_irq_disable();
+	guest_exit_irqoff();
+	local_irq_enable();
+}
+
+/**
+ * kvm_arch_vcpu_ioctl_run() - run arm64 vCPU
+ * @vcpu: the VCPU pointer
+ *
+ * Execute arm64 guest instructions using SAE.
+ *
+ * Return: 1 enter the guest (should not be observed by userspace)
+ *	   0 exit to userspace
+ *	   < 0 exit to userspace, where the return value indicates an error
+ */
+int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+{
+	struct kvm_run *kvm_run = vcpu->run;
+	u8 icptr;
+	int ret;
+
+	if (kvm_run->exit_reason == KVM_EXIT_MMIO) {
+		ret = kvm_handle_mmio_return(vcpu);
+		if (ret <= 0)
+			return ret;
+	}
+
+	vcpu_load(vcpu);
+
+	if (!vcpu->wants_to_run) {
+		ret = -EINTR;
+		goto out;
+	}
+
+	kvm_sigset_activate(vcpu);
+
+	might_fault();
+
+	ret = 1;
+	do {
+		if (signal_pending(current)) {
+			kvm_run->exit_reason = KVM_EXIT_INTR;
+			ret = -EINTR;
+			continue;
+		}
+
+		if (need_resched())
+			schedule();
+
+		if (ret > 0)
+			ret = check_vcpu_requests(vcpu);
+
+		vcpu->arch.sae_block.icptr = 0;
+
+		arm_vcpu_run(vcpu);
+
+		icptr = vcpu->arch.sae_block.icptr;
+		switch (icptr) {
+		case SAE_ICPTR_SPURIOUS:
+			break;
+		case SAE_ICPTR_VALIDITY:
+			WARN_ONCE(true, "SAE: validity intercept. vir: 0x%04x",
+				  vcpu->arch.sae_block.vir);
+			ret = -EINVAL;
+			break;
+		case SAE_ICPTR_SYNCHRONOUS_EXCEPTION:
+			ret = handle_trap_exceptions(vcpu);
+			break;
+		default:
+			WARN_ONCE(true, "SAE: unknown interception reason 0x%02x", icptr);
+			ret = -EINVAL;
+		}
+	} while (ret > 0);
+
+	kvm_sigset_deactivate(vcpu);
+out:
+	if (unlikely(vcpu_get_flag(vcpu, INCREMENT_PC)))
+		adjust_pc(vcpu);
+
+	vcpu_put(vcpu);
+
+	return ret;
+}
+
+long kvm_arch_vcpu_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
+{
+	struct kvm_vcpu *vcpu = filp->private_data;
+	void __user *argp = (void __user *)arg;
+	struct kvm_device_attr attr;
+	int ret;
+
+	switch (ioctl) {
+	case KVM_ARM_VCPU_INIT: {
+		struct kvm_vcpu_init init;
+
+		ret = -EFAULT;
+		if (copy_from_user(&init, argp, sizeof(init)))
+			break;
+
+		ret = kvm_arch_vcpu_ioctl_vcpu_init(vcpu, &init);
+		break;
+	}
+	case KVM_SET_ONE_REG:
+	case KVM_GET_ONE_REG: {
+		struct kvm_one_reg reg;
+
+		ret = -ENOEXEC;
+		if (unlikely(!kvm_vcpu_initialized(vcpu)))
+			break;
+
+		ret = -EFAULT;
+		if (copy_from_user(&reg, argp, sizeof(reg)))
+			break;
+
+		if (kvm_check_request(KVM_REQ_VCPU_RESET, vcpu))
+			kvm_reset_vcpu(vcpu);
+
+		if (ioctl == KVM_SET_ONE_REG)
+			ret = kvm_arm_set_reg(vcpu, &reg);
+		else
+			ret = kvm_arm_get_reg(vcpu, &reg);
+		break;
+	}
+	case KVM_GET_REG_LIST: {
+		struct kvm_reg_list __user *user_list = argp;
+		struct kvm_reg_list reg_list;
+		unsigned int n;
+
+		ret = -ENOEXEC;
+		if (unlikely(!kvm_vcpu_initialized(vcpu)))
+			break;
+		ret = -EPERM;
+		if (!kvm_arm_vcpu_is_finalized(vcpu))
+			break;
+		ret = -EFAULT;
+		if (copy_from_user(&reg_list, user_list, sizeof(reg_list)))
+			break;
+		n = reg_list.n;
+		reg_list.n = kvm_arm_num_regs(vcpu);
+		if (copy_to_user(user_list, &reg_list, sizeof(reg_list)))
+			break;
+		ret = -E2BIG;
+		if (n < reg_list.n)
+			break;
+		ret = kvm_arm_copy_reg_indices(vcpu, user_list->reg);
+		break;
+	}
+	case KVM_ARM_VCPU_FINALIZE: {
+		int what;
+
+		if (!kvm_vcpu_initialized(vcpu))
+			return -ENOEXEC;
+
+		if (get_user(what, (const int __user *)argp))
+			return -EFAULT;
+
+		ret = kvm_arm_vcpu_finalize(vcpu, what);
+		break;
+	}
+	case KVM_SET_DEVICE_ATTR: {
+		ret = -EFAULT;
+		if (copy_from_user(&attr, argp, sizeof(attr)))
+			break;
+		ret = kvm_arm_vcpu_set_attr(vcpu, &attr);
+		break;
+	}
+	case KVM_GET_DEVICE_ATTR: {
+		ret = -EFAULT;
+		if (copy_from_user(&attr, argp, sizeof(attr)))
+			break;
+		ret = kvm_arm_vcpu_get_attr(vcpu, &attr);
+		break;
+	}
+	case KVM_HAS_DEVICE_ATTR: {
+		ret = -EFAULT;
+		if (copy_from_user(&attr, argp, sizeof(attr)))
+			break;
+		ret = kvm_arm_vcpu_has_attr(vcpu, &attr);
+		break;
+	}
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 {
diff --git a/arch/s390/kvm/arm64/guest.c b/arch/s390/kvm/arm64/guest.c
index 00886755accf..893d48037292 100644
--- a/arch/s390/kvm/arm64/guest.c
+++ b/arch/s390/kvm/arm64/guest.c
@@ -4,7 +4,7 @@

 #include "guest.h"

-const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
+const struct kvm_stats_desc kvm_vm_stats_desc[] = {
 	KVM_GENERIC_VM_STATS()
 };

@@ -17,7 +17,7 @@ const struct kvm_stats_header kvm_vm_stats_header = {
 	       sizeof(kvm_vm_stats_desc),
 };

-const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
+const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
 	KVM_GENERIC_VCPU_STATS(),
 	/* ARM64 stats */
 	STATS_DESC_COUNTER(VCPU, hvc_exit_stat),
@@ -50,6 +50,73 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
 	return num_core_regs(vcpu);
 }

+int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	/* We currently use nothing arch-specific in upper 32 bits */
+	if ((reg->id & ~KVM_REG_SIZE_MASK) >> 32 != KVM_REG_ARM64 >> 32)
+		return -EINVAL;
+
+	switch (reg->id & KVM_REG_ARM_COPROC_MASK) {
+	case KVM_REG_ARM_CORE:
+		return get_core_reg(vcpu, reg);
+	default:
+		return -EINVAL;
+	}
+}
+
+int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	/* We currently use nothing arch-specific in upper 32 bits */
+	if ((reg->id & ~KVM_REG_SIZE_MASK) >> 32 != KVM_REG_ARM64 >> 32)
+		return -EINVAL;
+
+	switch (reg->id & KVM_REG_ARM_COPROC_MASK) {
+	case KVM_REG_ARM_CORE:
+		return set_core_reg(vcpu, reg);
+	default:
+		return -EINVAL;
+	}
+}
+
+int kvm_arm_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+{
+	int ret;
+
+	switch (attr->group) {
+	default:
+		ret = -ENXIO;
+		break;
+	}
+
+	return ret;
+}
+
+int kvm_arm_vcpu_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+{
+	int ret;
+
+	switch (attr->group) {
+	default:
+		ret = -ENXIO;
+		break;
+	}
+
+	return ret;
+}
+
+int kvm_arm_vcpu_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+{
+	int ret;
+
+	switch (attr->group) {
+	default:
+		ret = -ENXIO;
+		break;
+	}
+
+	return ret;
+}
+
 int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 {
 	return -EINVAL;
diff --git a/arch/s390/kvm/arm64/guest.h b/arch/s390/kvm/arm64/guest.h
index db635d513c2c..847489fb81be 100644
--- a/arch/s390/kvm/arm64/guest.h
+++ b/arch/s390/kvm/arm64/guest.h
@@ -6,5 +6,10 @@
 #include

 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
+int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
+int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
+int kvm_arm_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);
+int kvm_arm_vcpu_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);
+int kvm_arm_vcpu_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);

 #endif /* KVM_ARM_GUEST_H */
diff --git a/arch/s390/kvm/arm64/reset.c b/arch/s390/kvm/arm64/reset.c
new file mode 100644
index 000000000000..9a12d5f19f6a
--- /dev/null
+++ b/arch/s390/kvm/arm64/reset.c
@@ -0,0 +1,45 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include
+#include
+#include
+
+#include "reset.h"
+
+bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
+{
+	return true;
+}
+
+void kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_reset_state reset_state;
+
+	spin_lock(&vcpu->arch.mp_state_lock);
+	reset_state = vcpu->arch.reset_state;
+	vcpu->arch.reset_state.reset = false;
+	spin_unlock(&vcpu->arch.mp_state_lock);
+
+	/*
+	 * Disable preemption around the vCPU reset as we might otherwise race
+	 * with preempt notifiers which call stiasrm/lasrm from put/load.
+	 */
+	preempt_disable();
+
+	kvm_reset_vcpu_core_regs(vcpu);
+
+	if (reset_state.reset) {
+		*vcpu_pc(vcpu) = reset_state.pc;
+		vcpu_clear_flag(vcpu, PENDING_EXCEPTION);
+		vcpu_clear_flag(vcpu, EXCEPT_MASK);
+		vcpu_clear_flag(vcpu, INCREMENT_PC);
+		vcpu_set_reg(vcpu, 0, reset_state.r0);
+	}
+
+	preempt_enable();
+}
+
+int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature)
+{
+	return 0;
+}
diff --git a/arch/s390/kvm/arm64/reset.h b/arch/s390/kvm/arm64/reset.h
new file mode 100644
index 000000000000..a5c5304e47bc
--- /dev/null
+++ b/arch/s390/kvm/arm64/reset.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef KVM_ARM_RESET_H
+#define KVM_ARM_RESET_H
+
+#include
+
+bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
+void kvm_reset_vcpu(struct kvm_vcpu *vcpu);
+int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
+
+#endif /* KVM_ARM_RESET_H */
-- 
2.51.0