From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 10 Sep 2023 19:18:37 +0100
Message-ID: <87ledd51tu.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Zenghui Yu
Cc: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	kvm@vger.kernel.org, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Xu Zhao
Subject: Re: [PATCH 4/5] KVM: arm64: vgic-v3: Refactor GICv3 SGI generation
References: <20230907100931.1186690-1-maz@kernel.org>
	<20230907100931.1186690-5-maz@kernel.org>

On Sun, 10 Sep 2023 17:25:36 +0100,
Zenghui Yu wrote:
>
> Hi Marc,
>
> On 2023/9/7 18:09, Marc Zyngier wrote:
> > As we're about to change the way SGIs are sent, start by splitting
> > out some of the basic functionality: instead of intermingling
> > the broadcast and non-broadcast cases with the actual SGI generation,
> > perform the following cleanups:
> >
> > - move the SGI queuing into its own helper
> > - split the broadcast code from the affinity-driven code
> > - replace the mask/shift combinations with FIELD_GET()
> >
> > The result is much more readable, and paves the way for further
> > optimisations.
>
> Indeed!
> > @@ -1070,19 +1102,30 @@ void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1)
> >  {
> >  	struct kvm *kvm = vcpu->kvm;
> >  	struct kvm_vcpu *c_vcpu;
> > -	u16 target_cpus;
> > +	unsigned long target_cpus;
> >  	u64 mpidr;
> > -	int sgi;
> > -	int vcpu_id = vcpu->vcpu_id;
> > -	bool broadcast;
> > -	unsigned long c, flags;
> > -
> > -	sgi = (reg & ICC_SGI1R_SGI_ID_MASK) >> ICC_SGI1R_SGI_ID_SHIFT;
> > -	broadcast = reg & BIT_ULL(ICC_SGI1R_IRQ_ROUTING_MODE_BIT);
> > -	target_cpus = (reg & ICC_SGI1R_TARGET_LIST_MASK) >> ICC_SGI1R_TARGET_LIST_SHIFT;
> > +	u32 sgi;
> > +	unsigned long c;
> > +
> > +	sgi = FIELD_GET(ICC_SGI1R_SGI_ID_MASK, reg);
> > +
> > +	/* Broadcast */
> > +	if (unlikely(reg & BIT_ULL(ICC_SGI1R_IRQ_ROUTING_MODE_BIT))) {
> > +		kvm_for_each_vcpu(c, c_vcpu, kvm) {
> > +			/* Don't signal the calling VCPU */
> > +			if (c_vcpu == vcpu)
> > +				continue;
> > +
> > +			vgic_v3_queue_sgi(c_vcpu, sgi, allow_group1);
> > +		}
> > +
> > +		return;
> > +	}
> > +
> >  	mpidr = SGI_AFFINITY_LEVEL(reg, 3);
> >  	mpidr |= SGI_AFFINITY_LEVEL(reg, 2);
> >  	mpidr |= SGI_AFFINITY_LEVEL(reg, 1);
> > +	target_cpus = FIELD_GET(ICC_SGI1R_TARGET_LIST_MASK, reg);
> >  
> >  	/*
> >  	 * We iterate over all VCPUs to find the MPIDRs matching the request.
> > @@ -1091,54 +1134,19 @@ void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1)
> >  	 * VCPUs when most of the times we just signal a single VCPU.
> >  	 */
> >  	kvm_for_each_vcpu(c, c_vcpu, kvm) {
> > -		struct vgic_irq *irq;
> > +		int level0;
> >  
> >  		/* Exit early if we have dealt with all requested CPUs */
> > -		if (!broadcast && target_cpus == 0)
> > +		if (target_cpus == 0)
> >  			break;
> > -
> > -		/* Don't signal the calling VCPU */
> > -		if (broadcast && c == vcpu_id)
>
> Unrelated to this patch, but it looks that we were comparing the value
> of *vcpu_idx* and vcpu_id to skip the calling VCPU.

Huh, well caught. That was definitely a bug that was there for ever,
and only you spotted it.
Guess I should flag it as a stable candidate.

> Is there a rule in KVM that userspace should invoke KVM_CREATE_VCPU
> with sequential "vcpu id"s, starting at 0, so that the user-provided
> vcpu_id always equals to the KVM-internal vcpu_idx for a given VCPU?

I don't think there is any such rule. As far as I can tell, any number
will do as long as it is within the range [0, max_vcpu_id). Of course,
max_vcpu_id doesn't even exist on arm64. From what I can tell, this is
just some random number between 0 and 511 for us (GICv2
notwithstanding).

> I asked because it seems that in kvm/arm64 we always use
> kvm_get_vcpu(kvm, i) to obtain the kvm_vcpu pointer, even if *i* is
> sometimes essentially provided by userspace..

Huh, this is incredibly dodgy. I had a go at a few occurrences (see
below), but this is hardly a complete list.

> Besides, the refactor itself looks good to me.

Cool, thanks!

	M.

diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index 6dcdae4d38cb..e32c867e7b48 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -458,7 +458,7 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
 				   timer_ctx->irq.level);
 
 	if (!userspace_irqchip(vcpu->kvm)) {
-		ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
+		ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_idx,
 					  timer_irq(timer_ctx),
 					  timer_ctx->irq.level,
 					  timer_ctx);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a3b13281d38a..1f7b074b81df 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -439,9 +439,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	 * We might get preempted before the vCPU actually runs, but
 	 * over-invalidation doesn't affect correctness.
 	 */
-	if (*last_ran != vcpu->vcpu_id) {
+	if (*last_ran != vcpu->vcpu_idx) {
 		kvm_call_hyp(__kvm_flush_cpu_context, mmu);
-		*last_ran = vcpu->vcpu_id;
+		*last_ran = vcpu->vcpu_idx;
 	}
 
 	vcpu->cpu = cpu;
@@ -1207,7 +1207,7 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
 		if (vcpu_idx >= nrcpus)
 			return -EINVAL;
 
-		vcpu = kvm_get_vcpu(kvm, vcpu_idx);
+		vcpu = kvm_get_vcpu_by_id(kvm, vcpu_idx);
 		if (!vcpu)
 			return -EINVAL;
 
@@ -1222,14 +1222,14 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
 		if (vcpu_idx >= nrcpus)
 			return -EINVAL;
 
-		vcpu = kvm_get_vcpu(kvm, vcpu_idx);
+		vcpu = kvm_get_vcpu_by_id(kvm, vcpu_idx);
 		if (!vcpu)
 			return -EINVAL;
 
 		if (irq_num < VGIC_NR_SGIS || irq_num >= VGIC_NR_PRIVATE_IRQS)
 			return -EINVAL;
 
-		return kvm_vgic_inject_irq(kvm, vcpu->vcpu_id, irq_num, level, NULL);
+		return kvm_vgic_inject_irq(kvm, vcpu->vcpu_idx, irq_num, level, NULL);
 	case KVM_ARM_IRQ_TYPE_SPI:
 		if (!irqchip_in_kernel(kvm))
 			return -ENXIO;
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 6b066e04dc5d..4448940b6d79 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -348,7 +348,7 @@ static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
 	pmu->irq_level = overflow;
 
 	if (likely(irqchip_in_kernel(vcpu->kvm))) {
-		int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
+		int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_idx,
 					      pmu->irq_num, overflow, pmu);
 		WARN_ON(ret);
 	}
diff --git a/arch/arm64/kvm/vgic/vgic-debug.c b/arch/arm64/kvm/vgic/vgic-debug.c
index 07aa0437125a..85606a531dc3 100644
--- a/arch/arm64/kvm/vgic/vgic-debug.c
+++ b/arch/arm64/kvm/vgic/vgic-debug.c
@@ -166,7 +166,7 @@ static void print_header(struct seq_file *s, struct vgic_irq *irq,
 
 	if (vcpu) {
 		hdr = "VCPU";
-		id = vcpu->vcpu_id;
+		id = vcpu->vcpu_idx;
 	}
 
 	seq_printf(s, "\n");
@@ -212,7 +212,7 @@ static void print_irq_state(struct seq_file *s, struct vgic_irq *irq,
 		      "  %2d "
 		      "\n",
 		      type, irq->intid,
-		      (irq->target_vcpu) ? irq->target_vcpu->vcpu_id : -1,
+		      (irq->target_vcpu) ? irq->target_vcpu->vcpu_idx : -1,
 		      pending,
 		      irq->line_level,
 		      irq->active,
@@ -224,7 +224,7 @@ static void print_irq_state(struct seq_file *s, struct vgic_irq *irq,
 		      irq->mpidr,
 		      irq->source,
 		      irq->priority,
-		      (irq->vcpu) ? irq->vcpu->vcpu_id : -1);
+		      (irq->vcpu) ? irq->vcpu->vcpu_idx : -1);
 }
 
 static int vgic_debug_show(struct seq_file *s, void *v)
diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
index 212b73a715c1..82b264ad68c4 100644
--- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
+++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
@@ -345,7 +345,7 @@ int vgic_v2_parse_attr(struct kvm_device *dev, struct kvm_device_attr *attr,
 	if (cpuid >= atomic_read(&dev->kvm->online_vcpus))
 		return -EINVAL;
 
-	reg_attr->vcpu = kvm_get_vcpu(dev->kvm, cpuid);
+	reg_attr->vcpu = kvm_get_vcpu_by_id(dev->kvm, cpuid);
 	reg_attr->addr = attr->attr & KVM_DEV_ARM_VGIC_OFFSET_MASK;
 
 	return 0;

-- 
Without deviation from the norm, progress is not possible.