Date: Thu, 10 Oct 2024 01:47:04 -0700
From: Oliver Upton
To: Marc Zyngier
Cc: Sean Christopherson, kvmarm@lists.linux.dev,
 linux-arm-kernel@lists.infradead.org, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, stable@vger.kernel.org, Alexander Potapenko
Subject: Re: [PATCH] KVM: arm64: Don't eagerly teardown the vgic on init error
References: <20241009183603.3221824-1-maz@kernel.org>
 <875xq0v1do.wl-maz@kernel.org>
In-Reply-To: <875xq0v1do.wl-maz@kernel.org>

On Thu, Oct 10, 2024 at 08:54:43AM +0100, Marc Zyngier wrote:
> On Thu, 10 Oct 2024 00:27:46 +0100, Oliver Upton wrote:
> > Then if we can't register the MMIO region for the distributor
> > everything comes crashing down and a vCPU has made it into the KVM_RUN
> > loop w/ the VGIC-shaped rug pulled out from under it. There's definitely
> > another functional bug here where a vCPU's attempts to poke the
> > distributor wind up reaching userspace as MMIO exits. But we can worry
> > about that another day.
>
> I don't think that one is that bad.
> Userspace got us here, and it now sees an MMIO exit for something it
> is not prepared to handle. Suck it up and die (on a black size M
> t-shirt, please).

LOL, I'll remember that.

The situation I have in mind is a bit harder to blame on userspace,
though. Supposing that the whole VM was set up correctly, multiple vCPUs
entering KVM_RUN concurrently could cause this race and have
'unexpected' MMIO exits go out to userspace:

  vcpu-0                                vcpu-1
  ======                                ======
  kvm_vgic_map_resources()
    dist->ready = true
    mutex_unlock(config_lock)
                                        kvm_vgic_map_resources()
                                          if (vgic_ready())
                                            return 0

                                        < enter guest >
                                        typer = writel(0, GICD_CTLR)
                                        < data abort >
                                        kvm_io_bus_write(...) <= No GICD,
                                                                 out to userspace

  vgic_register_dist_iodev()

A small but stupid window to race with.

> > If memory serves, kvm_vgic_map_resources() used to do all of this behind
> > the config_lock to cure the race, but that wound up inverting lock
> > ordering on srcu.
>
> Probably something like that. We also used to hold the kvm lock, which
> made everything much simpler, but awfully wrong.
>
> > Note to self: Impose strict ordering on GIC initialization v. vCPU
> > creation if/when we get a new flavor of irqchip.
>
> One of the things we should have done when introducing GICv3 is to
> impose that at KVM_DEV_ARM_VGIC_CTRL_INIT, the GIC memory map is
> final. I remember some push-back on the QEMU side of things, as they
> like to decouple things, but this has proved to be a nightmare.

Pushing more of the initialization complexity into userspace feels like
the right thing. Since we clearly have no idea what we're doing :)

> > The crappy assumption here is kvm_arch_vcpu_run_pid_change() and its
> > callees are allowed to destroy VM-scoped structures in error handling.
>
> I think this is symptomatic of a more general issue: we perform VM-wide
> configuration in the context of a vcpu. We have tons of this stuff to
> paper over the lack of a "this VM is fully configured" barrier.
>
> I wonder whether we could sidestep things by punting the finalisation
> of the VM to a different context (workqueue?) and simply return
> -EAGAIN or -EINTR to userspace while we're processing it. That doesn't
> solve the "I'm missing parts of the address map and I'm going to die"
> part though.

Throwing it back at userspace would be nice, but unfortunately for ABI
I think we need to block/spin vCPUs in the kernel until the VM is in a
fully working condition. A fragile userspace could explode on a
'spurious' EAGAIN/EINTR where there wasn't one before.

-- 
Thanks,
Oliver