From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Andrew Jones, Mark Rutland, Marc Zyngier, Sasha Levin, kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Subject: [PATCH AUTOSEL 4.9 7/8] KVM: arm/arm64: Only skip MMIO insn once
Date: Thu, 29 Aug 2019 06:50:59 -0400
Message-Id: <20190829105100.2649-7-sashal@kernel.org>
In-Reply-To: <20190829105100.2649-1-sashal@kernel.org>
References: <20190829105100.2649-1-sashal@kernel.org>
X-Mailing-List: stable@vger.kernel.org

From: Andrew Jones

[ Upstream commit 2113c5f62b7423e4a72b890bd479704aa85c81ba ]

If after an MMIO exit to userspace a VCPU is immediately run with an
immediate_exit request, such as when a signal is delivered or an MMIO
emulation completion is needed, then the VCPU completes the MMIO
emulation and immediately returns to userspace. As the exit_reason does
not get changed from KVM_EXIT_MMIO in these cases we have to be careful
not to complete the MMIO emulation again, when the VCPU is eventually
run again, because the emulation does an instruction skip (and doing
too many skips would be a waste of guest code :-) We need to use
additional VCPU state to track if the emulation is complete.
As luck would have it, we already have 'mmio_needed', which even
appears to be used in this way by other architectures already.

Fixes: 0d640732dbeb ("arm64: KVM: Skip MMIO insn after emulation")
Acked-by: Mark Rutland
Signed-off-by: Andrew Jones
Signed-off-by: Marc Zyngier
Signed-off-by: Sasha Levin
---
 arch/arm/kvm/mmio.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
index 08443a15e6be8..3caee91bca089 100644
--- a/arch/arm/kvm/mmio.c
+++ b/arch/arm/kvm/mmio.c
@@ -98,6 +98,12 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	unsigned int len;
 	int mask;
 
+	/* Detect an already handled MMIO return */
+	if (unlikely(!vcpu->mmio_needed))
+		return 0;
+
+	vcpu->mmio_needed = 0;
+
 	if (!run->mmio.is_write) {
 		len = run->mmio.len;
 		if (len > sizeof(unsigned long))
@@ -200,6 +206,7 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	run->mmio.is_write = is_write;
 	run->mmio.phys_addr = fault_ipa;
 	run->mmio.len = len;
+	vcpu->mmio_needed = 1;
 
 	if (!ret) {
 		/* We handled the access successfully in the kernel. */
-- 
2.20.1
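
[Editor's note] For context on the pattern the backport introduces, below is a
minimal, self-contained user-space sketch of the "complete the MMIO emulation
only once" guard. It is illustrative only: the fake_* names, the pc field and
the pc += 4 skip are assumptions standing in for the real kvm_vcpu/kvm_run
state and the instruction-skip logic in arch/arm/kvm/mmio.c.

/*
 * Hypothetical sketch (not kernel code): an mmio_needed-style flag makes
 * the completion path idempotent, so running the VCPU again while the
 * exit_reason is still KVM_EXIT_MMIO does not skip a second instruction.
 */
#include <stdio.h>

struct fake_vcpu {
	int mmio_needed;	/* set when an MMIO access is handed to userspace */
	unsigned long pc;	/* stand-in for the guest program counter */
};

/* Deferring an MMIO access to userspace: mark completion as pending. */
static void fake_io_mem_abort(struct fake_vcpu *vcpu)
{
	vcpu->mmio_needed = 1;
}

/* Run on re-entry: must complete the emulation (and skip the insn) once. */
static int fake_handle_mmio_return(struct fake_vcpu *vcpu)
{
	if (!vcpu->mmio_needed)		/* already handled: nothing to do */
		return 0;
	vcpu->mmio_needed = 0;

	vcpu->pc += 4;			/* skip the trapped instruction exactly once */
	return 0;
}

int main(void)
{
	struct fake_vcpu vcpu = { .mmio_needed = 0, .pc = 0x1000 };

	fake_io_mem_abort(&vcpu);	/* MMIO exit to userspace */
	fake_handle_mmio_return(&vcpu);	/* first re-entry: completes, skips once */
	fake_handle_mmio_return(&vcpu);	/* immediate_exit re-entry: now a no-op */

	printf("pc = 0x%lx (advanced once, not twice)\n", vcpu.pc);
	return 0;
}

Without the early return, the second call would advance pc again, which is
exactly the double instruction skip the patch prevents.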