From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Andrew Jones, Mark Rutland, Marc Zyngier, Sasha Levin,
    kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Subject: [PATCH AUTOSEL 4.19 20/29] KVM: arm/arm64: Only skip MMIO insn once
Date: Thu, 29 Aug 2019 06:50:00 -0400
Message-Id: <20190829105009.2265-20-sashal@kernel.org>
In-Reply-To: <20190829105009.2265-1-sashal@kernel.org>
References: <20190829105009.2265-1-sashal@kernel.org>

From: Andrew Jones

[ Upstream commit 2113c5f62b7423e4a72b890bd479704aa85c81ba ]

If after an MMIO exit to userspace a VCPU is immediately run with an
immediate_exit request, such as when a signal is delivered or an MMIO
emulation completion is needed, then the VCPU completes the MMIO
emulation and immediately returns to userspace. As the exit_reason does
not get changed from KVM_EXIT_MMIO in these cases, we have to be careful
not to complete the MMIO emulation again when the VCPU is eventually run
again, because the emulation does an instruction skip (and doing too many
skips would be a waste of guest code :-). We need to use additional VCPU
state to track whether the emulation is complete.
As luck would have it, we already have 'mmio_needed', which even appears
to be used in this way by other architectures already.

Fixes: 0d640732dbeb ("arm64: KVM: Skip MMIO insn after emulation")
Acked-by: Mark Rutland
Signed-off-by: Andrew Jones
Signed-off-by: Marc Zyngier
Signed-off-by: Sasha Levin
---
 virt/kvm/arm/mmio.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/virt/kvm/arm/mmio.c b/virt/kvm/arm/mmio.c
index 08443a15e6be8..3caee91bca089 100644
--- a/virt/kvm/arm/mmio.c
+++ b/virt/kvm/arm/mmio.c
@@ -98,6 +98,12 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	unsigned int len;
 	int mask;
 
+	/* Detect an already handled MMIO return */
+	if (unlikely(!vcpu->mmio_needed))
+		return 0;
+
+	vcpu->mmio_needed = 0;
+
 	if (!run->mmio.is_write) {
 		len = run->mmio.len;
 		if (len > sizeof(unsigned long))
@@ -200,6 +206,7 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	run->mmio.is_write	= is_write;
 	run->mmio.phys_addr	= fault_ipa;
 	run->mmio.len		= len;
+	vcpu->mmio_needed	= 1;
 
 	if (!ret) {
 		/* We handled the access successfully in the kernel. */
-- 
2.20.1
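
[ Note: what follows is a minimal, standalone sketch of the guard pattern
  the patch relies on, not the kernel code itself. The names fake_vcpu,
  prepare_mmio_exit() and handle_mmio_return() are hypothetical and used
  only for illustration. The idea is that a "needed" flag is set when the
  MMIO exit is prepared and consumed by the first completion, so a repeated
  completion on an immediate_exit re-run becomes a no-op and the guest
  instruction is skipped exactly once. ]

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the real struct kvm_vcpu. */
struct fake_vcpu {
	bool mmio_needed;	/* MMIO emulation still pending? */
	unsigned long pc;	/* pretend guest program counter */
};

/*
 * Runs when the in-kernel handlers could not emulate the access:
 * record that userspace must complete the emulation.
 */
static void prepare_mmio_exit(struct fake_vcpu *vcpu)
{
	vcpu->mmio_needed = true;	/* mirrors "vcpu->mmio_needed = 1;" */
}

/* Runs on every re-entry while exit_reason still says "MMIO exit". */
static int handle_mmio_return(struct fake_vcpu *vcpu)
{
	if (!vcpu->mmio_needed)		/* already completed: do nothing */
		return 0;
	vcpu->mmio_needed = false;

	vcpu->pc += 4;			/* the one-time instruction skip */
	return 0;
}

int main(void)
{
	struct fake_vcpu vcpu = { .mmio_needed = false, .pc = 0x1000 };

	prepare_mmio_exit(&vcpu);
	handle_mmio_return(&vcpu);	/* first run after the exit: skips */
	handle_mmio_return(&vcpu);	/* immediate_exit re-run: now a no-op */

	/* Prints 0x1004; without the flag the pc would have reached 0x1008
	 * and a guest instruction would have been silently lost. */
	printf("pc = 0x%lx\n", vcpu.pc);
	return 0;
}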