From mboxrd@z Thu Jan  1 00:00:00 1970
From: Marc Zyngier <maz@kernel.org>
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: will@kernel.org, qperret@google.com, dbrazdil@google.com,
	Srivatsa Vaddagiri <vatsa@codeaurora.org>,
	Shanker R Donthineni <sdonthineni@nvidia.com>,
	James Morse <james.morse@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	kernel-team@android.com
Subject: [PATCH 04/16] KVM: arm64: Add MMIO checking infrastructure
Date: Thu, 15 Jul 2021 17:31:47 +0100
Message-Id: <20210715163159.1480168-5-maz@kernel.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210715163159.1480168-1-maz@kernel.org>
References: <20210715163159.1480168-1-maz@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
List-Id: <linux-arm-kernel.lists.infradead.org>
Sender: "linux-arm-kernel" <linux-arm-kernel-bounces@lists.infradead.org>

Introduce the infrastructure required to identify an IPA region that
is expected to be used as an MMIO window. This includes mapping,
unmapping and checking the regions. Nothing calls into it yet, so no
expected functional change.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h |   2 +
 arch/arm64/include/asm/kvm_mmu.h  |   5 ++
 arch/arm64/kvm/mmu.c              | 115 ++++++++++++++++++++++++++++++
 3 files changed, 122 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4add6c27251f..914c1b7bb3ad 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -125,6 +125,8 @@ struct kvm_arch {
 #define KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER	0
 	/* Memory Tagging Extension enabled for the guest */
 #define KVM_ARCH_FLAG_MTE_ENABLED			1
+	/* Guest has bought into the MMIO guard extension */
+#define KVM_ARCH_FLAG_MMIO_GUARD			2
 	unsigned long flags;
 
 	/*
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index b52c5c4b9a3d..f6b8fc1671b3 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -170,6 +170,11 @@ phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
 int kvm_mmu_init(u32 *hyp_va_bits);
 
+/* MMIO guard */
+bool kvm_install_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa);
+bool kvm_remove_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa);
+bool kvm_check_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa);
+
 static inline void *__kvm_vector_slot2addr(void *base,
 					   enum arm64_hyp_spectre_vector slot)
 {
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 3155c9e778f0..638827c8842b 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1120,6 +1120,121 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 	kvm_set_pfn_accessed(pte_pfn(pte));
 }
 
+#define MMIO_NOTE	('M' << 24 | 'M' << 16 | 'I' << 8 | '0')
+
+bool kvm_install_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa)
+{
+	struct kvm_mmu_memory_cache *memcache;
+	struct kvm_memory_slot *memslot;
+	int ret, idx;
+
+	if (!test_bit(KVM_ARCH_FLAG_MMIO_GUARD, &vcpu->kvm->arch.flags))
+		return false;
+
+	/* Must be page-aligned */
+	if (ipa & ~PAGE_MASK)
+		return false;
+
+	/*
+	 * The page cannot be in a memslot. At some point, this will
+	 * have to deal with device mappings though.
+	 */
+	idx = srcu_read_lock(&vcpu->kvm->srcu);
+	memslot = gfn_to_memslot(vcpu->kvm, ipa >> PAGE_SHIFT);
+	srcu_read_unlock(&vcpu->kvm->srcu, idx);
+
+	if (memslot)
+		return false;
+
+	/* Guest has direct access to the GICv2 virtual CPU interface */
+	if (irqchip_in_kernel(vcpu->kvm) &&
+	    vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V2 &&
+	    ipa == vcpu->kvm->arch.vgic.vgic_cpu_base)
+		return true;
+
+	memcache = &vcpu->arch.mmu_page_cache;
+	if (kvm_mmu_topup_memory_cache(memcache,
+				       kvm_mmu_cache_min_pages(vcpu->kvm)))
+		return false;
+
+	spin_lock(&vcpu->kvm->mmu_lock);
+	ret = kvm_pgtable_stage2_annotate(vcpu->arch.hw_mmu->pgt,
+					  ipa, PAGE_SIZE, memcache,
+					  MMIO_NOTE);
+	spin_unlock(&vcpu->kvm->mmu_lock);
+
+	return ret == 0;
+}
+
+struct s2_walk_data {
+	kvm_pte_t	pteval;
+	u32		level;
+};
+
+static int s2_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
+		     enum kvm_pgtable_walk_flags flag, void * const arg)
+{
+	struct s2_walk_data *data = arg;
+
+	data->level = level;
+	data->pteval = *ptep;
+	return 0;
+}
+
+/* Assumes mmu_lock taken */
+static bool __check_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa)
+{
+	struct s2_walk_data data;
+	struct kvm_pgtable_walker walker = {
+		.cb	= s2_walker,
+		.flags	= KVM_PGTABLE_WALK_LEAF,
+		.arg	= &data,
+	};
+
+	kvm_pgtable_walk(vcpu->arch.hw_mmu->pgt, ALIGN_DOWN(ipa, PAGE_SIZE),
+			 PAGE_SIZE, &walker);
+
+	/* Must be a PAGE_SIZE mapping with our annotation */
+	return (BIT(ARM64_HW_PGTABLE_LEVEL_SHIFT(data.level)) == PAGE_SIZE &&
+		data.pteval == MMIO_NOTE);
+}
+
+bool kvm_remove_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa)
+{
+	bool ret;
+
+	if (!test_bit(KVM_ARCH_FLAG_MMIO_GUARD, &vcpu->kvm->arch.flags))
+		return false;
+
+	/* Keep the PT locked across the two walks */
+	spin_lock(&vcpu->kvm->mmu_lock);
+
+	ret = __check_ioguard_page(vcpu, ipa);
+	if (ret)		/* Drop the annotation */
+		kvm_pgtable_stage2_unmap(vcpu->arch.hw_mmu->pgt,
+					 ALIGN_DOWN(ipa, PAGE_SIZE), PAGE_SIZE);
+
+	spin_unlock(&vcpu->kvm->mmu_lock);
+	return ret;
+}
+
+bool kvm_check_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa)
+{
+	bool ret;
+
+	if (!test_bit(KVM_ARCH_FLAG_MMIO_GUARD, &vcpu->kvm->arch.flags))
+		return true;
+
+	spin_lock(&vcpu->kvm->mmu_lock);
+	ret = __check_ioguard_page(vcpu, ipa & PAGE_MASK);
+	spin_unlock(&vcpu->kvm->mmu_lock);
+
+	if (!ret)
+		kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
+
+	return ret;
+}
+
 /**
  * kvm_handle_guest_abort - handles all 2nd stage aborts
  * @vcpu:	the VCPU pointer
-- 
2.30.2

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel