Date: Tue, 20 Sep 2022 20:51:35 -0700
In-Reply-To: <20220921035140.57513-1-pcc@google.com>
Message-Id: <20220921035140.57513-4-pcc@google.com>
Mime-Version: 1.0
References: <20220921035140.57513-1-pcc@google.com>
X-Mailer: git-send-email 2.37.3.968.ga6b4b080e4-goog
Subject: [PATCH v4 3/8] KVM: arm64: Simplify the sanitise_mte_tags() logic
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier,
 Evgenii Stepanov, kvm@vger.kernel.org, Steven Price, Vincenzo Frascino,
 Peter Collingbourne
Content-Type: text/plain; charset="UTF-8"
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org

From: Catalin Marinas

Currently sanitise_mte_tags() checks if it's an online page before
attempting to sanitise the tags. Such detection should be done in the
caller via the VM_MTE_ALLOWED vma flag. Since kvm_set_spte_gfn() does
not have the vma, leave the page unmapped if not already tagged. Tag
initialisation will be done on a subsequent access fault in
user_mem_abort().
Signed-off-by: Catalin Marinas
[pcc@google.com: fix the page initializer]
Signed-off-by: Peter Collingbourne
Reviewed-by: Steven Price
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Peter Collingbourne
---
 arch/arm64/kvm/mmu.c | 40 +++++++++++++++-------------------------
 1 file changed, 15 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 012ed1bc0762..5a131f009cf9 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1056,23 +1056,14 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
  * - mmap_lock protects between a VM faulting a page in and the VMM performing
  *   an mprotect() to add VM_MTE
  */
-static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
-			     unsigned long size)
+static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
+			      unsigned long size)
 {
 	unsigned long i, nr_pages = size >> PAGE_SHIFT;
-	struct page *page;
+	struct page *page = pfn_to_page(pfn);
 
 	if (!kvm_has_mte(kvm))
-		return 0;
-
-	/*
-	 * pfn_to_online_page() is used to reject ZONE_DEVICE pages
-	 * that may not support tags.
-	 */
-	page = pfn_to_online_page(pfn);
-
-	if (!page)
-		return -EFAULT;
+		return;
 
 	for (i = 0; i < nr_pages; i++, page++) {
 		if (!page_mte_tagged(page)) {
@@ -1080,8 +1071,6 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 			set_page_mte_tagged(page);
 		}
 	}
-
-	return 0;
 }
 
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
@@ -1092,7 +1081,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	bool write_fault, writable, force_pte = false;
 	bool exec_fault;
 	bool device = false;
-	bool shared;
 	unsigned long mmu_seq;
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
@@ -1142,8 +1130,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		vma_shift = get_vma_page_shift(vma, hva);
 	}
 
-	shared = (vma->vm_flags & VM_SHARED);
-
 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SHIFT:
@@ -1264,12 +1250,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
 		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
-		if (!shared)
-			ret = sanitise_mte_tags(kvm, pfn, vma_pagesize);
-		else
+		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
+		    !(vma->vm_flags & VM_SHARED)) {
+			sanitise_mte_tags(kvm, pfn, vma_pagesize);
+		} else {
 			ret = -EFAULT;
-		if (ret)
 			goto out_unlock;
+		}
 	}
 
 	if (writable)
@@ -1491,15 +1478,18 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	kvm_pfn_t pfn = pte_pfn(range->pte);
-	int ret;
 
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
 	WARN_ON(range->end - range->start != 1);
 
-	ret = sanitise_mte_tags(kvm, pfn, PAGE_SIZE);
-	if (ret)
+	/*
+	 * If the page isn't tagged, defer to user_mem_abort() for sanitising
+	 * the MTE tags. The S2 pte should have been unmapped by
+	 * mmu_notifier_invalidate_range_end().
+	 */
+	if (kvm_has_mte(kvm) && !page_mte_tagged(pfn_to_page(pfn)))
 		return false;
 
 	/*
-- 
2.37.3.968.ga6b4b080e4-goog