From mboxrd@z Thu Jan 1 00:00:00 1970
From: Rob Clark <robdclark@gmail.com>
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Robin Murphy, Will Deacon, Joerg Roedel,
 Jason Gunthorpe, Kevin Tian, Nicolin Chen, Joao Martins,
 linux-arm-kernel@lists.infradead.org (moderated list:ARM SMMU DRIVERS),
 iommu@lists.linux.dev (open list:IOMMU SUBSYSTEM),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 05/40] iommu/io-pgtable-arm: Add quirk to quiet WARN_ON()
Date: Wed, 14 May 2025 09:59:04 -0700
Message-ID: <20250514170118.40555-6-robdclark@gmail.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250514170118.40555-1-robdclark@gmail.com>
References: <20250514170118.40555-1-robdclark@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Rob Clark

In situations where the mapping/unmapping sequence can be controlled by
userspace, attempting to map over a region that has not yet been
unmapped is an error, but not something that should spam dmesg.
Now that there is a quirk, we can also drop the selftest_running flag
and use the quirk instead for the selftests.

Signed-off-by: Rob Clark
Acked-by: Robin Murphy
Signed-off-by: Rob Clark
---
 drivers/iommu/io-pgtable-arm.c | 27 ++++++++++++++-------------
 include/linux/io-pgtable.h     |  8 ++++++++
 2 files changed, 22 insertions(+), 13 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index f27965caf6a1..a535d88f8943 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -253,8 +253,6 @@ static inline bool arm_lpae_concat_mandatory(struct io_pgtable_cfg *cfg,
 	       (data->start_level == 1) && (oas == 40);
 }
 
-static bool selftest_running = false;
-
 static dma_addr_t __arm_lpae_dma_addr(void *pages)
 {
 	return (dma_addr_t)virt_to_phys(pages);
@@ -373,7 +371,7 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
 	for (i = 0; i < num_entries; i++)
 		if (iopte_leaf(ptep[i], lvl, data->iop.fmt)) {
 			/* We require an unmap first */
-			WARN_ON(!selftest_running);
+			WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
 			return -EEXIST;
 		} else if (iopte_type(ptep[i]) == ARM_LPAE_PTE_TYPE_TABLE) {
 			/*
@@ -475,7 +473,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
 		cptep = iopte_deref(pte, data);
 	} else if (pte) {
 		/* We require an unmap first */
-		WARN_ON(!selftest_running);
+		WARN_ON(!(cfg->quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
 		return -EEXIST;
 	}
 
@@ -649,8 +647,10 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 	unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
 	ptep += unmap_idx_start;
 	pte = READ_ONCE(*ptep);
-	if (WARN_ON(!pte))
-		return 0;
+	if (!pte) {
+		WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
+		return -ENOENT;
+	}
 
 	/* If the size matches this level, we're in the right place */
 	if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) {
@@ -660,8 +660,10 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 		/* Find and handle non-leaf entries */
 		for (i = 0; i < num_entries; i++) {
 			pte = READ_ONCE(ptep[i]);
-			if (WARN_ON(!pte))
+			if (!pte) {
+				WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
 				break;
+			}
 
 			if (!iopte_leaf(pte, lvl, iop->fmt)) {
 				__arm_lpae_clear_pte(&ptep[i], &iop->cfg, 1);
@@ -976,7 +978,8 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS |
 			    IO_PGTABLE_QUIRK_ARM_TTBR1 |
 			    IO_PGTABLE_QUIRK_ARM_OUTER_WBWA |
-			    IO_PGTABLE_QUIRK_ARM_HD))
+			    IO_PGTABLE_QUIRK_ARM_HD |
+			    IO_PGTABLE_QUIRK_NO_WARN_ON))
 		return NULL;
 
 	data = arm_lpae_alloc_pgtable(cfg);
@@ -1079,7 +1082,8 @@ arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 	struct arm_lpae_io_pgtable *data;
 	typeof(&cfg->arm_lpae_s2_cfg.vtcr) vtcr = &cfg->arm_lpae_s2_cfg.vtcr;
 
-	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_S2FWB))
+	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_S2FWB |
+			    IO_PGTABLE_QUIRK_NO_WARN_ON))
 		return NULL;
 
 	data = arm_lpae_alloc_pgtable(cfg);
@@ -1320,7 +1324,6 @@ static void __init arm_lpae_dump_ops(struct io_pgtable_ops *ops)
 #define __FAIL(ops, i)	({						\
 		WARN(1, "selftest: test failed for fmt idx %d\n", (i));	\
 		arm_lpae_dump_ops(ops);					\
-		selftest_running = false;				\
 		-EFAULT;						\
 })
 
@@ -1336,8 +1339,6 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 	size_t size, mapped;
 	struct io_pgtable_ops *ops;
 
-	selftest_running = true;
-
 	for (i = 0; i < ARRAY_SIZE(fmts); ++i) {
 		cfg_cookie = cfg;
 		ops = alloc_io_pgtable_ops(fmts[i], cfg, cfg);
@@ -1426,7 +1427,6 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 		free_io_pgtable_ops(ops);
 	}
 
-	selftest_running = false;
 	return 0;
 }
 
@@ -1448,6 +1448,7 @@ static int __init arm_lpae_do_selftests(void)
 		.tlb = &dummy_tlb_ops,
 		.coherent_walk = true,
 		.iommu_dev = &dev,
+		.quirks = IO_PGTABLE_QUIRK_NO_WARN_ON,
 	};
 
 	/* __arm_lpae_alloc_pages() merely needs dev_to_node() to work */
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index bba2a51c87d2..639b8f4fb87d 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -88,6 +88,13 @@ struct io_pgtable_cfg {
 	 *
 	 * IO_PGTABLE_QUIRK_ARM_HD: Enables dirty tracking in stage 1 pagetable.
 	 * IO_PGTABLE_QUIRK_ARM_S2FWB: Use the FWB format for the MemAttrs bits
+	 *
+	 * IO_PGTABLE_QUIRK_NO_WARN_ON: Do not WARN_ON() on conflicting
+	 *	mappings, but silently return -EEXIST.  Normally an attempt
+	 *	to map over an existing mapping would indicate some sort of
+	 *	kernel bug, which would justify the WARN_ON().  But for GPU
+	 *	drivers this could be under the control of userspace, which
+	 *	deserves an error return but should not spam dmesg.
 	 */
 	#define IO_PGTABLE_QUIRK_ARM_NS			BIT(0)
 	#define IO_PGTABLE_QUIRK_NO_PERMS		BIT(1)
@@ -97,6 +104,7 @@ struct io_pgtable_cfg {
 	#define IO_PGTABLE_QUIRK_ARM_OUTER_WBWA	BIT(6)
 	#define IO_PGTABLE_QUIRK_ARM_HD		BIT(7)
 	#define IO_PGTABLE_QUIRK_ARM_S2FWB	BIT(8)
+	#define IO_PGTABLE_QUIRK_NO_WARN_ON	BIT(9)
 	unsigned long quirks;
 	unsigned long pgsize_bitmap;
 	unsigned int ias;
-- 
2.49.0