From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
	Connor Abbott, Rob Clark, Robin Murphy, Will Deacon, Joerg Roedel,
	Jason Gunthorpe, Nicolin Chen, Kevin Tian, Joao Martins,
	linux-arm-kernel@lists.infradead.org (moderated list:ARM SMMU DRIVERS),
	iommu@lists.linux.dev (open list:IOMMU SUBSYSTEM),
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 05/40] iommu/io-pgtable-arm: Add quirk to quiet WARN_ON()
Date: Mon, 19 May 2025 10:51:28 -0700
Message-ID: <20250519175348.11924-6-robdclark@gmail.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Rob Clark

In situations where the mapping/unmapping sequence can be controlled by
userspace, attempting to map over a region that has not yet been
unmapped is an error, but not something that should spam dmesg.

Now that there is a quirk, we can also drop the selftest_running flag
and use the quirk instead for selftests.
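For illustration only (not part of the patch): a rough sketch of how a GPU
driver whose map/unmap sequence is driven by userspace might opt in to the
new quirk when allocating its pagetable ops. The tlb ops, device pointer and
cookie below are placeholders for whatever the driver already uses.

	struct io_pgtable_cfg cfg = {
		.quirks		= IO_PGTABLE_QUIRK_NO_WARN_ON,
		.pgsize_bitmap	= SZ_4K | SZ_2M | SZ_1G,
		.ias		= 48,
		.oas		= 48,
		.coherent_walk	= true,
		.tlb		= &my_tlb_ops,	/* placeholder flush callbacks */
		.iommu_dev	= dev,		/* placeholder struct device */
	};
	struct io_pgtable_ops *ops;

	/* ops allocated with this cfg return -EEXIST on conflicts, without WARN */
	ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1, &cfg, cookie);
	if (!ops)
		return -ENOMEM;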
Signed-off-by: Rob Clark
Acked-by: Robin Murphy
Signed-off-by: Rob Clark
---
 drivers/iommu/io-pgtable-arm.c | 27 ++++++++++++++-------------
 include/linux/io-pgtable.h    |  8 ++++++++
 2 files changed, 22 insertions(+), 13 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index f27965caf6a1..a535d88f8943 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -253,8 +253,6 @@ static inline bool arm_lpae_concat_mandatory(struct io_pgtable_cfg *cfg,
 	       (data->start_level == 1) && (oas == 40);
 }
 
-static bool selftest_running = false;
-
 static dma_addr_t __arm_lpae_dma_addr(void *pages)
 {
 	return (dma_addr_t)virt_to_phys(pages);
@@ -373,7 +371,7 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
 	for (i = 0; i < num_entries; i++)
 		if (iopte_leaf(ptep[i], lvl, data->iop.fmt)) {
 			/* We require an unmap first */
-			WARN_ON(!selftest_running);
+			WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
 			return -EEXIST;
 		} else if (iopte_type(ptep[i]) == ARM_LPAE_PTE_TYPE_TABLE) {
 			/*
@@ -475,7 +473,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
 		cptep = iopte_deref(pte, data);
 	} else if (pte) {
 		/* We require an unmap first */
-		WARN_ON(!selftest_running);
+		WARN_ON(!(cfg->quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
 		return -EEXIST;
 	}
 
@@ -649,8 +647,10 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 	unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
 	ptep += unmap_idx_start;
 	pte = READ_ONCE(*ptep);
-	if (WARN_ON(!pte))
-		return 0;
+	if (!pte) {
+		WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
+		return -ENOENT;
+	}
 
 	/* If the size matches this level, we're in the right place */
 	if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) {
@@ -660,8 +660,10 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 		/* Find and handle non-leaf entries */
 		for (i = 0; i < num_entries; i++) {
 			pte = READ_ONCE(ptep[i]);
-			if (WARN_ON(!pte))
+			if (!pte) {
+				WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
 				break;
+			}
 
 			if (!iopte_leaf(pte, lvl, iop->fmt)) {
 				__arm_lpae_clear_pte(&ptep[i], &iop->cfg, 1);
@@ -976,7 +978,8 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS |
 			    IO_PGTABLE_QUIRK_ARM_TTBR1 |
 			    IO_PGTABLE_QUIRK_ARM_OUTER_WBWA |
-			    IO_PGTABLE_QUIRK_ARM_HD))
+			    IO_PGTABLE_QUIRK_ARM_HD |
+			    IO_PGTABLE_QUIRK_NO_WARN_ON))
 		return NULL;
 
 	data = arm_lpae_alloc_pgtable(cfg);
@@ -1079,7 +1082,8 @@ arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 	struct arm_lpae_io_pgtable *data;
 	typeof(&cfg->arm_lpae_s2_cfg.vtcr) vtcr = &cfg->arm_lpae_s2_cfg.vtcr;
 
-	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_S2FWB))
+	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_S2FWB |
+			    IO_PGTABLE_QUIRK_NO_WARN_ON))
 		return NULL;
 
 	data = arm_lpae_alloc_pgtable(cfg);
@@ -1320,7 +1324,6 @@ static void __init arm_lpae_dump_ops(struct io_pgtable_ops *ops)
 #define __FAIL(ops, i)	({						\
 		WARN(1, "selftest: test failed for fmt idx %d\n", (i));	\
 		arm_lpae_dump_ops(ops);					\
-		selftest_running = false;				\
 		-EFAULT;						\
 })
 
@@ -1336,8 +1339,6 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 	size_t size, mapped;
 	struct io_pgtable_ops *ops;
 
-	selftest_running = true;
-
 	for (i = 0; i < ARRAY_SIZE(fmts); ++i) {
 		cfg_cookie = cfg;
 		ops = alloc_io_pgtable_ops(fmts[i], cfg, cfg);
@@ -1426,7 +1427,6 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 		free_io_pgtable_ops(ops);
 	}
 
-	selftest_running = false;
 	return 0;
 }
 
@@ -1448,6 +1448,7 @@ static int __init arm_lpae_do_selftests(void)
 		.tlb = &dummy_tlb_ops,
 		.coherent_walk = true,
 		.iommu_dev = &dev,
+		.quirks = IO_PGTABLE_QUIRK_NO_WARN_ON,
 	};
 
 	/* __arm_lpae_alloc_pages() merely needs dev_to_node() to work */
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index bba2a51c87d2..639b8f4fb87d 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -88,6 +88,13 @@ struct io_pgtable_cfg {
 	 *
 	 * IO_PGTABLE_QUIRK_ARM_HD: Enables dirty tracking in stage 1 pagetable.
 	 * IO_PGTABLE_QUIRK_ARM_S2FWB: Use the FWB format for the MemAttrs bits
+	 *
+	 * IO_PGTABLE_QUIRK_NO_WARN_ON: Do not WARN_ON() on conflicting
+	 *	mappings, but silently return -EEXIST.  Normally an attempt
+	 *	to map over an existing mapping would indicate some sort of
+	 *	kernel bug, which would justify the WARN_ON().  But for GPU
+	 *	drivers, this could be under the control of userspace, which
+	 *	deserves an error return but should not spam dmesg.
 	 */
 	#define IO_PGTABLE_QUIRK_ARM_NS		BIT(0)
 	#define IO_PGTABLE_QUIRK_NO_PERMS	BIT(1)
@@ -97,6 +104,7 @@ struct io_pgtable_cfg {
 	#define IO_PGTABLE_QUIRK_ARM_OUTER_WBWA	BIT(6)
 	#define IO_PGTABLE_QUIRK_ARM_HD		BIT(7)
 	#define IO_PGTABLE_QUIRK_ARM_S2FWB	BIT(8)
+	#define IO_PGTABLE_QUIRK_NO_WARN_ON	BIT(9)
 	unsigned long			quirks;
 	unsigned long			pgsize_bitmap;
 	unsigned int			ias;
-- 
2.49.0
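For illustration only (not part of the patch): with the quirk set, a
conflicting map shows up as a plain error return from the pagetable ops,
which a driver can hand back to userspace instead of tripping a WARN_ON().
A rough sketch assuming the standard ->map_pages() interface; ops, iova and
paddr are placeholders.

	size_t mapped = 0;
	int ret;

	ret = ops->map_pages(ops, iova, paddr, SZ_4K, 1,
			     IOMMU_READ | IOMMU_WRITE, GFP_KERNEL, &mapped);
	if (ret == -EEXIST) {
		/* Region was not unmapped first: report to userspace, no WARN */
		return ret;
	}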