Date: Wed, 25 Feb 2026 20:11:29 +0000
From: Pranjal Shrivastava
To: Leon Romanovsky
Cc: Ashish Mhetre, robin.murphy@arm.com, joro@8bytes.org, will@kernel.org,
	iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH RFC] iommu/dma: Validate page before accessing P2PDMA state
References: <20260224104257.1641429-1-amhetre@nvidia.com>
	<20260224123221.GM10607@unreal>
	<9d01b4e3-be5b-4c9c-8088-1d10f67f1fd8@nvidia.com>
	<20260225075609.GB9541@unreal>
In-Reply-To: <20260225075609.GB9541@unreal>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

On Wed, Feb 25, 2026 at 09:56:09AM +0200, Leon Romanovsky wrote:
> On Wed, Feb 25, 2026 at 10:19:41AM +0530, Ashish Mhetre wrote:
> >
> > On 2/25/2026 2:27 AM, Pranjal Shrivastava wrote:
> > > On Tue, Feb 24, 2026 at 02:32:21PM +0200, Leon Romanovsky wrote:
> > > > On Tue, Feb 24, 2026 at 10:42:57AM +0000, Ashish Mhetre wrote:
> > > > > When mapping scatter-gather entries that reference reserved
> > > > > memory regions without struct page backing (e.g., bootloader created
> > > > > carveouts), is_pci_p2pdma_page() dereferences the page pointer
> > > > > returned by sg_page() without first verifying its validity.
> > > >
> > > > I believe this behavior started after commit 88df6ab2f34b
> > > > ("mm: add folio_is_pci_p2pdma()"). Prior to that change, the
> > > > is_zone_device_page(page) check would return false when given a
> > > > non-existent page pointer.
> >
> > Thanks Leon for the review. This crash started after commit 30280eee2db1
> > ("iommu/dma: support PCI P2PDMA pages in dma-iommu map_sg").
> >
> > > Doesn't folio_is_pci_p2pdma() also check for zone device?
> > > I see[1] that it does:
> > >
> > > static inline bool folio_is_pci_p2pdma(const struct folio *folio)
> > > {
> > > 	return IS_ENABLED(CONFIG_PCI_P2PDMA) &&
> > > 	       folio_is_zone_device(folio) &&
> > > 	       folio->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
> > > }
> > >
> > > I believe the problem arises due to the page_folio() call in
> > > folio_is_pci_p2pdma(page_folio(page)); within is_pci_p2pdma_page().
> > > page_folio() assumes it has a valid struct page to work with. For these
> > > carveouts, that isn't true.
> > >
> > > Potentially something like the following would stop the crash:
> > >
> > > diff --git a/include/linux/memremap.h b/include/linux/memremap.h
> > > index e3c2ccf872a8..e47876021afa 100644
> > > --- a/include/linux/memremap.h
> > > +++ b/include/linux/memremap.h
> > > @@ -197,7 +197,8 @@ static inline void folio_set_zone_device_data(struct folio *folio, void *data)
> > >
> > >  static inline bool is_pci_p2pdma_page(const struct page *page)
> > >  {
> > > -	return IS_ENABLED(CONFIG_PCI_P2PDMA) &&
> > > +	return IS_ENABLED(CONFIG_PCI_P2PDMA) && page &&
> > > +	       pfn_valid(page_to_pfn(page)) &&
> > >  	       folio_is_pci_p2pdma(page_folio(page));
> > >  }
> >
> > Yes, this will also fix the crash.
> >
> > > But my broader question is: why are we calling a page-based API like
> > > is_pci_p2pdma_page() on non-struct-page memory in the first place?
> > > Could we instead add a helper to verify if the sg_page() return value
> > > is actually backed by a struct page? If it isn't, we should arguably
> > > skip the P2PDMA logic entirely and fall back to a dma_map_phys style
> > > path. Isn't handling these "pageless" physical ranges the primary reason
> > > dma_map_phys exists?
> >
> > Thanks for the feedback, Pranjal.
> >
> > To clarify: are you suggesting we handle non-page-backed mappings inside
> > iommu_dma_map_sg (within dma-iommu), or that callers should detect
> > non-page-backed memory and use dma_map_phys instead of dma_map_sg?
>
> The latter one.

Yup, I meant the latter.

> > Former approach sounds better so that existing iommu_dma_map_sg callers
> > don't need changes, but I'd like to confirm your preference.
>
> The bug is in callers which used wrong API, they need to be adapted.

Yes, the thing is, if the caller already knows that the region to be
mapped is NOT struct page-backed, then why does it use dma_map_sg
variants?

Thanks
Praan