From: "Huang, Ying" <ying.huang@linux.alibaba.com>
To: Jordan Niethe
Cc: linux-mm@kvack.org, balbirs@nvidia.com, matthew.brost@intel.com,
 akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org, david@redhat.com, ziy@nvidia.com,
 apopple@nvidia.com, lorenzo.stoakes@oracle.com, lyude@redhat.com,
 dakr@kernel.org, airlied@gmail.com, simona@ffwll.ch, rcampbell@nvidia.com,
 mpenttil@redhat.com, jgg@nvidia.com, willy@infradead.org,
 linuxppc-dev@lists.ozlabs.org, intel-xe@lists.freedesktop.org,
 jgg@ziepe.ca, Felix.Kuehling@amd.com, jhubbard@nvidia.com
Subject: Re: [PATCH v3 00/13] Remove device private pages from physical address space
In-Reply-To: <20260123062309.23090-1-jniethe@nvidia.com> (Jordan Niethe's message of "Fri, 23 Jan 2026 17:22:56 +1100")
References: <20260123062309.23090-1-jniethe@nvidia.com>
Date: Thu, 29 Jan 2026 21:49:40 +0800
Message-ID: <875x8kbkaz.fsf@DESKTOP-5N7EMDA>

Hi, Jordan,

Jordan Niethe writes:

> Introduction
> ------------
>
> The existing design of device private memory imposes limitations which
> render it non-functional for certain systems and configurations: this
> series removes those limitations. These issues are:
>
> 1) Limited available physical address space
> 2) Conflicts with the aarch64 mm implementation
>
> Limited available address space
> -------------------------------
>
> Device private memory is implemented by first reserving a region of the
> physical address space. This is a problem.
> The physical address space is not a resource that is directly under the
> kernel's control. The availability of suitable physical address space
> is constrained by the underlying hardware and firmware and cannot be
> guaranteed.
>
> Device private memory assumes that it will be able to reserve a device
> memory sized chunk of physical address space. However, nothing
> guarantees that this will succeed, and there are a number of factors
> that increase the likelihood of failure. We need to consider what else
> may exist in the physical address space. Certain VM configurations have
> been observed to place very large PCI windows immediately after RAM,
> large enough that there is no physical address space available at all
> for device private memory. This is more likely to occur on systems
> with a 43-bit physical address width, which have less physical address
> space.
>
> The fundamental issue is that the physical address space is not a
> resource the kernel can rely on being able to allocate from at will.
>
> aarch64 issues
> --------------
>
> The current device private memory implementation has further issues on
> aarch64. On aarch64, vmemmap is sized to cover RAM only. Adding device
> private pages to the linear map then means that for a device private
> page, pfn_to_page() will read beyond the end of the vmemmap region,
> leading to potential memory corruption. This means that device private
> memory does not work reliably on aarch64 [0].
>
> New implementation
> ------------------
>
> This series changes device private memory so that it does not require
> allocation of physical address space, and these problems are avoided.
> Instead of using the physical address space, we introduce a "device
> private address space" and allocate from there.
>
> A consequence of placing the device private pages outside of the
> physical address space is that they no longer have a PFN.
> However, it is still necessary to be able to look up the corresponding
> device private page from a device private PTE entry, which means that
> we still require some way to index into this device private address
> space. Instead of a PFN, device private pages use an offset into this
> device private address space to look up device private struct pages.
>
> The problem that then needs to be addressed is how to avoid confusing
> these device private offsets with PFNs. It is the limited usage of the
> device private pages themselves which makes this possible. A device
> private page is only used for userspace mappings; we do not need to be
> concerned with them being used within the mm more broadly. This means
> that the only way that the core kernel looks up these pages is via the
> page table, where their PTE already indicates whether they refer to a
> device private page via their swap type, e.g. SWP_DEVICE_WRITE. We can
> use this information to determine if the PTE contains a PFN which
> should be looked up in the page map, or a device private offset which
> should be looked up elsewhere.
>
> This applies when we are creating PTE entries for device private
> pages: because they have their own type they already must be handled
> separately, so it is a small step to convert them to a device private
> PFN now too.
>
> The first part of the series updates callers where device private
> offsets might now be encountered to track this extra state.
>
> The last patch contains the bulk of the work, where we change how we
> convert between device private pages and device private offsets, and
> then use a new interface for allocating device private pages without
> the need for reserving physical address space.
>
> By removing the device private pages from the physical address space,
> this series also opens up the possibility of moving away from tracking
> device private memory using struct pages in the future.
> This is desirable as, on systems with large amounts of memory, these
> device private struct pages use a significant amount of memory and
> take a significant amount of time to initialize.

Now device private pages are quite different from other pages, even
living in a separate address space. IMHO, it may be better to make that
as explicit as possible. For example, is it a good idea to put them in
their own zone, like ZONE_DEVICE_PRIVATE? It appears unnatural to put
pages from different address spaces into one zone. And this may make
them easier to distinguish from other pages.

[snip]

---
Best Regards,
Huang, Ying