From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: "David Hildenbrand (Arm)" <david@kernel.org>,
"Christian König" <christian.koenig@amd.com>,
intel-xe@lists.freedesktop.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
Lorenzo Stoakes <ljs@kernel.org>,
"Liam R. Howlett" <liam@infradead.org>,
Vlastimil Babka <vbabka@kernel.org>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>, Hugh Dickins <hughd@google.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>, Zi Yan <ziy@nvidia.com>,
Huang Rui <ray.huang@amd.com>,
Matthew Auld <matthew.auld@intel.com>,
Matthew Brost <matthew.brost@intel.com>,
Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
Maxime Ripard <mripard@kernel.org>,
Thomas Zimmermann <tzimmermann@suse.de>,
David Airlie <airlied@gmail.com>,
Simona Vetter <simona@ffwll.ch>,
dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] mm/shmem: add shmem_insert_folio()
Date: Wed, 13 May 2026 12:37:01 +0200 [thread overview]
Message-ID: <c0b5b94dfb4d04a1319bfb110f326afd2c29d48b.camel@linux.intel.com> (raw)
In-Reply-To: <65596f86-e6a3-48b0-aa3d-2e608964e29d@kernel.org>
On Wed, 2026-05-13 at 12:03 +0200, David Hildenbrand (Arm) wrote:
> On 5/13/26 10:51, Thomas Hellström wrote:
> > On Wed, 2026-05-13 at 10:37 +0200, David Hildenbrand (Arm) wrote:
> > > On 5/13/26 09:47, Christian König wrote:
> > > > Hi David & Thomas,
> > > >
> > > > ...
> > > >
> > > > Exactly that is one of the major reasons why we aren't using a
> > > > shmem as backing store for TTM buffers in the first place.
> > >
> > > What was the problem with that the last time this was considered?
> > >
> > > shmem nowadays supports THP (e.g., 2M) and even mTHP (e.g., 64K).
> > >
> > > For internal mounts, it must be enabled accordingly
> > > (/sys/kernel/mm/transparent_hugepage/.../shmem_enabled).
> > >
> > > Some distributions still default to "never". I guess if an admin
> > > enables it, you
> > > would just get THPs.
> >
> > FWIW, the i915 driver, which uses shmem "natively", uses a special
> > mount here that gives back THPs.
> >
> > >
> > > If "distro default" is the only problem, I guess we could think
> > > about how to improve that. For example, just let internal GPU DRM
> > > objects allocate any folio size available and supported etc.
> > >
> > > Would that make it possible to just use shmem natively? (e.g., how
> > > would this interact with shmem features like folio migration, would
> > > that be workable with DRM objects?).
> >
> > Currently the drivers that use shmem in this way use
> > mapping_set_unevictable() as long as the object is bound to the GPU.
> > Then shrinkers can unbind from the GPU and revert that setting.
>
> Right, but mapping_set_unevictable() only affects folio_evictable(),
> i.e. reclaim behavior. Not other properties (such as folio migration).
Interesting. Does that imply that a shmem folio can be replaced
underneath us without additional measures? It looks like most DRM call
sites assume that mapping_set_unevictable() pins the underlying shmem
folios.
>
> >
> > The problem (as also stated in the cover letter of this series) is
> > for drivers that need to change the caching of the pages to WC or UC.
>
> I assume you mean "To be able to easily maintain pools of pages
> mapped uncached
> or write-combined".
>
Exactly.
> Can you point me at the code that changes the caching of the pages?
The x86 implementation is here:
https://elixir.bootlin.com/linux/v7.1-rc3/source/arch/x86/mm/pat/set_memory.c#L2556

TTM calls it here:
https://elixir.bootlin.com/linux/v7.1-rc3/source/drivers/gpu/drm/ttm/ttm_pool.c#L249

And there are shmem helpers that do this as well, without pooling:
https://elixir.bootlin.com/linux/v7.1-rc3/source/drivers/gpu/drm/drm_gem_shmem_helper.c#L212
>
> > That's an extremely costly operation, so TTM needs to pool such
> > allocations. That's where using shmem natively becomes very ugly,
> > because you can't really use a 1:1 mapping between shmem objects and
> > DRM objects anymore.
>
> So you would require different caching attributes within a DRM
> object?
The way the TTM pools work is that there is a separate pool for each
allocation order and caching mode. That would essentially mean that
allocations for a single shmem object would be spread out across
different pools, and we'd lose the 1:1 mapping between DRM objects and
shmem objects.
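As a rough sketch of why the 1:1 mapping breaks (all names here are
invented for illustration; this is not the actual TTM pool code), with
pools keyed by (allocation order, caching mode) the pages backing one
object fan out over several pools:

```python
# Hypothetical sketch, not the actual TTM pool code: pools are keyed by
# (allocation order, caching mode), as described above.
from collections import defaultdict

pools = defaultdict(list)

def free_to_pool(order, caching, page):
    """Return a page to the pool matching its order and caching mode."""
    pools[(order, caching)].append(page)

# Pages backing a single DRM object, with mixed orders and caching modes:
object_pages = [(0, "wc", "pageA"), (4, "wc", "pageB"), (0, "uc", "pageC")]
for order, caching, page in object_pages:
    free_to_pool(order, caching, page)

# The one object's pages now sit in three different pools, so no single
# shmem object can back it 1:1 anymore.
assert len(pools) == 3
```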
One alternative would be a single large sparse shmem object common to
all DRM objects, with a range allocator on top, but that also got pretty
ugly when I tried to implement it.
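A minimal sketch of what I mean by that alternative (invented names, not
the actual code; a first-fit allocator handing out page ranges inside
the one shared backing object):

```python
# Hypothetical sketch of a first-fit range allocator over a single large
# backing object; invented names, not the actual implementation.
class RangeAllocator:
    def __init__(self, total_pages):
        # Free holes as (start, length) pairs, initially the whole object.
        self.free = [(0, total_pages)]

    def alloc(self, npages):
        """Carve npages out of the first hole large enough (first fit)."""
        for i, (start, length) in enumerate(self.free):
            if length >= npages:
                if length > npages:
                    self.free[i] = (start + npages, length - npages)
                else:
                    del self.free[i]
                return start
        raise MemoryError("no contiguous range of %d pages" % npages)

    def release(self, start, npages):
        # A real allocator would coalesce adjacent holes; omitted here.
        self.free.append((start, npages))

# Two DRM objects become offset ranges inside the one backing object:
ra = RangeAllocator(1024)
obj_a = ra.alloc(256)   # pages 0..255
obj_b = ra.alloc(128)   # pages 256..383
assert (obj_a, obj_b) == (0, 256)
```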
Finally (and I think this might be what Christian was getting at as
well): without CONFIG_TRANSPARENT_HUGEPAGE, we'd only see order-0 shmem
folios, right?
Thanks,
Thomas
Thread overview: 23+ messages
2026-05-12 11:03 [PATCH 0/2] Insert instead of copy pages into shmem when shrinking Thomas Hellström
2026-05-12 11:03 ` [PATCH 1/2] mm/shmem: add shmem_insert_folio() Thomas Hellström
2026-05-12 11:07 ` David Hildenbrand (Arm)
2026-05-12 11:31 ` Thomas Hellström
2026-05-12 20:03 ` David Hildenbrand (Arm)
2026-05-13 7:47 ` Christian König
2026-05-13 8:31 ` Thomas Hellström
2026-05-13 9:30 ` David Hildenbrand (Arm)
2026-05-13 8:37 ` David Hildenbrand (Arm)
2026-05-13 8:51 ` Thomas Hellström
2026-05-13 10:03 ` David Hildenbrand (Arm)
2026-05-13 10:37 ` Thomas Hellström [this message]
2026-05-13 11:36 ` David Hildenbrand (Arm)
2026-05-13 14:53 ` Thomas Hellström
2026-05-13 19:35 ` David Hildenbrand (Arm)
2026-05-14 10:40 ` Thomas Hellström
2026-05-13 11:54 ` Christian König
2026-05-13 19:43 ` David Hildenbrand (Arm)
2026-05-12 11:03 ` [PATCH 2/2] drm/ttm: Use ttm_backup_insert_folio() for zero-copy swapout Thomas Hellström
2026-05-12 16:46 ` ✗ CI.checkpatch: warning for Insert instead of copy pages into shmem when shrinking Patchwork
2026-05-12 16:47 ` ✓ CI.KUnit: success " Patchwork
2026-05-12 18:11 ` ✓ Xe.CI.BAT: " Patchwork
2026-05-13 7:30 ` ✗ Xe.CI.FULL: failure " Patchwork