From: "David Hildenbrand (Arm)" <david@kernel.org>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
"Christian König" <christian.koenig@amd.com>,
intel-xe@lists.freedesktop.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
Lorenzo Stoakes <ljs@kernel.org>,
"Liam R. Howlett" <liam@infradead.org>,
Vlastimil Babka <vbabka@kernel.org>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>, Hugh Dickins <hughd@google.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>, Zi Yan <ziy@nvidia.com>,
Huang Rui <ray.huang@amd.com>,
Matthew Auld <matthew.auld@intel.com>,
Matthew Brost <matthew.brost@intel.com>,
Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
Maxime Ripard <mripard@kernel.org>,
Thomas Zimmermann <tzimmermann@suse.de>,
David Airlie <airlied@gmail.com>, Simona Vetter <simona@ffwll.ch>,
dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] mm/shmem: add shmem_insert_folio()
Date: Wed, 13 May 2026 11:30:25 +0200 [thread overview]
Message-ID: <683399e9-eb3c-4aeb-ace3-670aa43247ed@kernel.org> (raw)
In-Reply-To: <68705cec43ab43088327b21c9d9318036fb2a255.camel@linux.intel.com>
Hi,
>>
>> Yeah but that is the requirement the HW has.
>>
>> I mean we can keep torturing the buddy allocator to give us 2M pages,
>> but essentially we want to get away from those specialized solutions
>> and have more of the functionality necessary to drive the HW in the
>> common Linux memory management code, because that prevents vendors
>> from re-implementing that stuff in their specific drivers over and
>> over again.
>
> For the code at hand, if we insert an order 10 folio shmem will split
> it at writeout time but spit out a warning (if enabled) at the same
> time. For this particular use-case, I think it might make sense for the
> drivers that use direct insertion to cap the page-allocator orders to
> THP size (2M).
I think this just points at the bigger problem: shmem should be allocating
folios, not someone else on shmem's behalf.
>
>>
>> Regards,
>> Christian.
>>
>>> c) You pass folio + order, which is just the red flag that you are
>>> doing something extremely dodgy.
>>>
>>> You just cast something that is not a folio, and was not allocated
>>> to be a folio, to a folio through page_folio(page). That will stop
>>> working completely in the future once we decouple struct page from
>>> struct folio.
>>>
>>> If it's not a folio with a proper set order, you should be passing
>>> page + order.
>>>
>>> d) We are once more open-coding creation of a folio, by
>>> hand-crafting it ourselves.
>>>
>>> We have folio_alloc() and friends for a reason. Where we, for
>>> example, do a page_rmappable_folio().
>>>
>>> I am pretty sure that you are missing a call to
>>> page_rmappable_folio(), resulting in the large folios not getting
>>> folio_set_large_rmappable() set.
>>>
>>> e) undo_compound_page(). No words.
>>>
>>>
>>> *maybe* it would be a little less bad if you would just allocate a
>>> compound page in your driver and use page_rmappable_folio() in
>>> there.
>
> OK, yes it sounds like a prereq for this is that the driver actually
> allocates compound pages. It might be that the TTM comment about *not*
> doing that is stale, but need to check.
>
> Would it be acceptable to export a function from core mm to split an
> isolated folio?
The point is: an allocated page, including an allocated compound page, is
logically not a folio. We have work going on to decouple both concepts completely.
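
For illustration, a minimal sketch of what an allocation on the driver side
could look like; this is untested and assumes page_rmappable_folio() (today
internal to mm, in mm/internal.h) were exported or given an exported
equivalent, which it currently is not:

/*
 * Hypothetical sketch only: allocate a proper compound page and let
 * the core mm turn it into a folio, instead of hand-crafting folio
 * state in the driver.
 */
static struct folio *driver_alloc_large_folio(unsigned int order)
{
	/* __GFP_COMP yields a real compound page of the requested order. */
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_COMP, order);

	if (!page)
		return NULL;

	/*
	 * Prepare the compound page as a folio, including
	 * folio_set_large_rmappable() for large orders.
	 */
	return page_rmappable_folio(page);
}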
We do have functions to split folios. But it should be given a proper folio, not
something that can currently be cast to a folio.
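
As a rough usage sketch of that existing interface (assuming the folio was
properly allocated as above, and assuming split_folio() stays usable from
such a context):

/*
 * Hypothetical sketch: split a properly allocated large folio into
 * order-0 folios. The folio must be locked and the caller must hold a
 * reference; on success the original folio becomes the first of the
 * resulting small folios.
 */
int err;

folio_lock(folio);
err = split_folio(folio);	/* wrapper around the huge-page split path */
folio_unlock(folio);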
>
>>>
>>> That wouldn't change a) or b), though.
>>>
>>>
>>>
>>> Good question.
>>> We'd have to keep swapoff and all of that working. For example, in
>>> try_to_unuse(), we special-case shmem_unuse() to handle
>>> non-anonymous pages.
>>>
>>> But then, the whole swapcache operates on folios ... so I am not
>>> sure if there is a lot to be won by re-implementing what shmem
>>> already does?
>>>
>
> Still that would alleviate a) and b), right? At least as long as we
> keep folio sizes within the swap cache limits?
Let's hear from Christian what would be required for DRM to use shmem natively.
Maybe there would be a possible solution to have a custom shmem-like internal
thing that can better deal with large folios.
--
Cheers,
David
Thread overview: 23+ messages
2026-05-12 11:03 [PATCH 0/2] Insert instead of copy pages into shmem when shrinking Thomas Hellström
2026-05-12 11:03 ` [PATCH 1/2] mm/shmem: add shmem_insert_folio() Thomas Hellström
2026-05-12 11:07 ` David Hildenbrand (Arm)
2026-05-12 11:31 ` Thomas Hellström
2026-05-12 20:03 ` David Hildenbrand (Arm)
2026-05-13 7:47 ` Christian König
2026-05-13 8:31 ` Thomas Hellström
2026-05-13 9:30 ` David Hildenbrand (Arm) [this message]
2026-05-13 8:37 ` David Hildenbrand (Arm)
2026-05-13 8:51 ` Thomas Hellström
2026-05-13 10:03 ` David Hildenbrand (Arm)
2026-05-13 10:37 ` Thomas Hellström
2026-05-13 11:36 ` David Hildenbrand (Arm)
2026-05-13 14:53 ` Thomas Hellström
2026-05-13 19:35 ` David Hildenbrand (Arm)
2026-05-14 10:40 ` Thomas Hellström
2026-05-13 11:54 ` Christian König
2026-05-13 19:43 ` David Hildenbrand (Arm)
2026-05-12 11:03 ` [PATCH 2/2] drm/ttm: Use ttm_backup_insert_folio() for zero-copy swapout Thomas Hellström
2026-05-12 16:46 ` ✗ CI.checkpatch: warning for Insert instead of copy pages into shmem when shrinking Patchwork
2026-05-12 16:47 ` ✓ CI.KUnit: success " Patchwork
2026-05-12 18:11 ` ✓ Xe.CI.BAT: " Patchwork
2026-05-13 7:30 ` ✗ Xe.CI.FULL: failure " Patchwork