From: "Christian König" <christian.koenig@amd.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
"Dave Airlie" <airlied@gmail.com>,
"Daniel Colascione" <dancol@dancol.org>
Cc: Tvrtko Ursulin <tursulin@ursulin.net>,
Matthew Brost <matthew.brost@intel.com>,
intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
Carlos Santa <carlos.santa@intel.com>,
Huang Rui <ray.huang@amd.com>,
Matthew Auld <matthew.auld@intel.com>,
Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
Maxime Ripard <mripard@kernel.org>,
Thomas Zimmermann <tzimmermann@suse.de>,
Simona Vetter <simona@ffwll.ch>
Subject: Re: [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order
Date: Thu, 30 Apr 2026 10:14:06 +0200 [thread overview]
Message-ID: <83bcff90-0e12-457c-9573-ed677a8cbb46@amd.com> (raw)
In-Reply-To: <1e049fe5f571e26417d4a9b4234e163e4c0d53b0.camel@linux.intel.com>
On 4/30/26 09:59, Thomas Hellström wrote:
> On Thu, 2026-04-30 at 10:11 +1000, Dave Airlie wrote:
>>>
>>> Probably stupid question: for systems like my Lunar Lake Xe2, which
>>> has
>>> unified memory and (IIUC) no special cache-type or write-mode
>>> constraints for GPU mappings, would it be possible to use regular
>>> system-provided pages (e.g. from shmem) instead of going through
>>> the TTM
>>> pool and allow mTHP to provide the aligned and contiguous backing
>>> storage that the GPU wants? Something like GEM has, but maybe
>>> inside the
>>> TTM API?
>>
>> TTM pool doesn't get used for system memory allocations in that case,
>> if you are asking for cached memory.
>
> Both Lunar Lake and Panther Lake use write-combined memory for buffer
> objects in performance-critical paths. So the pools are indeed getting
> used.
>
> And while it is possible to change caching on shmem pages if they are
> pinned/unevictable, trying to pool them quickly becomes messy.
Yeah, the issue is just that this functionality is strongly x86-specific. From what I know, basically every architecture came up with its own distinct way of handling this.
We could move all of this behind GFP flags and into a proper architecture abstraction in the core memory management.
But while the functionality was basically mandatory 30 years ago, by today's standards it has only a handful of use cases left, so I'm not sure that's really worth the effort.
On the other hand, it would indeed make things *much* cleaner, and interestingly at least x86 already tracks the UC/WC state in the struct page.
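[For illustration only, not code from this series: a minimal sketch of the x86 path being discussed. set_pages_array_wc()/set_pages_array_wb() and __free_pages() are the real helpers from asm/set_memory.h and linux/gfp.h; pool_alloc_wc() is a hypothetical wrapper name.]

```c
#include <linux/gfp.h>
#include <asm/set_memory.h>

/* Allocate npages order-0 system pages and remap them write-combined.
 * On x86 the WC state ends up tracked per page (PAT memtype bits kept
 * in page->flags), which is what makes pooling such pages practical. */
static int pool_alloc_wc(struct page **pages, unsigned int npages, gfp_t gfp)
{
	unsigned int i;
	int ret;

	for (i = 0; i < npages; i++) {
		pages[i] = alloc_pages(gfp, 0);
		if (!pages[i])
			goto err_free;
	}

	/* Rewrites the linear-map PTEs for all pages to WC in one go;
	 * this is the expensive, x86-specific step a GFP-level
	 * abstraction would have to hide. */
	ret = set_pages_array_wc(pages, npages);
	if (ret)
		goto err_free;

	return 0;

err_free:
	while (i--)
		__free_pages(pages[i], 0);
	return -ENOMEM;
}
```

[A pool that frees such pages back to the page allocator would first have to call set_pages_array_wb() to restore the cached state, which is exactly the cost the pooling avoids.]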
Christian.
>
> /Thomas
>
>
>>
>> Dave.
Thread overview: 24+ messages
2026-04-21 1:26 [PATCH 0/3] drm/ttm, drm/xe: Avoid reclaim/eviction loops under fragmentation Matthew Brost
2026-04-21 1:26 ` [PATCH 1/3] drm/ttm: Issue direct reclaim at beneficial_order Matthew Brost
2026-04-21 6:11 ` Christian König
2026-04-22 4:12 ` Matthew Brost
2026-04-22 6:41 ` Christian König
2026-04-22 7:32 ` Tvrtko Ursulin
2026-04-22 7:41 ` Christian König
2026-04-22 20:41 ` Matthew Brost
2026-04-23 8:44 ` Christian König
2026-04-28 13:45 ` Tvrtko Ursulin
2026-04-29 22:52 ` Daniel Colascione
2026-04-30 0:11 ` Dave Airlie
2026-04-30 7:59 ` Thomas Hellström
2026-04-30 8:14 ` Christian König [this message]
2026-04-30 7:34 ` Christian König
2026-04-30 3:00 ` Matthew Brost
2026-05-01 20:04 ` Thadeu Lima de Souza Cascardo
2026-04-21 1:26 ` [PATCH 2/3] drm/xe: Set TTM device beneficial_order to 9 (2M) Matthew Brost
2026-04-21 1:26 ` [PATCH 3/3] drm/xe: Avoid shrinker reclaim from kswapd under fragmentation Matthew Brost
2026-04-22 8:22 ` Thomas Hellström
2026-04-22 20:27 ` Matthew Brost
2026-04-21 5:56 ` ✓ CI.KUnit: success for drm/ttm, drm/xe: Avoid reclaim/eviction loops " Patchwork
2026-04-21 6:43 ` ✓ Xe.CI.BAT: " Patchwork
2026-04-21 8:29 ` ✗ Xe.CI.FULL: failure " Patchwork