From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: "Welty, Brian" <brian.welty@intel.com>
Cc: dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org,
"Daniel Vetter" <daniel@ffwll.ch>,
"Dave Airlie" <airlied@gmail.com>,
"Christian König" <christian.koenig@amd.com>
Subject: Re: [Intel-xe] ttm_bo and multiple backing store segments
Date: Mon, 17 Jul 2023 13:24:50 -0400 [thread overview]
Message-ID: <ZLV5YmlKu1+obT8L@intel.com> (raw)
In-Reply-To: <c886cd42-2a78-fe3e-405b-e531d54449fb@intel.com>
On Thu, Jun 29, 2023 at 02:10:58PM -0700, Welty, Brian wrote:
>
> Hi Christian / Thomas,
>
> Wanted to ask if you have explored or thought about adding support in TTM
> such that a ttm_bo could have more than one underlying backing store segment
> (that is, to have a tree of ttm_resources)?
> We are considering supporting such BOs in the Intel Xe driver.
They are indeed the best ones to give an opinion here.
I just have a few naive questions and comments below.
>
> Some of the benefits:
> * devices with page fault support can fault (and migrate) backing store
> at finer granularity than the entire BO
What advantage does this bring, and to which workloads?
Is it a performance win on huge BOs?
> * BOs can support having multiple backing store segments, which can be
> in different memory domains/regions
What locking challenges would this bring?
Is this targeting GPU + CPU, or only our multi-tile platforms?
And what advantage does this bring to real use cases?
(The SVM/HMM question below probably answers my questions, but...)
> * BO eviction could operate on smaller granularity than entire BO
I believe all the previous doubts apply to this item as well...
>
> Or is the thinking that workloads should use SVM/HMM instead of GEM_CREATE
> if they want the above benefits?
>
> Is this something you are open to seeing an RFC series that starts perhaps
> with just extending ttm_bo_validate() to see how this might shape up?
IMHO an RFC always helps... a piece of code illustrating the idea usually
draws more attention from devs than a question in plain text. But more text
explaining the reasoning behind it is also helpful, even alongside the RFC.
Thanks,
Rodrigo.
>
> -Brian
Thread overview:
2023-06-29 21:10 [Intel-xe] ttm_bo and multiple backing store segments Welty, Brian
2023-07-17 17:24 ` Rodrigo Vivi [this message]
2023-07-19 9:02 ` Christian König
2023-08-04 0:19 ` Welty, Brian