From: Donald Robson <Donald.Robson@imgtec.com>
To: "corbet@lwn.net" <corbet@lwn.net>,
"jason@jlekstrand.net" <jason@jlekstrand.net>,
"willy@infradead.org" <willy@infradead.org>,
"christian.koenig@amd.com" <christian.koenig@amd.com>,
"tzimmermann@suse.de" <tzimmermann@suse.de>,
"bagasdotme@gmail.com" <bagasdotme@gmail.com>,
"mripard@kernel.org" <mripard@kernel.org>,
"matthew.brost@intel.com" <matthew.brost@intel.com>,
"bskeggs@redhat.com" <bskeggs@redhat.com>,
"dakr@redhat.com" <dakr@redhat.com>,
"ogabbay@kernel.org" <ogabbay@kernel.org>,
"boris.brezillon@collabora.com" <boris.brezillon@collabora.com>,
"Liam.Howlett@oracle.com" <Liam.Howlett@oracle.com>,
"daniel@ffwll.ch" <daniel@ffwll.ch>,
"alexdeucher@gmail.com" <alexdeucher@gmail.com>,
"airlied@gmail.com" <airlied@gmail.com>
Cc: "dri-devel@lists.freedesktop.org"
<dri-devel@lists.freedesktop.org>,
"nouveau@lists.freedesktop.org" <nouveau@lists.freedesktop.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"airlied@redhat.com" <airlied@redhat.com>
Subject: Re: [PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA mappings
Date: Wed, 21 Jun 2023 18:58:02 +0000 [thread overview]
Message-ID: <63eee0a1883669bc992ef0b75ff204f890d70cc7.camel@imgtec.com> (raw)
In-Reply-To: <20230620004217.4700-4-dakr@redhat.com>
Hi Danilo,
One comment below, but otherwise it looks great. Thanks for adding the example!
Thanks,
Donald
On Tue, 2023-06-20 at 02:42 +0200, Danilo Krummrich wrote:
>
> +/**
> + * DOC: Overview
> + *
> + * The DRM GPU VA Manager, represented by struct drm_gpuva_manager, keeps track
> + * of a GPU's virtual address (VA) space and manages the corresponding virtual
> + * mappings represented by &drm_gpuva objects. It also keeps track of the
> + * mapping's backing &drm_gem_object buffers.
> + *
> + * &drm_gem_object buffers maintain a list of &drm_gpuva objects representing
> + * all existing GPU VA mappings that use this &drm_gem_object as their backing
> + * buffer.
> + *
> + * GPU VAs can be flagged as sparse, such that drivers may use GPU VAs to also
> + * keep track of sparse PTEs in order to support Vulkan 'Sparse Resources'.
> + *
> + * The GPU VA manager internally uses a &maple_tree to manage the
> + * &drm_gpuva mappings within a GPU's virtual address space.
> + *
> + * The &drm_gpuva_manager contains a special &drm_gpuva representing the
> + * portion of VA space reserved by the kernel. This node is initialized together
> + * with the GPU VA manager instance and removed when the GPU VA manager is
> + * destroyed.
> + *
> + * In a typical application, drivers would embed struct drm_gpuva_manager and
> + * struct drm_gpuva within their own driver-specific structures; hence there
> + * are no memory allocations for the manager itself, nor for &drm_gpuva
> + * entries.
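For readers less familiar with the embedding pattern, here is a minimal sketch of
what I'd expect this to look like in a driver (the structure and field names are
hypothetical, not taken from the patch):

	struct my_gpu_vm {
		struct drm_gpuva_manager base;	/* embedded manager, no separate allocation */
		/* ... driver-specific VM state ... */
	};

	struct my_gpu_va {
		struct drm_gpuva base;		/* embedded mapping entry */
		/* ... driver-specific per-mapping state ... */
	};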
> + *
> + * However, the &drm_gpuva_manager needs to allocate nodes for its internal
> + * tree structures when &drm_gpuva entries are inserted. In order to support
> + * inserting &drm_gpuva entries from dma-fence signalling critical sections, the
> + * &drm_gpuva_manager provides struct drm_gpuva_prealloc. Drivers may create
> + * pre-allocated nodes with drm_gpuva_prealloc_create() and subsequently insert
> + * a new &drm_gpuva entry with drm_gpuva_insert_prealloc().
I think it might be worth moving or repeating this paragraph to 'Split and Merge'
where I've added the other comment below. I think these functions are only used
to set up for drm_gpuva_sm_map(). Please ignore me if I'm wrong.
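For context, the usage I have in mind is roughly the following, where mgr is the
driver's embedded manager and va the &drm_gpuva to insert. The argument lists are
my assumption for illustration only - the patch defines the authoritative
signatures:

	/* Outside the dma-fence signalling critical section: */
	struct drm_gpuva_prealloc *pa;

	pa = drm_gpuva_prealloc_create(mgr);	/* assumed signature */
	if (!pa)
		return -ENOMEM;

	/* Inside the critical section - no allocations happen here: */
	ret = drm_gpuva_insert_prealloc(mgr, pa, va);	/* assumed signature */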
> + */
> +
> +/**
> + * DOC: Split and Merge
> + *
> + * Besides its capability to manage and represent a GPU VA space, the
> + * &drm_gpuva_manager also provides functions to let the &drm_gpuva_manager
> + * calculate a sequence of operations to satisfy a given map or unmap request.
> + *
> + * Therefore, the DRM GPU VA manager provides an algorithm implementing
> + * splitting and merging of existing GPU VA mappings with the ones that are
> + * requested to be mapped or unmapped. This feature is required by the Vulkan
> + * API to implement 'Sparse Memory Bindings' - driver UAPIs often refer to this
> + * as VM BIND.
> + *
> + * Drivers can call drm_gpuva_sm_map() to receive a sequence of callbacks
> + * containing map, unmap and remap operations for a given newly requested
> + * mapping. The sequence of callbacks represents the set of operations to
> + * execute in order to integrate the new mapping cleanly into the current state
> + * of the GPU VA space.
Here
> + *
> + * Depending on how the new GPU VA mapping intersects with the existing mappings
> + * of the GPU VA space, the &drm_gpuva_fn_ops callbacks contain an arbitrary
> + * number of unmap operations, a maximum of two remap operations and a single
> + * map operation. The caller might receive no callback at all if no operation is
> + * required, e.g. if the requested mapping already exists in the exact same way.
>
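To make the callback flow concrete, I imagine a driver wiring this up roughly as
follows. The my_step_* callbacks are hypothetical driver functions, and the field
and parameter names are my assumption from reading the series rather than a
verbatim copy of the patch:

	static const struct drm_gpuva_fn_ops my_gpuva_ops = {
		.sm_step_map = my_step_map,	/* create the new mapping */
		.sm_step_remap = my_step_remap,	/* split a partially overlapped mapping */
		.sm_step_unmap = my_step_unmap,	/* tear down a fully overlapped mapping */
	};

	/* Ask the manager which operations a map request resolves into;
	 * each step is reported through the callbacks above. */
	ret = drm_gpuva_sm_map(mgr, driver_priv, req_addr, req_range, obj, offset);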
Thread overview: 42+ messages
2023-06-20 0:42 [PATCH drm-next v5 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI Danilo Krummrich
2023-06-19 23:06 ` Danilo Krummrich
2023-06-20 4:05 ` Dave Airlie
2023-06-20 7:06 ` Oded Gabbay
2023-06-20 7:13 ` Dave Airlie
2023-06-20 7:34 ` Oded Gabbay
2023-06-20 0:42 ` [PATCH drm-next v5 01/14] drm: execution context for GEM buffers v4 Danilo Krummrich
2023-06-20 0:42 ` [PATCH drm-next v5 02/14] maple_tree: split up MA_STATE() macro Danilo Krummrich
2023-06-20 0:42 ` [PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA mappings Danilo Krummrich
2023-06-20 3:00 ` kernel test robot
2023-06-20 3:32 ` kernel test robot
2023-06-20 4:54 ` Christoph Hellwig
2023-06-20 6:45 ` Christian König
2023-06-20 12:23 ` Danilo Krummrich
2023-06-22 13:54 ` Christian König
2023-06-22 14:22 ` Danilo Krummrich
2023-06-22 14:42 ` Christian König
2023-06-22 15:04 ` Danilo Krummrich
2023-06-22 15:07 ` Danilo Krummrich
2023-06-23 2:24 ` Matthew Brost
2023-06-23 7:16 ` Christian König
2023-06-23 13:55 ` Danilo Krummrich
2023-06-23 15:34 ` Christian König
2023-06-26 22:38 ` Dave Airlie
2023-06-21 18:58 ` Donald Robson [this message]
2023-06-20 0:42 ` [PATCH drm-next v5 04/14] drm: debugfs: provide infrastructure to dump a DRM GPU VA space Danilo Krummrich
2023-06-20 0:42 ` [PATCH drm-next v5 05/14] drm/nouveau: new VM_BIND uapi interfaces Danilo Krummrich
2023-06-20 0:42 ` [PATCH drm-next v5 06/14] drm/nouveau: get vmm via nouveau_cli_vmm() Danilo Krummrich
2023-06-20 0:42 ` [PATCH drm-next v5 07/14] drm/nouveau: bo: initialize GEM GPU VA interface Danilo Krummrich
2023-06-20 0:42 ` [PATCH drm-next v5 08/14] drm/nouveau: move usercopy helpers to nouveau_drv.h Danilo Krummrich
2023-06-20 0:42 ` [PATCH drm-next v5 09/14] drm/nouveau: fence: separate fence alloc and emit Danilo Krummrich
2023-06-21 2:26 ` kernel test robot
2023-06-20 0:42 ` [PATCH drm-next v5 10/14] drm/nouveau: fence: fail to emit when fence context is killed Danilo Krummrich
2023-06-20 0:42 ` [PATCH drm-next v5 11/14] drm/nouveau: chan: provide nouveau_channel_kill() Danilo Krummrich
2023-06-20 0:42 ` [PATCH drm-next v5 12/14] drm/nouveau: nvkm/vmm: implement raw ops to manage uvmm Danilo Krummrich
2023-06-20 0:42 ` [PATCH drm-next v5 14/14] drm/nouveau: debugfs: implement DRM GPU VA debugfs Danilo Krummrich
2023-06-20 9:25 ` [PATCH drm-next v5 00/14] [RFC] DRM GPUVA Manager & Nouveau VM_BIND UAPI Boris Brezillon
2023-06-20 12:46 ` Danilo Krummrich
2023-06-22 13:01 ` Boris Brezillon
2023-06-22 13:58 ` Danilo Krummrich
2023-06-22 15:19 ` Boris Brezillon
2023-06-22 15:27 ` Danilo Krummrich