Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Oak Zeng <oak.zeng@intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
	<himal.prasad.ghimiray@intel.com>, <krishnaiah.bommu@intel.com>,
	<Thomas.Hellstrom@linux.intel.com>, <brian.welty@intel.com>
Subject: Re: [v2 24/31] drm/xe/svm: Create and destroy xe svm
Date: Wed, 10 Apr 2024 22:25:19 +0000
Message-ID: <ZhcRz7p5/HYShFww@DUT025-TGLU.fm.intel.com>
In-Reply-To: <20240409201742.3042626-25-oak.zeng@intel.com>

On Tue, Apr 09, 2024 at 04:17:35PM -0400, Oak Zeng wrote:
> Introduce a data structure xe_svm to represent a shared virtual
> address space between a CPU program and a GPU program. Each process
> can have at most one xe_svm instance. One xe_svm can have multiple
> GPU VMs.
> 
> Introduce helper functions to create and destroy an xe_svm instance.
> Once an xe_svm instance is created, it is added to a global hash table
> keyed by mm_struct, so an xe_svm can later be retrieved from its mm.
> 
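If I understand the intent, the usage model would look roughly like the
sketch below: a GPU VM attaches itself to the per-process xe_svm when it
is created. The function name and the vm->svm_link field here are
hypothetical, purely for illustration:

	/* Hypothetical caller-side sketch, not part of this patch. */
	static int example_vm_attach_svm(struct xe_vm *vm)
	{
		struct xe_svm *svm;

		/* Returns the existing xe_svm for current->mm, or creates one. */
		svm = xe_create_svm();
		if (!svm)
			return -ENOMEM;

		/* vm_list is protected by svm->mutex per the struct kernel-doc. */
		mutex_lock(&svm->mutex);
		list_add_tail(&vm->svm_link, &svm->vm_list);
		mutex_unlock(&svm->mutex);

		return 0;
	}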

That said, I don't think this is needed at all. I will explain a bit
later in the series, but I am quite sure this can be dropped entirely.

Matt

> Signed-off-by: Oak Zeng <oak.zeng@intel.com>
> ---
>  drivers/gpu/drm/xe/Makefile |  1 +
>  drivers/gpu/drm/xe/xe_svm.c | 80 +++++++++++++++++++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_svm.h | 23 +++++++++++
>  3 files changed, 104 insertions(+)
>  create mode 100644 drivers/gpu/drm/xe/xe_svm.c
> 
> diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
> index cd5213ba182b..f89d77b6d654 100644
> --- a/drivers/gpu/drm/xe/Makefile
> +++ b/drivers/gpu/drm/xe/Makefile
> @@ -129,6 +129,7 @@ xe-y += xe_bb.o \
>  	xe_sa.o \
>  	xe_sched_job.o \
>  	xe_step.o \
> +	xe_svm.o \
>  	xe_svm_devmem.o \
>  	xe_sync.o \
>  	xe_tile.o \
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> new file mode 100644
> index 000000000000..416cfc81c053
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -0,0 +1,80 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2023 Intel Corporation
> + */
> +
> +#include <linux/mutex.h>
> +#include <linux/mm_types.h>
> +#include <linux/kernel.h>
> +#include <linux/hashtable.h>
> +#include "xe_svm.h"
> +
> +#define XE_MAX_SVM_PROCESS 5 /* 5 hash table bits, i.e. 32 buckets */
> +DEFINE_HASHTABLE(xe_svm_table, XE_MAX_SVM_PROCESS);
> +
> +/**
> + * xe_create_svm() - create an svm instance
> + *
> + * One xe_svm struct represents a shared address space
> + * between a cpu program and a gpu program, so one xe_svm
> + * is associated with one mm_struct.
> + *
> + * If an xe_svm for this process already exists, return
> + * it; otherwise create one.
> + *
> + * Return: the created xe_svm struct pointer
> + */
> +struct xe_svm *xe_create_svm(void)
> +{
> +	struct mm_struct *mm = current->mm;
> +	struct xe_svm *svm;
> +
> +	svm = xe_lookup_svm_by_mm(mm);
> +	if (svm)
> +		return svm;
> +
> +	svm = kzalloc(sizeof(struct xe_svm), GFP_KERNEL);
> +	if (!svm)
> +		return NULL;
> +
> +	svm->mm = mm;
> +	mutex_init(&svm->mutex);
> +	INIT_LIST_HEAD(&svm->vm_list);
> +	/* Add svm to the global xe_svm_table hash table, keyed
> +	 * by mm, so we can later retrieve the svm from its mm.
> +	 */
> +	hash_add_rcu(xe_svm_table, &svm->hnode, (uintptr_t)mm);
> +	return svm;
> +}
> +
> +/**
> + * xe_destroy_svm() - destroy an svm instance
> + *
> + * @svm: the xe_svm to destroy
> + */
> +void xe_destroy_svm(struct xe_svm *svm)
> +{
> +	BUG_ON(list_empty(&svm->vm_list));
> +	hash_del_rcu(&svm->hnode);
> +	mutex_destroy(&svm->mutex);
> +	kfree(svm);
> +}
> +
> +
> +/**
> + * xe_lookup_svm_by_mm() - retrieve xe_svm from mm struct
> + *
> + * @mm: the mm struct of the svm to retrieve
> + *
> + * Return: the xe_svm struct pointer, or NULL if not found
> + */
> +struct xe_svm *xe_lookup_svm_by_mm(struct mm_struct *mm)
> +{
> +	struct xe_svm *svm;
> +
> +	hash_for_each_possible_rcu(xe_svm_table, svm, hnode, (uintptr_t)mm)
> +		if (svm->mm == mm)
> +			return svm;
> +
> +	return NULL;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
> index 92a3ee90d5a7..066740fb93f5 100644
> --- a/drivers/gpu/drm/xe/xe_svm.h
> +++ b/drivers/gpu/drm/xe/xe_svm.h
> @@ -11,6 +11,29 @@
>  #include "xe_device.h"
>  #include "xe_assert.h"
>  
> +
> +/**
> + * struct xe_svm - data structure representing a shared
> + * virtual address space from the device side. xe_svm and
> + * mm_struct have a 1:1 relationship.
> + */
> +struct xe_svm {
> +	/** @mm: The mm_struct corresponding to this xe_svm */
> +	struct mm_struct *mm;
> +	/**
> +	 * @mutex: A lock protecting the vm_list below
> +	 */
> +	struct mutex mutex;
> +	/** @hnode: used to add this svm to the global xe_svm_table hash table */
> +	struct hlist_node hnode;
> +	/** @vm_list: a list of gpu vms in this svm space */
> +	struct list_head vm_list;
> +};
> +
> +struct xe_svm *xe_create_svm(void);
> +void xe_destroy_svm(struct xe_svm *svm);
> +struct xe_svm *xe_lookup_svm_by_mm(struct mm_struct *mm);
> +
>  /**
>   * xe_mem_region_pfn_to_dpa() - Calculate page's dpa from pfn
>   *
> -- 
> 2.26.3
> 
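FWIW, I read the lookup helper as being there so that a later CPU page
fault / migration path can get from an mm_struct back to its svm,
roughly like the hypothetical sketch below (the function is made up,
but vmf->vma->vm_mm are real core-mm fields):

	/* Hypothetical lookup-side sketch, not part of this patch. */
	static struct xe_svm *example_fault_get_svm(struct vm_fault *vmf)
	{
		/* The faulting vma's mm identifies the owning process's svm. */
		return xe_lookup_svm_by_mm(vmf->vma->vm_mm);
	}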


Thread overview: 72+ messages
2024-04-09 20:17 [v2 00/31] Basic system allocator support in xe driver Oak Zeng
2024-04-09 20:17 ` [v2 01/31] drm/xe: Refactor vm_bind Oak Zeng
2024-04-09 20:17 ` [v2 02/31] drm/xe/svm: Add SVM document Oak Zeng
2024-04-09 20:17 ` [v2 03/31] drm/xe: Invalidate userptr VMA on page pin fault Oak Zeng
2024-04-09 20:17 ` [v2 04/31] drm/xe: Drop unused arguments from vm_bind_ioctl_ops_parse Oak Zeng
2024-04-09 20:17 ` [v2 05/31] drm/xe: Fix op->tile_mask for fault mode Oak Zeng
2024-04-09 20:17 ` [v2 06/31] drm/xe/uapi: Add DRM_XE_VM_BIND_FLAG_SYSTEM_ALLOCATOR flag Oak Zeng
2024-04-09 20:17 ` [v2 07/31] drm/xe: Create userptr if page fault occurs on system_allocator VMA Oak Zeng
2024-04-09 20:17 ` [v2 08/31] drm/xe: Add faulted userptr VMA garbage collector Oak Zeng
2024-04-09 20:17 ` [v2 09/31] drm/xe: Introduce helper to populate userptr Oak Zeng
2024-04-09 20:17 ` [v2 10/31] drm/xe: Introduce a helper to free sg table Oak Zeng
2024-04-09 20:17 ` [v2 11/31] drm/xe: Use hmm_range_fault to populate user pages Oak Zeng
2024-04-09 20:17 ` [v2 12/31] drm/xe/svm: Remap and provide memmap backing for GPU vram Oak Zeng
2024-04-10 21:09   ` Matthew Brost
2024-04-16 19:01   ` Matthew Brost
2024-04-09 20:17 ` [v2 13/31] drm/xe/svm: Introduce DRM_XE_SVM kernel config Oak Zeng
2024-04-10 21:13   ` Matthew Brost
2024-06-04 18:57     ` Zeng, Oak
2024-04-09 20:17 ` [v2 14/31] drm/xe: Introduce helper to get tile from memory region Oak Zeng
2024-04-10 21:17   ` Matthew Brost
2024-04-09 20:17 ` [v2 15/31] drm/xe: Introduce a helper to get dpa from pfn Oak Zeng
2024-04-10 21:35   ` Matthew Brost
2024-04-09 20:17 ` [v2 16/31] drm/xe/svm: Get xe memory region from page Oak Zeng
2024-04-10 21:38   ` Matthew Brost
2024-04-09 20:17 ` [v2 17/31] drm/xe: Get xe_vma from xe_userptr Oak Zeng
2024-04-10 21:42   ` Matthew Brost
2024-04-09 20:17 ` [v2 18/31] drm/xe/svm: Build userptr sg table for device pages Oak Zeng
2024-04-10 21:52   ` Matthew Brost
2024-04-09 20:17 ` [v2 19/31] drm/xe/svm: Determine a vma is backed by device memory Oak Zeng
2024-04-10 21:56   ` Matthew Brost
2024-06-05  2:29     ` Zeng, Oak
2024-04-09 20:17 ` [v2 20/31] drm/xe: add xe lock document Oak Zeng
2024-04-09 20:17 ` [v2 21/31] drm/xe/svm: Introduce svm migration function Oak Zeng
2024-04-10 22:06   ` Matthew Brost
2024-04-09 20:17 ` [v2 22/31] drm/xe/svm: implement functions to allocate and free device memory Oak Zeng
2024-04-10 22:23   ` Matthew Brost
2024-04-15 20:13     ` Zeng, Oak
2024-04-15 21:19       ` Matthew Brost
2024-06-05 22:16     ` Zeng, Oak
2024-06-05 23:37       ` Matthew Brost
2024-06-06  3:30         ` Zeng, Oak
2024-06-06  4:44           ` Matthew Brost
2024-04-17 20:55   ` Matthew Brost
2024-04-09 20:17 ` [v2 23/31] drm/xe/svm: Trace buddy block allocation and free Oak Zeng
2024-04-09 20:17 ` [v2 24/31] drm/xe/svm: Create and destroy xe svm Oak Zeng
2024-04-10 22:25   ` Matthew Brost [this message]
2024-04-09 20:17 ` [v2 25/31] drm/xe/svm: Add vm to xe_svm process Oak Zeng
2024-04-09 20:17 ` [v2 26/31] drm/xe: Make function lookup_vma public Oak Zeng
2024-04-10 22:26   ` Matthew Brost
2024-04-09 20:17 ` [v2 27/31] drm/xe/svm: Handle CPU page fault Oak Zeng
2024-04-11  2:07   ` Matthew Brost
2024-04-12 17:24     ` Zeng, Oak
2024-04-12 18:10       ` Matthew Brost
2024-04-12 18:39         ` Zeng, Oak
2024-06-07  4:44         ` Zeng, Oak
2024-06-07  4:30     ` Zeng, Oak
2024-04-09 20:17 ` [v2 28/31] drm/xe/svm: Introduce helper to migrate vma to vram Oak Zeng
2024-04-11  2:49   ` Matthew Brost
2024-04-12 21:21     ` Zeng, Oak
2024-04-15 19:40       ` Matthew Brost
2024-06-07 17:12         ` Zeng, Oak
2024-06-07 17:56           ` Matthew Brost
2024-06-07 18:10             ` Matthew Brost
2024-04-09 20:17 ` [v2 29/31] drm/xe/svm: trace svm migration Oak Zeng
2024-04-09 20:17 ` [v2 30/31] drm/xe/svm: Add a helper to determine a vma is fault userptr Oak Zeng
2024-04-11  2:50   ` Matthew Brost
2024-04-09 20:17 ` [v2 31/31] drm/xe/svm: Migration from sram to vram for system allocator Oak Zeng
2024-04-11  2:55   ` Matthew Brost
2024-06-07 17:22     ` Zeng, Oak
2024-06-07 18:18       ` Matthew Brost
2024-06-07 18:23         ` Matthew Brost
2024-04-09 20:52 ` ✗ CI.Patch_applied: failure for Basic system allocator support in xe driver Patchwork
