Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Oak Zeng <oak.zeng@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH] drm/xe/debugfs: Display gpuva info in debugfs
Date: Thu, 19 Sep 2024 19:01:54 +0000	[thread overview]
Message-ID: <Zux1IqkPO2JgwVF9@DUT025-TGLU.fm.intel.com> (raw)
In-Reply-To: <20240918164545.3955824-1-oak.zeng@intel.com>

On Wed, Sep 18, 2024 at 12:45:45PM -0400, Oak Zeng wrote:
> Add a debugfs entry to display all the gpuvas bound to the
> device. This is done by walking all the VMs created by each device
> process and displaying the VA information of each VM. This is
> useful for gpuvm debugging. With this change, users can display
> gpuva info as below:
> 
> root@DUT7279PVC:/home/gta# cat /sys/kernel/debug/dri/0/gpuvas
> Process "your process name" GPU VA space
> DRM GPU VA space (Xe VM) [0x0000000000000000;0x0200000000000000]
> Kernel reserved node [0x0000000000000000;0x0000000000000000]
> 
>  VAs | start              | range              | end                | object             | object offset
> -------------------------------------------------------------------------------------------------------------
>      | 0x0000000000000000 | 0x00007ffff5db7000 | 0x00007ffff5db7000 | 0x0000000000000000 | 0x0000000000000000
>      | 0x00007ffff5db7000 | 0x0000000000001000 | 0x00007ffff5db8000 | 0x0000000000000000 | 0x00007ffff5db7000
>      | 0x00007ffff5db8000 | 0x00ff80000a248000 | 0x0100000000000000 | 0x0000000000000000 | 0x0000000000000000
> 

This looks useful. We may even want to extend wedged mode to switch the
VM to a state where this information is still available after a hang
that causes a wedge, as that could be useful too. Can be done in a
follow-up later.

> Signed-off-by: Oak Zeng <oak.zeng@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_debugfs.c | 44 +++++++++++++++++++++++++++++++++
>  1 file changed, 44 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_debugfs.c b/drivers/gpu/drm/xe/xe_debugfs.c
> index 1011e5d281fa..0c7bee1c2a8d 100644
> --- a/drivers/gpu/drm/xe/xe_debugfs.c
> +++ b/drivers/gpu/drm/xe/xe_debugfs.c
> @@ -9,6 +9,7 @@
>  #include <linux/string_helpers.h>
>  
>  #include <drm/drm_debugfs.h>
> +#include <drm/drm_drv.h>
>  
>  #include "xe_bo.h"
>  #include "xe_device.h"
> @@ -84,9 +85,52 @@ static int sriov_info(struct seq_file *m, void *data)
>  	return 0;
>  }
>  
> +static int show_vm_gpuvas(struct xe_vm *vm, struct seq_file *m)
> +{
> +	int ret;
> +
> +	mutex_lock(&vm->snap_mutex);
> +	ret = drm_debugfs_gpuva_info(m, &vm->gpuvm);
> +	mutex_unlock(&vm->snap_mutex);
> +
> +	return ret;
> +}
> +
> +static int show_each_vm(struct seq_file *m, void *arg)
> +{
> +	struct drm_info_node *node = (struct drm_info_node *)m->private;
> +	struct xe_device *xe = node_to_xe(node);
> +	struct drm_device *dev = &xe->drm;
> +	struct drm_file *file;
> +	struct xe_file *xef;
> +	int (*show)(struct xe_vm *, struct seq_file *) = node->info_ent->data;
> +	struct xe_vm *vm;
> +	unsigned long idx;
> +	int ret = 0;
> +
> +	mutex_lock(&dev->filelist_mutex);
> +	list_for_each_entry(file, &dev->filelist, lhead) {
> +		xef = (struct xe_file *)file->driver_priv;
> +		seq_printf(m, "Process %s GPU VA space\n", xef->process_name);
> +		mutex_lock(&xef->vm.lock);
> +		xa_for_each(&xef->vm.xa, idx, vm) {

Kinda a nit, but I've said this in a few other places - the xef->vm.lock
usage isn't right here. This lock protects the xarray lookup and taking
a reference to the VM, and nothing else. It is not designed for anything
more - e.g. holding the lock while calling into another function.

With that, this loop should look like this:

mutex_lock(&xef->vm.lock);
xa_for_each(&xef->vm.xa, idx, vm) {
	xe_vm_get(vm);		/* pin the VM before dropping the lock */
	mutex_unlock(&xef->vm.lock);

	/* Do something - no xef->vm.lock held here */

	mutex_lock(&xef->vm.lock);
	xe_vm_put(vm);
}
mutex_unlock(&xef->vm.lock);

This avoids unintentionally creating locking chains, in this case
xef->vm.lock -> snap_mutex. We should strive to create only
well-defined locking chains. Once we start unintentionally creating
locking chains, they are hard to unwind in a driver.

Likewise, I suspect we really shouldn't be holding dev->filelist_mutex
and doing work underneath it. Fixing that would likely require
converting &dev->filelist to an xarray or writing an iterator with a
hitch, though, so it is a bit of a larger change. Looking at the
existing usage of this lock, I'd say this is kinda abusing it too, and
I don't think anyone in the community would be all that happy about its
usage here. I'd lean towards properly fixing this, or alternatively
maybe just maintaining per-device storage of all the VMs.

Matt

> +			ret = show(vm, m);
> +			if (ret < 0)
> +				break;
> +
> +			seq_puts(m, "\n");
> +		}
> +		mutex_unlock(&xef->vm.lock);
> +	}
> +	mutex_unlock(&dev->filelist_mutex);
> +
> +	return ret;
> +}
> +
>  static const struct drm_info_list debugfs_list[] = {
>  	{"info", info, 0},
>  	{ .name = "sriov_info", .show = sriov_info, },
> +	DRM_DEBUGFS_GPUVA_INFO(show_each_vm, show_vm_gpuvas),
>  };
>  
>  static int forcewake_open(struct inode *inode, struct file *file)
> -- 
> 2.26.3
> 


Thread overview: 15+ messages
2024-09-18 16:45 [PATCH] drm/xe/debugfs: Display gpuva info in debugfs Oak Zeng
2024-09-18 19:12 ` ✓ CI.Patch_applied: success for " Patchwork
2024-09-18 19:13 ` ✗ CI.checkpatch: warning " Patchwork
2024-09-18 19:14 ` ✓ CI.KUnit: success " Patchwork
2024-09-18 19:26 ` ✓ CI.Build: " Patchwork
2024-09-18 19:28 ` ✓ CI.Hooks: " Patchwork
2024-09-18 19:30 ` ✓ CI.checksparse: " Patchwork
2024-09-18 19:58 ` ✓ CI.BAT: " Patchwork
2024-09-19  7:31 ` ✗ CI.FULL: failure " Patchwork
2024-09-19 19:01 ` Matthew Brost [this message]
2024-09-19 19:04   ` [PATCH] " Matthew Brost
2024-09-20 15:57   ` Zeng, Oak
2024-09-20 18:43     ` Matthew Brost
2024-09-20 20:03       ` Zeng, Oak
2024-09-20 20:45         ` Matthew Brost
