Intel-XE Archive on lore.kernel.org
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: "Gupta, Anshuman" <anshuman.gupta@intel.com>
Cc: "Deak, Imre" <imre.deak@intel.com>,
	"intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>
Subject: Re: [RFC 07/20] drm/xe: Runtime PM wake on every IOCTL
Date: Tue, 9 Jan 2024 12:57:22 -0500
Message-ID: <ZZ2JArmBrwxecIa_@intel.com>
In-Reply-To: <CY5PR11MB6211DFEFAD251EE8A643922E9561A@CY5PR11MB6211.namprd11.prod.outlook.com>

On Tue, Jan 02, 2024 at 06:30:31AM -0500, Gupta, Anshuman wrote:
> 
> 
> > -----Original Message-----
> > From: Intel-xe <intel-xe-bounces@lists.freedesktop.org> On Behalf Of Rodrigo
> > Vivi
> > Sent: Thursday, December 28, 2023 7:42 AM
> > To: intel-xe@lists.freedesktop.org
> > Cc: Vivi, Rodrigo <rodrigo.vivi@intel.com>
> > Subject: [RFC 07/20] drm/xe: Runtime PM wake on every IOCTL
> > 
> > Let's ensure our PCI device is awake on every IOCTL entry.
> > Let's increase the runtime_pm protection and start moving it to the
> > outer bounds.
> IMO we need to decouple DC9 from runtime suspend, since the previous patch "[RFC,05/20] drm/xe: Prepare display for D3Cold"
> added that coupling. Let DC9 be enabled whenever all displays are off. Otherwise, blocking runtime PM on every IOCTL will
> also block DC9 unnecessarily.

Good catch. We need to decouple that somehow. I will take a look into
that later.
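
Roughly, I imagine the decoupling looking something like the sketch
below, where the DC9 decision comes purely from the display state
rather than from the runtime PM usage count. Just a sketch to capture
the idea, and xe_display_all_off() is a hypothetical helper here, not
an existing API:

static bool xe_display_can_enter_dc9(struct xe_device *xe)
{
	/* Decide DC9 from display state only, so a runtime PM wakeref
	 * held across an IOCTL no longer blocks DC9 entry.
	 * xe_display_all_off() is an assumed helper, not existing code. */
	return xe_display_all_off(xe);
}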

> Thanks,
> Anshuman Gupta.
> > 
> > Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_device.c | 32 ++++++++++++++++++++++++++++++--
> >  drivers/gpu/drm/xe/xe_pm.c     | 15 +++++++++++++++
> >  drivers/gpu/drm/xe/xe_pm.h     |  1 +
> >  3 files changed, 46 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> > index dc3721bb37b1e..ee9b6612eec43 100644
> > --- a/drivers/gpu/drm/xe/xe_device.c
> > +++ b/drivers/gpu/drm/xe/xe_device.c
> > @@ -140,15 +140,43 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
> >  			  DRM_RENDER_ALLOW),
> >  };
> > 
> > +static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
> > +{
> > +	struct drm_file *file_priv = file->private_data;
> > +	struct xe_device *xe = to_xe_device(file_priv->minor->dev);
> > +	long ret;
> > +
> > +	ret = xe_pm_runtime_get_sync(xe);
> > +	if (ret >= 0)
> > +		ret = drm_ioctl(file, cmd, arg);
> > +	xe_pm_runtime_put(xe);
> > +
> > +	return ret;
> > +}
> > +
> > +static long xe_drm_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
> > +{
> > +	struct drm_file *file_priv = file->private_data;
> > +	struct xe_device *xe = to_xe_device(file_priv->minor->dev);
> > +	long ret;
> > +
> > +	ret = xe_pm_runtime_get_sync(xe);
> > +	if (ret >= 0)
> > +		ret = drm_compat_ioctl(file, cmd, arg);
> > +	xe_pm_runtime_put(xe);
> > +
> > +	return ret;
> > +}
> > +
> >  static const struct file_operations xe_driver_fops = {
> >  	.owner = THIS_MODULE,
> >  	.open = drm_open,
> >  	.release = drm_release_noglobal,
> > -	.unlocked_ioctl = drm_ioctl,
> > +	.unlocked_ioctl = xe_drm_ioctl,
> >  	.mmap = drm_gem_mmap,
> >  	.poll = drm_poll,
> >  	.read = drm_read,
> > -	.compat_ioctl = drm_compat_ioctl,
> > +	.compat_ioctl = xe_drm_compat_ioctl,
> >  	.llseek = noop_llseek,
> >  #ifdef CONFIG_PROC_FS
> >  	.show_fdinfo = drm_show_fdinfo,
> > diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
> > index 45114e4e76a5a..f599707413f18 100644
> > --- a/drivers/gpu/drm/xe/xe_pm.c
> > +++ b/drivers/gpu/drm/xe/xe_pm.c
> > @@ -411,6 +411,21 @@ void xe_pm_runtime_put(struct xe_device *xe)
> >  	pm_runtime_put(xe->drm.dev);
> >  }
> > 
> > +/**
> > + * xe_pm_runtime_get_sync - Get a runtime_pm reference and resume synchronously
> > + * @xe: xe device instance
> > + *
> > + * Returns: Any number greater than or equal to 0 for success, negative error
> > + * code otherwise.
> > + */
> > +int xe_pm_runtime_get_sync(struct xe_device *xe)
> > +{
> > +	if (WARN_ON(xe_pm_read_callback_task(xe) == current))
> > +		return -ELOOP;
> > +
> > +	return pm_runtime_get_sync(xe->drm.dev);
> > +}
> > +
> >  /**
> >   * xe_pm_runtime_get_if_active - Get a runtime_pm reference if device active
> >   * @xe: xe device instance
> > diff --git a/drivers/gpu/drm/xe/xe_pm.h b/drivers/gpu/drm/xe/xe_pm.h
> > index 67a9bf3dd379b..d0e6011a80688 100644
> > --- a/drivers/gpu/drm/xe/xe_pm.h
> > +++ b/drivers/gpu/drm/xe/xe_pm.h
> > @@ -26,6 +26,7 @@ bool xe_pm_runtime_suspended(struct xe_device *xe);
> >  int xe_pm_runtime_suspend(struct xe_device *xe);
> >  int xe_pm_runtime_resume(struct xe_device *xe);
> >  void xe_pm_runtime_get(struct xe_device *xe);
> > +int xe_pm_runtime_get_sync(struct xe_device *xe);
> >  void xe_pm_runtime_put(struct xe_device *xe);
> >  int xe_pm_runtime_get_if_active(struct xe_device *xe);
> >  bool xe_pm_runtime_get_if_in_use(struct xe_device *xe);
> > --
> > 2.43.0
> 

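As an aside, a minimal userspace sketch of what this change means in
practice: with xe_drm_ioctl() in place, any ioctl on the Xe DRM node,
even a trivial DRM_IOCTL_VERSION query, wakes the device through
runtime PM before the handler runs. The device node path and the build
line are assumptions, and this is an illustration rather than anything
from the patch itself:

/* Build (assumption): cc demo.c $(pkg-config --cflags libdrm) */
#include <drm.h>          /* DRM_IOCTL_VERSION, struct drm_version */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	/* Render node path is an assumption; it varies per system. */
	int fd = open("/dev/dri/renderD128", O_RDWR);
	struct drm_version ver;

	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&ver, 0, sizeof(ver));
	/* With this patch applied, the ioctl enters xe_drm_ioctl(), which
	 * takes a synchronous runtime PM reference (resuming the PCI device
	 * if it was runtime-suspended) before dispatching to drm_ioctl(). */
	if (ioctl(fd, DRM_IOCTL_VERSION, &ver) == 0)
		printf("drm version %d.%d.%d\n", ver.version_major,
		       ver.version_minor, ver.version_patchlevel);

	close(fd);
	return 0;
}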