Intel-XE Archive on lore.kernel.org
From: Matthew Auld <matthew.auld@intel.com>
To: Rodrigo Vivi <rodrigo.vivi@intel.com>, intel-xe@lists.freedesktop.org
Subject: Re: [RFC 10/20] drm/xe: Sort some xe_pm_runtime related functions
Date: Tue, 9 Jan 2024 11:26:22 +0000	[thread overview]
Message-ID: <8df5a487-54a1-48a7-90db-8520d591a583@intel.com> (raw)
In-Reply-To: <20231228021232.2366249-11-rodrigo.vivi@intel.com>

On 28/12/2023 02:12, Rodrigo Vivi wrote:
> No functional change. Just organizing the file a bit better
> 
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
>   drivers/gpu/drm/xe/xe_pm.c | 42 +++++++++++++++++++-------------------
>   drivers/gpu/drm/xe/xe_pm.h |  4 ++--
>   2 files changed, 23 insertions(+), 23 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
> index f599707413f18..3594e707606ce 100644
> --- a/drivers/gpu/drm/xe/xe_pm.c
> +++ b/drivers/gpu/drm/xe/xe_pm.c
> @@ -387,6 +387,23 @@ int xe_pm_runtime_resume(struct xe_device *xe)
>   	return err;
>   }
>   
> +/**
> + * xe_pm_runtime_resume_and_get - Resume, then get a runtime_pm ref if awake.
> + * @xe: xe device instance
> + *
> + * Returns: True if device is awake and the the reference was taken, false otherwise.
> + */
> +bool xe_pm_runtime_resume_and_get(struct xe_device *xe)
> +{
> +	if (xe_pm_read_callback_task(xe) == current) {
> +		/* The device is awake, grab the ref and move on */
> +                pm_runtime_get_noresume(xe->drm.dev);
> +		return true;
> +	}
> +
> +        return pm_runtime_resume_and_get(xe->drm.dev) >= 0;

Nit: Indentation looks off here — these lines are indented with spaces instead of tabs.

> +}
> +
>   /**
>    * xe_pm_runtime_get - Get a runtime_pm reference and resume synchronously
>    * @xe: xe device instance
> @@ -401,16 +418,6 @@ void xe_pm_runtime_get(struct xe_device *xe)
>   	pm_runtime_resume(xe->drm.dev);
>   }
>   
> -/**
> - * xe_pm_runtime_put - Put the runtime_pm reference back and mark as idle
> - * @xe: xe device instance
> - */
> -void xe_pm_runtime_put(struct xe_device *xe)
> -{
> -	pm_runtime_mark_last_busy(xe->drm.dev);
> -	pm_runtime_put(xe->drm.dev);
> -}
> -
>   /**
>    * xe_pm_runtime_get_sync - Get a runtime_pm reference and resume synchronously
>    * @xe: xe device instance
> @@ -456,20 +463,13 @@ bool xe_pm_runtime_get_if_in_use(struct xe_device *xe)
>   }
>   
>   /**
> - * xe_pm_runtime_resume_and_get - Resume, then get a runtime_pm ref if awake.
> + * xe_pm_runtime_put - Put the runtime_pm reference back and mark as idle
>    * @xe: xe device instance
> - *
> - * Returns: True if device is awake and the the reference was taken, false otherwise.
>    */
> -bool xe_pm_runtime_resume_and_get(struct xe_device *xe)
> +void xe_pm_runtime_put(struct xe_device *xe)
>   {
> -	if (xe_pm_read_callback_task(xe) == current) {
> -		/* The device is awake, grab the ref and move on */
> -                pm_runtime_get_noresume(xe->drm.dev);
> -		return true;
> -	}
> -
> -        return pm_runtime_resume_and_get(xe->drm.dev) >= 0;
> +	pm_runtime_mark_last_busy(xe->drm.dev);
> +	pm_runtime_put(xe->drm.dev);
>   }
>   
>   /**
> diff --git a/drivers/gpu/drm/xe/xe_pm.h b/drivers/gpu/drm/xe/xe_pm.h
> index d0e6011a80688..fc82a1466453b 100644
> --- a/drivers/gpu/drm/xe/xe_pm.h
> +++ b/drivers/gpu/drm/xe/xe_pm.h
> @@ -25,12 +25,12 @@ void xe_pm_runtime_fini(struct xe_device *xe);
>   bool xe_pm_runtime_suspended(struct xe_device *xe);
>   int xe_pm_runtime_suspend(struct xe_device *xe);
>   int xe_pm_runtime_resume(struct xe_device *xe);
> +bool xe_pm_runtime_resume_and_get(struct xe_device *xe);
>   void xe_pm_runtime_get(struct xe_device *xe);
>   int xe_pm_runtime_get_sync(struct xe_device *xe);
> -void xe_pm_runtime_put(struct xe_device *xe);
>   int xe_pm_runtime_get_if_active(struct xe_device *xe);
>   bool xe_pm_runtime_get_if_in_use(struct xe_device *xe);
> -bool xe_pm_runtime_resume_and_get(struct xe_device *xe);
> +void xe_pm_runtime_put(struct xe_device *xe);
>   void xe_pm_assert_unbounded_bridge(struct xe_device *xe);
>   int xe_pm_set_vram_threshold(struct xe_device *xe, u32 threshold);
>   void xe_pm_d3cold_allowed_toggle(struct xe_device *xe);


Thread overview: 46+ messages
2023-12-28  2:12 [RFC 00/20] First attempt to kill mem_access Rodrigo Vivi
2023-12-28  2:12 ` [RFC 01/20] drm/xe: Document Xe PM component Rodrigo Vivi
2023-12-28  2:12 ` [RFC 02/20] drm/xe: Fix display runtime_pm handling Rodrigo Vivi
2023-12-28  2:12 ` [RFC 03/20] drm/xe: Create a xe_pm_runtime_resume_and_get variant for display Rodrigo Vivi
2023-12-28  2:12 ` [RFC 04/20] drm/xe: Convert xe_pm_runtime_{get, put} to void and protect from recursion Rodrigo Vivi
2023-12-28  2:12 ` [RFC 05/20] drm/xe: Prepare display for D3Cold Rodrigo Vivi
2023-12-28  2:12 ` [RFC 06/20] drm/xe: Convert mem_access assertion towards the runtime_pm state Rodrigo Vivi
2024-01-09 11:06   ` Matthew Auld
2024-01-09 17:50     ` Rodrigo Vivi
2023-12-28  2:12 ` [RFC 07/20] drm/xe: Runtime PM wake on every IOCTL Rodrigo Vivi
2024-01-02 11:30   ` Gupta, Anshuman
2024-01-09 17:57     ` Rodrigo Vivi
2023-12-28  2:12 ` [RFC 08/20] drm/xe: Runtime PM wake on every exec Rodrigo Vivi
2024-01-09 11:24   ` Matthew Auld
2024-01-09 17:41     ` Rodrigo Vivi
2024-01-09 18:40       ` Matthew Auld
2023-12-28  2:12 ` [RFC 09/20] drm/xe: Runtime PM wake on every sysfs call Rodrigo Vivi
2023-12-28  2:12 ` [RFC 10/20] drm/xe: Sort some xe_pm_runtime related functions Rodrigo Vivi
2024-01-09 11:26   ` Matthew Auld [this message]
2023-12-28  2:12 ` [RFC 11/20] drm/xe: Ensure device is awake before removing it Rodrigo Vivi
2023-12-28  2:12 ` [RFC 12/20] drm/xe: Remove mem_access from guc_pc calls Rodrigo Vivi
2023-12-28  2:12 ` [RFC 13/20] drm/xe: Runtime PM wake on every debugfs call Rodrigo Vivi
2023-12-28  2:12 ` [RFC 14/20] drm/xe: Replace dma_buf mem_access per direct xe_pm_runtime calls Rodrigo Vivi
2023-12-28  2:12 ` [RFC 15/20] drm/xe: Allow GuC CT fast path and worker regardless of runtime_pm Rodrigo Vivi
2024-01-09 12:09   ` Matthew Auld
2023-12-28  2:12 ` [RFC 16/20] drm/xe: Remove mem_access calls from migration Rodrigo Vivi
2024-01-09 12:33   ` Matthew Auld
2024-01-09 17:58     ` Rodrigo Vivi
2024-01-09 18:49       ` Matthew Auld
2024-01-09 22:40         ` Rodrigo Vivi
2024-01-11 14:17           ` Matthew Brost
2023-12-28  2:12 ` [RFC 17/20] drm/xe: Removing extra mem_access protection from runtime pm Rodrigo Vivi
2023-12-28  2:12 ` [RFC 18/20] drm/xe: Convert hwmon from mem_access to xe_pm_runtime calls Rodrigo Vivi
2023-12-28  2:12 ` [RFC 19/20] drm/xe: Remove unused runtime pm helper Rodrigo Vivi
2023-12-28  2:12 ` [RFC 20/20] drm/xe: Mega Kill of mem_access Rodrigo Vivi
2024-01-09 11:41   ` Matthew Auld
2024-01-09 17:39     ` Rodrigo Vivi
2024-01-09 18:27       ` Matthew Auld
2024-01-09 22:34         ` Rodrigo Vivi
2024-01-04  5:40 ` ✓ CI.Patch_applied: success for First attempt to kill mem_access Patchwork
2024-01-04  5:40 ` ✗ CI.checkpatch: warning " Patchwork
2024-01-04  5:41 ` ✗ CI.KUnit: failure " Patchwork
2024-01-10  5:21 ` [RFC 00/20] " Matthew Brost
2024-01-10 14:06   ` Rodrigo Vivi
2024-01-10 14:08     ` Vivi, Rodrigo
2024-01-10 14:33     ` Matthew Brost
