From: "Ville Syrjälä" <ville.syrjala@linux.intel.com>
To: "Gupta, Anshuman" <anshuman.gupta@intel.com>
Cc: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
"linux-acpi@vger.kernel.org" <linux-acpi@vger.kernel.org>,
"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
"rafael@kernel.org" <rafael@kernel.org>,
"lenb@kernel.org" <lenb@kernel.org>,
"bhelgaas@google.com" <bhelgaas@google.com>,
"ilpo.jarvinen@linux.intel.com" <ilpo.jarvinen@linux.intel.com>,
"De Marchi, Lucas" <lucas.demarchi@intel.com>,
"Vivi, Rodrigo" <rodrigo.vivi@intel.com>,
"Nilawar, Badal" <badal.nilawar@intel.com>,
"Nasim, Kam" <kam.nasim@intel.com>
Subject: Re: [RFC 5/6] drm/xe/pm: D3Cold target state
Date: Tue, 25 Feb 2025 20:44:49 +0200 [thread overview]
Message-ID: <Z74PoePChen4Bn8F@intel.com> (raw)
In-Reply-To: <CY5PR11MB62113ABBF2CDB9F621B1A92595C32@CY5PR11MB6211.namprd11.prod.outlook.com>
On Tue, Feb 25, 2025 at 06:00:04PM +0000, Gupta, Anshuman wrote:
>
>
> > -----Original Message-----
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > Sent: Tuesday, February 25, 2025 11:20 PM
> > To: Gupta, Anshuman <anshuman.gupta@intel.com>
> > Cc: intel-xe@lists.freedesktop.org; linux-acpi@vger.kernel.org; linux-
> > pci@vger.kernel.org; rafael@kernel.org; lenb@kernel.org;
> > bhelgaas@google.com; ilpo.jarvinen@linux.intel.com; De Marchi, Lucas
> > <lucas.demarchi@intel.com>; Vivi, Rodrigo <rodrigo.vivi@intel.com>; Nilawar,
> > Badal <badal.nilawar@intel.com>; Nasim, Kam <kam.nasim@intel.com>
> > Subject: Re: [RFC 5/6] drm/xe/pm: D3Cold target state
> >
> > On Mon, Feb 24, 2025 at 10:18:48PM +0530, Anshuman Gupta wrote:
> > > Trade off the D3Cold target state based upon current VRAM usage.
> > > If VRAM usage is greater than vram_d3cold_threshold and the GPU has
> > > a display connected,
> >
> > Why would anyone care about displays being connected or not?
> As per the specs, we have to enable VRSR only when there is a display connected.
What specs? And why do they say connected displays should be
a factor here?
I think the only thing that makes any sense for this kind of stuff
would be sysfs power/ knobs that allow the system administrator to
tune the behaviour for their specific use case. And if no such knobs
exist currently then perhaps they should be added.
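To make the suggestion concrete, such a knob mostly boils down to a store handler that parses and sanity-checks the value. Here is a rough sketch of that parse/clamp step in self-contained user-space C (the knob name vram_d3cold_threshold_mb is hypothetical; in the driver this would be the store half of a DEVICE_ATTR_RW pair using kstrtou32 rather than strtoul):

```c
#include <errno.h>
#include <stdlib.h>

/*
 * Sketch of the validation a sysfs store handler for a hypothetical
 * power/vram_d3cold_threshold_mb knob would perform. strtoul stands
 * in for the kernel's kstrtou32 so the sketch compiles standalone.
 * Returns 0 and fills *out_mb on success, -EINVAL on bad input.
 */
static int parse_threshold_mb(const char *buf, unsigned int vram_size_mb,
			      unsigned int *out_mb)
{
	char *end;
	unsigned long val;

	errno = 0;
	val = strtoul(buf, &end, 10);
	if (errno || end == buf)
		return -EINVAL;		/* not a number */
	if (val > vram_size_mb)
		return -EINVAL;		/* threshold beyond VRAM size */

	*out_mb = (unsigned int)val;
	return 0;
}
```

The show half would simply print the current threshold; the point is that policy like this ends up user-tunable instead of hard-coded in the idle path.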
> We can check that in probe, but a drm connector's status can change after probe completes. That is the reason to put the display-connected check in the idle callback.
> Thanks,
> Anshuman
> >
> > > then the target D3Cold state is D3Cold-VRSR; otherwise the target
> > > state is D3Cold-Off.
> > >
> > > Signed-off-by: Anshuman Gupta <anshuman.gupta@intel.com>
> > > ---
> > >  drivers/gpu/drm/xe/display/xe_display.c | 22 ++++++++++++++++++++++
> > >  drivers/gpu/drm/xe/display/xe_display.h |  1 +
> > >  drivers/gpu/drm/xe/xe_pm.c              | 12 ++++++++++++
> > > 3 files changed, 35 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/xe/display/xe_display.c b/drivers/gpu/drm/xe/display/xe_display.c
> > > index 02a413a07382..140a43d6b1b6 100644
> > > --- a/drivers/gpu/drm/xe/display/xe_display.c
> > > +++ b/drivers/gpu/drm/xe/display/xe_display.c
> > > @@ -548,3 +548,25 @@ int xe_display_probe(struct xe_device *xe)
> > > unset_display_features(xe);
> > > return 0;
> > > }
> > > +
> > > +bool xe_display_connected(struct xe_device *xe)
> > > +{
> > > +	struct drm_connector *list_connector;
> > > +	struct drm_connector_list_iter iter;
> > > +	bool ret = false;
> > > +
> > > +	mutex_lock(&xe->drm.mode_config.mutex);
> > > +	drm_connector_list_iter_begin(&xe->drm, &iter);
> > > +
> > > +	drm_for_each_connector_iter(list_connector, &iter) {
> > > +		if (list_connector->status == connector_status_connected) {
> > > +			ret = true;
> > > +			break;
> > > +		}
> > > +	}
> > > +
> > > +	drm_connector_list_iter_end(&iter);
> > > +	mutex_unlock(&xe->drm.mode_config.mutex);
> > > +
> > > +	return ret;
> > > +}
> > > diff --git a/drivers/gpu/drm/xe/display/xe_display.h b/drivers/gpu/drm/xe/display/xe_display.h
> > > index 685dc74402fb..c6bc54323084 100644
> > > --- a/drivers/gpu/drm/xe/display/xe_display.h
> > > +++ b/drivers/gpu/drm/xe/display/xe_display.h
> > > @@ -40,6 +40,7 @@ void xe_display_pm_resume(struct xe_device *xe);
> > >  void xe_display_pm_runtime_suspend(struct xe_device *xe);
> > >  void xe_display_pm_runtime_suspend_late(struct xe_device *xe);
> > >  void xe_display_pm_runtime_resume(struct xe_device *xe);
> > > +bool xe_display_connected(struct xe_device *xe);
> > >
> > > #else
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
> > > index 81e67b5693dc..6d28aedcb062 100644
> > > --- a/drivers/gpu/drm/xe/xe_pm.c
> > > +++ b/drivers/gpu/drm/xe/xe_pm.c
> > > @@ -198,6 +198,14 @@ static void xe_rpm_lockmap_release(const struct xe_device *xe)
> > > 			  &xe_pm_runtime_d3cold_map);
> > >  }
> > >
> > > +static void xe_pm_suspend_prepare(struct xe_device *xe)
> > > +{
> > > +	if (pm_suspend_target_state == PM_SUSPEND_TO_IDLE)
> > > +		xe_pm_d3cold_allowed_toggle(xe);
> > > +	else
> > > +		xe->d3cold.allowed = XE_D3COLD_OFF;
> > > +}
> > > +
> > > /**
> > > * xe_pm_suspend - Helper for System suspend, i.e. S0->S3 / S0->S2idle
> > > * @xe: xe device instance
> > > @@ -213,6 +221,8 @@ int xe_pm_suspend(struct xe_device *xe)
> > > drm_dbg(&xe->drm, "Suspending device\n");
> > > trace_xe_pm_suspend(xe, __builtin_return_address(0));
> > >
> > > + xe_pm_suspend_prepare(xe);
> > > +
> > > err = xe_pxp_pm_suspend(xe->pxp);
> > > if (err)
> > > goto err;
> > > @@ -875,6 +885,8 @@ void xe_pm_d3cold_allowed_toggle(struct xe_device *xe)
> > >
> > > 	if (total_vram_used_mb < xe->d3cold.vram_threshold)
> > > 		xe->d3cold.allowed = XE_D3COLD_OFF;
> > > +	else if (xe->d3cold.vrsr_capable && xe_display_connected(xe))
> > > +		xe->d3cold.allowed = XE_D3COLD_VRSR;
> > > 	else
> > > 		xe->d3cold.allowed = XE_D3HOT;
> > >
> > > --
> > > 2.34.1
> >
> > --
> > Ville Syrjälä
> > Intel
--
Ville Syrjälä
Intel
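For readers following the thread, the target-state selection the patch arrives at condenses to a small decision table. A self-contained sketch of that logic (enum names mirror the series; the helper name and the MB units are illustrative):

```c
#include <stdbool.h>

/* Mirrors the XE_D3COLD_* enum introduced earlier in the series. */
enum xe_d3cold_state {
	XE_D3HOT,	/* stay in D3hot, VRAM keeps power */
	XE_D3COLD_OFF,	/* full D3cold, VRAM contents evicted/lost */
	XE_D3COLD_VRSR,	/* D3cold with VRAM self refresh */
};

/*
 * Condensed form of xe_pm_d3cold_allowed_toggle() as modified by the
 * patch: below the VRAM-usage threshold a full D3cold power-off is
 * cheap (little to evict); above it, VRSR is chosen when the hardware
 * supports it and a display is connected; otherwise the device falls
 * back to D3hot to preserve VRAM.
 */
static enum xe_d3cold_state
d3cold_target_state(unsigned int vram_used_mb, unsigned int threshold_mb,
		    bool vrsr_capable, bool display_connected)
{
	if (vram_used_mb < threshold_mb)
		return XE_D3COLD_OFF;
	else if (vrsr_capable && display_connected)
		return XE_D3COLD_VRSR;
	else
		return XE_D3HOT;
}
```

The display-connected condition in the middle branch is exactly the part being questioned above: whether policy like that belongs hard-coded in the driver or behind an administrator-tunable knob.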
Thread overview: 20+ messages
2025-02-24 16:48 [RFC 0/6] VRAM Self Refresh Anshuman Gupta
2025-02-24 16:48 ` [RFC 1/6] PCI/ACPI: Implement PCI FW _DSM method Anshuman Gupta
2025-02-24 19:40 ` Bjorn Helgaas
2025-02-25 18:25 ` Gupta, Anshuman
2025-02-25 20:30 ` Bjorn Helgaas
2025-02-24 16:48 ` [RFC 2/6] drm/xe/vrsr: Detect vrsr capability Anshuman Gupta
2025-03-07 21:50 ` Rodrigo Vivi
2025-02-24 16:48 ` [RFC 3/6] drm/xe/vrsr: Apis to init and enable VRSR feature Anshuman Gupta
2025-02-24 19:43 ` Bjorn Helgaas
2025-03-10 17:23 ` Rodrigo Vivi
2025-02-24 16:48 ` [RFC 4/6] drm/xe/vrsr: Refactor d3cold.allowed to a enum Anshuman Gupta
2025-03-10 17:28 ` Rodrigo Vivi
2025-04-01 5:24 ` Poosa, Karthik
2025-02-24 16:48 ` [RFC 5/6] drm/xe/pm: D3Cold target state Anshuman Gupta
2025-02-24 19:45 ` Bjorn Helgaas
2025-02-25 17:49 ` Ville Syrjälä
2025-02-25 18:00 ` Gupta, Anshuman
2025-02-25 18:44 ` Ville Syrjälä [this message]
2025-02-24 16:48 ` [RFC 6/6] drm/xe/vrsr: Enable VRSR Anshuman Gupta
2025-04-01 5:19 ` [RFC,6/6] " Poosa, Karthik