From: Anshuman Gupta <anshuman.gupta@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: sujaritha.sundaresan@intel.com, rodrigo.vivi@intel.com
Subject: [Intel-xe] [PATCH v6 4/5] xe/drm/pm: Toggle d3cold_allowed using vram_usages
Date: Tue, 18 Jul 2023 13:37:02 +0530 [thread overview]
Message-ID: <20230718080703.239343-5-anshuman.gupta@intel.com> (raw)
In-Reply-To: <20230718080703.239343-1-anshuman.gupta@intel.com>
Add support for controlling d3cold using the vram_usages metric from the
ttm resource manager.
When the root port is d3cold capable but xe has disallowed d3cold
because vram_usages is above vram_d3cold_threshold, d3cold must be
explicitly disabled on the root port to avoid resume failures; otherwise
the root port can still transition to d3cold once all PCIe endpoints and
{upstream, virtual} switch ports have transitioned to d3hot.
Also clean up the TODO code comment.
v2:
- Modify d3cold.allowed in xe_pm_d3cold_allowed_toggle. [Riana]
- Cond changed (total_vram_used_mb < xe->d3cold.vram_threshold)
according to doc comment.
v3:
- Added enum instead of true/false argument in
d3cold_toggle(). [Rodrigo]
- Removed TODO comment. [Rodrigo]
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Anshuman Gupta <anshuman.gupta@intel.com>
Reviewed-by: Badal Nilawar <badal.nilawar@intel.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
drivers/gpu/drm/xe/xe_pci.c | 36 +++++++++++++++++++++++++++++++++---
drivers/gpu/drm/xe/xe_pm.c | 25 +++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_pm.h | 1 +
3 files changed, 59 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
index 32a676d27fa7..9b78dbe32742 100644
--- a/drivers/gpu/drm/xe/xe_pci.c
+++ b/drivers/gpu/drm/xe/xe_pci.c
@@ -26,6 +26,11 @@
#include "xe_pm.h"
#include "xe_step.h"
+enum toggle_d3cold {
+ D3COLD_DISABLE,
+ D3COLD_ENABLE,
+};
+
struct xe_subplatform_desc {
enum xe_subplatform subplatform;
const char *name;
@@ -754,6 +759,28 @@ static int xe_pci_resume(struct device *dev)
return 0;
}
+static void d3cold_toggle(struct pci_dev *pdev, enum toggle_d3cold toggle)
+{
+ struct xe_device *xe = pdev_to_xe_device(pdev);
+ struct pci_dev *root_pdev;
+
+ if (!xe->d3cold.capable)
+ return;
+
+ root_pdev = pcie_find_root_port(pdev);
+ if (!root_pdev)
+ return;
+
+ switch (toggle) {
+ case D3COLD_DISABLE:
+ pci_d3cold_disable(root_pdev);
+ break;
+ case D3COLD_ENABLE:
+ pci_d3cold_enable(root_pdev);
+ break;
+ }
+}
+
static int xe_pci_runtime_suspend(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
@@ -771,6 +798,7 @@ static int xe_pci_runtime_suspend(struct device *dev)
pci_ignore_hotplug(pdev);
pci_set_power_state(pdev, PCI_D3cold);
} else {
+ d3cold_toggle(pdev, D3COLD_DISABLE);
pci_set_power_state(pdev, PCI_D3hot);
}
@@ -795,6 +823,8 @@ static int xe_pci_runtime_resume(struct device *dev)
return err;
pci_set_master(pdev);
+ } else {
+ d3cold_toggle(pdev, D3COLD_ENABLE);
}
return xe_pm_runtime_resume(xe);
@@ -808,15 +838,15 @@ static int xe_pci_runtime_idle(struct device *dev)
if (!xe->d3cold.capable) {
xe->d3cold.allowed = false;
} else {
+ xe_pm_d3cold_allowed_toggle(xe);
+
/*
* TODO: d3cold should be allowed (true) if
* (IS_DGFX(xe) && !xe_device_mem_access_ongoing(xe))
* but maybe include some other conditions. So, before
* we can re-enable the D3cold, we need to:
* 1. rewrite the VRAM save / restore to avoid buffer object locks
- * 2. block D3cold if we have a big amount of device memory in use
- * in order to reduce the latency.
- * 3. at resume, detect if we really lost power and avoid memory
+ * 2. at resume, detect if we really lost power and avoid memory
* restoration if we were only up to d3cold
*/
xe->d3cold.allowed = false;
diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
index caf19b7f1824..5bb7d597e97d 100644
--- a/drivers/gpu/drm/xe/xe_pm.c
+++ b/drivers/gpu/drm/xe/xe_pm.c
@@ -291,3 +291,28 @@ int xe_pm_set_vram_threshold(struct xe_device *xe, u32 threshold)
return 0;
}
+
+void xe_pm_d3cold_allowed_toggle(struct xe_device *xe)
+{
+ struct ttm_resource_manager *man;
+ u32 total_vram_used_mb = 0;
+ u64 vram_used;
+ int i;
+
+ for (i = XE_PL_VRAM0; i <= XE_PL_VRAM1; ++i) {
+ man = ttm_manager_type(&xe->ttm, i);
+ if (man) {
+ vram_used = ttm_resource_manager_usage(man);
+ total_vram_used_mb += DIV_ROUND_UP_ULL(vram_used, 1024 * 1024);
+ }
+ }
+
+ mutex_lock(&xe->d3cold.lock);
+
+ if (total_vram_used_mb < xe->d3cold.vram_threshold)
+ xe->d3cold.allowed = true;
+ else
+ xe->d3cold.allowed = false;
+
+ mutex_unlock(&xe->d3cold.lock);
+}
diff --git a/drivers/gpu/drm/xe/xe_pm.h b/drivers/gpu/drm/xe/xe_pm.h
index bbd91a5855cd..ee30cf025f64 100644
--- a/drivers/gpu/drm/xe/xe_pm.h
+++ b/drivers/gpu/drm/xe/xe_pm.h
@@ -25,5 +25,6 @@ bool xe_pm_runtime_resume_if_suspended(struct xe_device *xe);
int xe_pm_runtime_get_if_active(struct xe_device *xe);
void xe_pm_assert_unbounded_bridge(struct xe_device *xe);
int xe_pm_set_vram_threshold(struct xe_device *xe, u32 threshold);
+void xe_pm_d3cold_allowed_toggle(struct xe_device *xe);
#endif
--
2.38.0
Thread overview: 13+ messages
2023-07-18 8:06 [Intel-xe] [PATCH v6 0/5] D3Cold Policy Anshuman Gupta
2023-07-18 8:06 ` [Intel-xe] [PATCH v6 1/5] drm/xe/pm: Add pci d3cold_capable support Anshuman Gupta
2023-07-18 8:07 ` [Intel-xe] [PATCH v6 2/5] drm/xe/pm: Refactor xe_pm_runtime_init Anshuman Gupta
2023-07-18 8:07 ` [Intel-xe] [PATCH v6 3/5] drm/xe/pm: Add vram_d3cold_threshold Sysfs Anshuman Gupta
2023-07-18 8:07 ` Anshuman Gupta [this message]
2023-07-18 8:07 ` [Intel-xe] [PATCH v6 5/5] drm/xe/pm: Init pcode and restore vram on power lost Anshuman Gupta
2023-07-18 8:50 ` [Intel-xe] ✓ CI.Patch_applied: success for D3Cold Policy (rev8) Patchwork
2023-07-18 8:50 ` [Intel-xe] ✗ CI.checkpatch: warning " Patchwork
2023-07-18 8:51 ` [Intel-xe] ✓ CI.KUnit: success " Patchwork
2023-07-18 8:55 ` [Intel-xe] ✓ CI.Build: " Patchwork
2023-07-18 8:56 ` [Intel-xe] ✓ CI.Hooks: " Patchwork
2023-07-18 8:57 ` [Intel-xe] ✓ CI.checksparse: " Patchwork
2023-07-18 9:31 ` [Intel-xe] ○ CI.BAT: info " Patchwork