From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: Matthew Auld <matthew.auld@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [RFC 11/34] drm/xe: Runtime PM wake on every sysfs call
Date: Wed, 14 Feb 2024 13:48:07 -0500
Message-ID: <Zc0K55byC9pAmJZ3@intel.com>
In-Reply-To: <b5c42b69-cbc1-4c1b-959f-202847456636@intel.com>
On Mon, Feb 05, 2024 at 10:55:42AM +0000, Matthew Auld wrote:
> On 26/01/2024 20:30, Rodrigo Vivi wrote:
> > Let's ensure our PCI device is awake on every sysfs call.
> > Let's increase the runtime_pm protection and start moving
> > that to the outer bounds.
> >
> > For now, for the files with a small number of attr functions,
> > let's simply call the runtime pm functions directly.
> > For the hw_engines entries with many files, let's add
> > the sysfs_ops wrapper.
> >
> > Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > ---
> > drivers/gpu/drm/xe/xe_device_sysfs.c | 4 ++
> > drivers/gpu/drm/xe/xe_gt_freq.c | 38 +++++++++++-
> > drivers/gpu/drm/xe/xe_gt_idle.c | 23 +++++++-
> > drivers/gpu/drm/xe/xe_gt_throttle_sysfs.c | 3 +
> > drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c | 58 ++++++++++++++++++-
> > drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.h | 7 +++
> > drivers/gpu/drm/xe/xe_tile_sysfs.c | 1 +
> > 7 files changed, 129 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_device_sysfs.c b/drivers/gpu/drm/xe/xe_device_sysfs.c
> > index 99113a5a2b84..e47c8ad1bb17 100644
> > --- a/drivers/gpu/drm/xe/xe_device_sysfs.c
> > +++ b/drivers/gpu/drm/xe/xe_device_sysfs.c
> > @@ -35,7 +35,9 @@ vram_d3cold_threshold_show(struct device *dev,
> > if (!xe)
> > return -EINVAL;
> > + xe_pm_runtime_get(xe);
>
> Does it make sense to use the get_sync/ioctl version?
I renamed the other to _ioctl as you suggested...
>
> Assuming CI is happy,
In the end it is a get_sync, because it will call resume directly,
but without any error handling, and it is prepared to be called from
anywhere, even from inner places that might be on the resume path.
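Just to illustrate the difference I have in mind, here is a rough
sketch of the two flavors (simplified, not the exact code from this
series; the recursion guard via xe_pm_read_callback_task() is only an
assumption of how the inner-path protection looks):

#include <linux/pm_runtime.h>
#include "xe_device.h"
#include "xe_pm.h"

/* "_ioctl" flavor: propagates the resume error back to the caller */
static int example_pm_runtime_get_ioctl(struct xe_device *xe)
{
	int ref = pm_runtime_get_sync(xe->drm.dev);

	if (ref < 0) {
		pm_runtime_put(xe->drm.dev);
		return ref;
	}

	return 0;
}

/*
 * plain flavor: also resumes synchronously, but returns void and is
 * safe to call from inner paths that might already be inside resume
 */
static void example_pm_runtime_get(struct xe_device *xe)
{
	if (xe_pm_read_callback_task(xe) == current) {
		/* already inside a PM callback: only take the reference */
		pm_runtime_get_noresume(xe->drm.dev);
		return;
	}

	pm_runtime_get_sync(xe->drm.dev);
}

So the sysfs paths can simply call the void flavor and not worry about
error handling, while the ioctl paths get the error propagation.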
> Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Thanks,
perhaps we can come back later and revisit the error check and
the names.
>
> > ret = sysfs_emit(buf, "%d\n", xe->d3cold.vram_threshold);
> > + xe_pm_runtime_put(xe);
> > return ret;
> > }
> > @@ -58,7 +60,9 @@ vram_d3cold_threshold_store(struct device *dev, struct device_attribute *attr,
> > drm_dbg(&xe->drm, "vram_d3cold_threshold: %u\n", vram_d3cold_threshold);
> > + xe_pm_runtime_get(xe);
> > ret = xe_pm_set_vram_threshold(xe, vram_d3cold_threshold);
> > + xe_pm_runtime_put(xe);
> > return ret ?: count;
> > }
> > diff --git a/drivers/gpu/drm/xe/xe_gt_freq.c b/drivers/gpu/drm/xe/xe_gt_freq.c
> > index e5b0f4ecdbe8..32b9a743629c 100644
> > --- a/drivers/gpu/drm/xe/xe_gt_freq.c
> > +++ b/drivers/gpu/drm/xe/xe_gt_freq.c
> > @@ -15,6 +15,7 @@
> > #include "xe_gt_sysfs.h"
> > #include "xe_gt_throttle_sysfs.h"
> > #include "xe_guc_pc.h"
> > +#include "xe_pm.h"
> > /**
> > * DOC: Xe GT Frequency Management
> > @@ -49,12 +50,23 @@ dev_to_pc(struct device *dev)
> > return &kobj_to_gt(dev->kobj.parent)->uc.guc.pc;
> > }
> > +static struct xe_device *
> > +dev_to_xe(struct device *dev)
> > +{
> > + return gt_to_xe(kobj_to_gt(dev->kobj.parent));
> > +}
> > +
> > static ssize_t act_freq_show(struct device *dev,
> > struct device_attribute *attr, char *buf)
> > {
> > struct xe_guc_pc *pc = dev_to_pc(dev);
> > + u32 freq;
> > +
> > + xe_pm_runtime_get(dev_to_xe(dev));
> > + freq = xe_guc_pc_get_act_freq(pc);
> > + xe_pm_runtime_put(dev_to_xe(dev));
> > - return sysfs_emit(buf, "%d\n", xe_guc_pc_get_act_freq(pc));
> > + return sysfs_emit(buf, "%d\n", freq);
> > }
> > static DEVICE_ATTR_RO(act_freq);
> > @@ -65,7 +77,9 @@ static ssize_t cur_freq_show(struct device *dev,
> > u32 freq;
> > ssize_t ret;
> > + xe_pm_runtime_get(dev_to_xe(dev));
> > ret = xe_guc_pc_get_cur_freq(pc, &freq);
> > + xe_pm_runtime_put(dev_to_xe(dev));
> > if (ret)
> > return ret;
> > @@ -77,8 +91,13 @@ static ssize_t rp0_freq_show(struct device *dev,
> > struct device_attribute *attr, char *buf)
> > {
> > struct xe_guc_pc *pc = dev_to_pc(dev);
> > + u32 freq;
> > +
> > + xe_pm_runtime_get(dev_to_xe(dev));
> > + freq = xe_guc_pc_get_rp0_freq(pc);
> > + xe_pm_runtime_put(dev_to_xe(dev));
> > - return sysfs_emit(buf, "%d\n", xe_guc_pc_get_rp0_freq(pc));
> > + return sysfs_emit(buf, "%d\n", freq);
> > }
> > static DEVICE_ATTR_RO(rp0_freq);
> > @@ -86,8 +105,13 @@ static ssize_t rpe_freq_show(struct device *dev,
> > struct device_attribute *attr, char *buf)
> > {
> > struct xe_guc_pc *pc = dev_to_pc(dev);
> > + u32 freq;
> > +
> > + xe_pm_runtime_get(dev_to_xe(dev));
> > + freq = xe_guc_pc_get_rpe_freq(pc);
> > + xe_pm_runtime_put(dev_to_xe(dev));
> > - return sysfs_emit(buf, "%d\n", xe_guc_pc_get_rpe_freq(pc));
> > + return sysfs_emit(buf, "%d\n", freq);
> > }
> > static DEVICE_ATTR_RO(rpe_freq);
> > @@ -107,7 +131,9 @@ static ssize_t min_freq_show(struct device *dev,
> > u32 freq;
> > ssize_t ret;
> > + xe_pm_runtime_get(dev_to_xe(dev));
> > ret = xe_guc_pc_get_min_freq(pc, &freq);
> > + xe_pm_runtime_put(dev_to_xe(dev));
> > if (ret)
> > return ret;
> > @@ -125,7 +151,9 @@ static ssize_t min_freq_store(struct device *dev, struct device_attribute *attr,
> > if (ret)
> > return ret;
> > + xe_pm_runtime_get(dev_to_xe(dev));
> > ret = xe_guc_pc_set_min_freq(pc, freq);
> > + xe_pm_runtime_put(dev_to_xe(dev));
> > if (ret)
> > return ret;
> > @@ -140,7 +168,9 @@ static ssize_t max_freq_show(struct device *dev,
> > u32 freq;
> > ssize_t ret;
> > + xe_pm_runtime_get(dev_to_xe(dev));
> > ret = xe_guc_pc_get_max_freq(pc, &freq);
> > + xe_pm_runtime_put(dev_to_xe(dev));
> > if (ret)
> > return ret;
> > @@ -158,7 +188,9 @@ static ssize_t max_freq_store(struct device *dev, struct device_attribute *attr,
> > if (ret)
> > return ret;
> > + xe_pm_runtime_get(dev_to_xe(dev));
> > ret = xe_guc_pc_set_max_freq(pc, freq);
> > + xe_pm_runtime_put(dev_to_xe(dev));
> > if (ret)
> > return ret;
> > diff --git a/drivers/gpu/drm/xe/xe_gt_idle.c b/drivers/gpu/drm/xe/xe_gt_idle.c
> > index 9358f7336889..824ff458011f 100644
> > --- a/drivers/gpu/drm/xe/xe_gt_idle.c
> > +++ b/drivers/gpu/drm/xe/xe_gt_idle.c
> > @@ -12,6 +12,7 @@
> > #include "xe_guc_pc.h"
> > #include "regs/xe_gt_regs.h"
> > #include "xe_mmio.h"
> > +#include "xe_pm.h"
> > /**
> > * DOC: Xe GT Idle
> > @@ -40,6 +41,15 @@ static struct xe_guc_pc *gtidle_to_pc(struct xe_gt_idle *gtidle)
> > return &gtidle_to_gt(gtidle)->uc.guc.pc;
> > }
> > +static struct xe_device *
> > +pc_to_xe(struct xe_guc_pc *pc)
> > +{
> > + struct xe_guc *guc = container_of(pc, struct xe_guc, pc);
> > + struct xe_gt *gt = container_of(guc, struct xe_gt, uc.guc);
> > +
> > + return gt_to_xe(gt);
> > +}
> > +
> > static const char *gt_idle_state_to_string(enum xe_gt_idle_state state)
> > {
> > switch (state) {
> > @@ -86,8 +96,14 @@ static ssize_t name_show(struct device *dev,
> > struct device_attribute *attr, char *buff)
> > {
> > struct xe_gt_idle *gtidle = dev_to_gtidle(dev);
> > + struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
> > + ssize_t ret;
> > +
> > + xe_pm_runtime_get(pc_to_xe(pc));
> > + ret = sysfs_emit(buff, "%s\n", gtidle->name);
> > + xe_pm_runtime_put(pc_to_xe(pc));
> > - return sysfs_emit(buff, "%s\n", gtidle->name);
> > + return ret;
> > }
> > static DEVICE_ATTR_RO(name);
> > @@ -98,7 +114,9 @@ static ssize_t idle_status_show(struct device *dev,
> > struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
> > enum xe_gt_idle_state state;
> > + xe_pm_runtime_get(pc_to_xe(pc));
> > state = gtidle->idle_status(pc);
> > + xe_pm_runtime_put(pc_to_xe(pc));
> > return sysfs_emit(buff, "%s\n", gt_idle_state_to_string(state));
> > }
> > @@ -111,7 +129,10 @@ static ssize_t idle_residency_ms_show(struct device *dev,
> > struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
> > u64 residency;
> > + xe_pm_runtime_get(pc_to_xe(pc));
> > residency = gtidle->idle_residency(pc);
> > + xe_pm_runtime_put(pc_to_xe(pc));
> > +
> > return sysfs_emit(buff, "%llu\n", get_residency_ms(gtidle, residency));
> > }
> > static DEVICE_ATTR_RO(idle_residency_ms);
> > diff --git a/drivers/gpu/drm/xe/xe_gt_throttle_sysfs.c b/drivers/gpu/drm/xe/xe_gt_throttle_sysfs.c
> > index 63d640591a52..9c33045ff1ef 100644
> > --- a/drivers/gpu/drm/xe/xe_gt_throttle_sysfs.c
> > +++ b/drivers/gpu/drm/xe/xe_gt_throttle_sysfs.c
> > @@ -11,6 +11,7 @@
> > #include "xe_gt_sysfs.h"
> > #include "xe_gt_throttle_sysfs.h"
> > #include "xe_mmio.h"
> > +#include "xe_pm.h"
> > /**
> > * DOC: Xe GT Throttle
> > @@ -38,10 +39,12 @@ static u32 read_perf_limit_reasons(struct xe_gt *gt)
> > {
> > u32 reg;
> > + xe_pm_runtime_get(gt_to_xe(gt));
> > if (xe_gt_is_media_type(gt))
> > reg = xe_mmio_read32(gt, MTL_MEDIA_PERF_LIMIT_REASONS);
> > else
> > reg = xe_mmio_read32(gt, GT0_PERF_LIMIT_REASONS);
> > + xe_pm_runtime_put(gt_to_xe(gt));
> > return reg;
> > }
> > diff --git a/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c b/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
> > index 2345fb42fa39..9e23ca7f45ad 100644
> > --- a/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
> > +++ b/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
> > @@ -9,6 +9,7 @@
> > #include "xe_gt.h"
> > #include "xe_hw_engine_class_sysfs.h"
> > +#include "xe_pm.h"
> > #define MAX_ENGINE_CLASS_NAME_LEN 16
> > static int xe_add_hw_engine_class_defaults(struct xe_device *xe,
> > @@ -513,6 +514,7 @@ kobj_xe_hw_engine_class(struct xe_device *xe, struct kobject *parent, char *name
> > kobject_put(&keclass->base);
> > return NULL;
> > }
> > + keclass->xe = xe;
> > err = drmm_add_action_or_reset(&xe->drm, kobj_xe_hw_engine_class_fini,
> > &keclass->base);
> > @@ -567,9 +569,63 @@ static void xe_hw_engine_sysfs_kobj_release(struct kobject *kobj)
> > kfree(kobj);
> > }
> > +#include "xe_pm.h"
> > +
> > +static inline struct xe_device *pdev_to_xe_device(struct pci_dev *pdev)
> > +{
> > + return pci_get_drvdata(pdev);
> > +}
> > +
> > +static inline struct xe_device *to_xe_device(const struct drm_device *dev)
> > +{
> > + return container_of(dev, struct xe_device, drm);
> > +}
> > +
> > +static ssize_t xe_hw_engine_class_sysfs_attr_show(struct kobject *kobj,
> > + struct attribute *attr,
> > + char *buf)
> > +{
> > + struct xe_device *xe = kobj_to_xe(kobj);
> > + struct kobj_attribute *kattr;
> > + ssize_t ret = -EIO;
> > +
> > + kattr = container_of(attr, struct kobj_attribute, attr);
> > + if (kattr->show) {
> > + xe_pm_runtime_get(xe);
> > + ret = kattr->show(kobj, kattr, buf);
> > + xe_pm_runtime_put(xe);
> > + }
> > +
> > + return ret;
> > +}
> > +
> > +static ssize_t xe_hw_engine_class_sysfs_attr_store(struct kobject *kobj,
> > + struct attribute *attr,
> > + const char *buf,
> > + size_t count)
> > +{
> > + struct xe_device *xe = kobj_to_xe(kobj);
> > + struct kobj_attribute *kattr;
> > + ssize_t ret = -EIO;
> > +
> > + kattr = container_of(attr, struct kobj_attribute, attr);
> > + if (kattr->store) {
> > + xe_pm_runtime_get(xe);
> > + ret = kattr->store(kobj, kattr, buf, count);
> > + xe_pm_runtime_put(xe);
> > + }
> > +
> > + return ret;
> > +}
> > +
> > +static const struct sysfs_ops xe_hw_engine_class_sysfs_ops = {
> > + .show = xe_hw_engine_class_sysfs_attr_show,
> > + .store = xe_hw_engine_class_sysfs_attr_store,
> > +};
> > +
> > static const struct kobj_type xe_hw_engine_sysfs_kobj_type = {
> > .release = xe_hw_engine_sysfs_kobj_release,
> > - .sysfs_ops = &kobj_sysfs_ops,
> > + .sysfs_ops = &xe_hw_engine_class_sysfs_ops,
> > };
> > static void hw_engine_class_sysfs_fini(struct drm_device *drm, void *arg)
> > diff --git a/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.h b/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.h
> > index ec5ba673b314..28a0d7c909c0 100644
> > --- a/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.h
> > +++ b/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.h
> > @@ -26,6 +26,8 @@ struct kobj_eclass {
> > struct kobject base;
> > /** @eclass: A pointer to the hw engine class interface */
> > struct xe_hw_engine_class_intf *eclass;
> > + /** @xe: A pointer to the xe device */
> > + struct xe_device *xe;
> > };
> > static inline struct xe_hw_engine_class_intf *kobj_to_eclass(struct kobject *kobj)
> > @@ -33,4 +35,9 @@ static inline struct xe_hw_engine_class_intf *kobj_to_eclass(struct kobject *kob
> > return container_of(kobj, struct kobj_eclass, base)->eclass;
> > }
> > +static inline struct xe_device *kobj_to_xe(struct kobject *kobj)
> > +{
> > + return container_of(kobj, struct kobj_eclass, base)->xe;
> > +}
> > +
> > #endif
> > diff --git a/drivers/gpu/drm/xe/xe_tile_sysfs.c b/drivers/gpu/drm/xe/xe_tile_sysfs.c
> > index 0662968d7bcb..237a0761d3ad 100644
> > --- a/drivers/gpu/drm/xe/xe_tile_sysfs.c
> > +++ b/drivers/gpu/drm/xe/xe_tile_sysfs.c
> > @@ -7,6 +7,7 @@
> > #include <linux/sysfs.h>
> > #include <drm/drm_managed.h>
> > +#include "xe_pm.h"
> > #include "xe_tile.h"
> > #include "xe_tile_sysfs.h"
> > #include "xe_vram_freq.h"
Thread overview: 77+ messages
2024-01-26 20:30 [RFC 00/34] Kill mem_access v2 Rodrigo Vivi
2024-01-26 20:30 ` [RFC 01/34] Revert "drm/xe/uc: Store firmware binary in system-memory backed BO" Rodrigo Vivi
2024-01-26 20:30 ` [RFC 02/34] drm/xe: Document Xe PM component Rodrigo Vivi
2024-01-29 10:38 ` Francois Dugast
2024-01-26 20:30 ` [RFC 03/34] drm/xe: Fix display runtime_pm handling Rodrigo Vivi
2024-02-05 9:11 ` Matthew Auld
2024-02-14 18:05 ` Rodrigo Vivi
2024-02-15 9:30 ` Matthew Auld
2024-02-15 22:19 ` Rodrigo Vivi
2024-01-26 20:30 ` [RFC 04/34] drm/xe: Create a xe_pm_runtime_resume_and_get variant for display Rodrigo Vivi
2024-01-26 20:30 ` [RFC 05/34] drm/xe: Convert xe_pm_runtime_{get, put} to void and protect from recursion Rodrigo Vivi
2024-01-26 20:30 ` [RFC 06/34] drm/xe: Prepare display for D3Cold Rodrigo Vivi
2024-01-26 20:30 ` [RFC 07/34] drm/xe: Convert mem_access assertion towards the runtime_pm state Rodrigo Vivi
2024-02-05 9:55 ` Matthew Auld
2024-02-14 18:15 ` Rodrigo Vivi
2024-01-26 20:30 ` [RFC 08/34] drm/xe: Runtime PM wake on every IOCTL Rodrigo Vivi
2024-02-05 9:39 ` Matthew Auld
2024-01-26 20:30 ` [RFC 09/34] drm/xe: Convert kunit tests from mem_access to xe_pm_runtime Rodrigo Vivi
2024-02-05 9:57 ` Matthew Auld
2024-01-26 20:30 ` [RFC 10/34] drm/xe: Convert scheduler towards direct pm_runtime Rodrigo Vivi
2024-02-05 10:46 ` Matthew Auld
2024-01-26 20:30 ` [RFC 11/34] drm/xe: Runtime PM wake on every sysfs call Rodrigo Vivi
2024-02-05 10:55 ` Matthew Auld
2024-02-14 18:48 ` Rodrigo Vivi [this message]
2024-01-26 20:30 ` [RFC 12/34] drm/xe: Ensure device is awake before removing it Rodrigo Vivi
2024-02-05 11:05 ` Matthew Auld
2024-02-14 18:51 ` Rodrigo Vivi
2024-01-26 20:30 ` [RFC 13/34] drm/xe: Remove mem_access from guc_pc calls Rodrigo Vivi
2024-02-05 11:08 ` Matthew Auld
2024-01-26 20:30 ` [RFC 14/34] drm/xe: Runtime PM wake on every debugfs call Rodrigo Vivi
2024-02-05 11:10 ` Matthew Auld
2024-02-14 18:57 ` Rodrigo Vivi
2024-01-26 20:30 ` [RFC 15/34] drm/xe: Replace dma_buf mem_access per direct xe_pm_runtime calls Rodrigo Vivi
2024-02-05 11:15 ` Matthew Auld
2024-01-26 20:30 ` [RFC 16/34] drm/xe: Removing extra mem_access protection from runtime pm Rodrigo Vivi
2024-02-05 11:23 ` Matthew Auld
2024-01-26 20:30 ` [RFC 17/34] drm/xe: Convert hwmon from mem_access to xe_pm_runtime calls Rodrigo Vivi
2024-02-05 11:25 ` Matthew Auld
2024-01-26 20:30 ` [RFC 18/34] drm/xe: Move lockdep protection from mem_access to xe_pm_runtime Rodrigo Vivi
2024-02-05 11:31 ` Matthew Auld
2024-01-26 20:30 ` [RFC 19/34] drm/xe: Remove pm_runtime lockdep Rodrigo Vivi
2024-02-05 11:54 ` Matthew Auld
2024-02-15 22:47 ` Rodrigo Vivi
2024-02-20 17:48 ` Matthew Auld
2024-02-28 16:53 ` Rodrigo Vivi
2024-01-26 20:30 ` [RFC 20/34] drm/xe: Stop checking for power_lost on D3Cold Rodrigo Vivi
2024-01-26 20:30 ` [RFC 21/34] drm/xe: Convert GuC CT paths from mem_access to xe_pm_runtime Rodrigo Vivi
2024-02-05 12:23 ` Matthew Auld
2024-02-28 16:51 ` Rodrigo Vivi
2024-01-26 20:30 ` [RFC 22/34] drm/xe: Keep D0 for the entire duration of a LR VM Rodrigo Vivi
2024-01-26 20:30 ` [RFC 23/34] drm/xe: Ensure D0 on TLB invalidation Rodrigo Vivi
2024-02-05 12:41 ` Matthew Auld
2024-01-26 20:30 ` [RFC 24/34] drm/xe: Remove useless mem_access protection for query ioctls Rodrigo Vivi
2024-02-05 12:43 ` Matthew Auld
2024-01-26 20:30 ` [RFC 25/34] drm/xe: Convert gsc_work from mem_access to xe_pm_runtime Rodrigo Vivi
2024-02-05 13:11 ` Matthew Auld
2024-01-26 20:30 ` [RFC 26/34] drm/xe: VMs don't need the mem_access protection anymore Rodrigo Vivi
2024-02-05 13:29 ` Matthew Auld
2024-02-15 22:37 ` Rodrigo Vivi
2024-01-26 20:30 ` [RFC 27/34] drm/xe: Remove useless mem_access during probe Rodrigo Vivi
2024-02-05 13:18 ` Matthew Auld
2024-01-26 20:30 ` [RFC 28/34] drm/xe: Remove mem_access from suspend and resume functions Rodrigo Vivi
2024-02-05 13:30 ` Matthew Auld
2024-01-26 20:30 ` [RFC 29/34] drm/xe: Convert gt_reset from mem_access to xe_pm_runtime Rodrigo Vivi
2024-02-05 13:33 ` Matthew Auld
2024-01-26 20:30 ` [RFC 30/34] drm/xe: Remove useless mem_access on PAT dumps Rodrigo Vivi
2024-02-05 13:34 ` Matthew Auld
2024-01-26 20:30 ` [RFC 31/34] drm/xe: Remove inner mem_access protections Rodrigo Vivi
2024-01-26 20:30 ` [RFC 32/34] drm/xe: Kill xe_device_mem_access_{get*,put} Rodrigo Vivi
2024-01-26 20:30 ` [RFC 33/34] drm/xe: Remove unused runtime pm helper Rodrigo Vivi
2024-01-26 20:30 ` [RFC 34/34] drm/xe: Enable D3Cold on 'low' VRAM utilization Rodrigo Vivi
2024-01-29 12:12 ` Matthew Auld
2024-01-29 19:01 ` Vivi, Rodrigo
2024-01-30 15:01 ` Gupta, Anshuman
2024-01-26 20:39 ` ✓ CI.Patch_applied: success for Kill mem_access v2 Patchwork
2024-01-26 20:40 ` ✗ CI.checkpatch: warning " Patchwork
2024-01-26 20:40 ` ✗ CI.KUnit: failure " Patchwork