* [Intel-gfx] [PATCH 0/1] Add support for querying engine cycles
@ 2021-04-29 0:34 Umesh Nerlige Ramappa
2021-04-29 0:34 ` [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy Umesh Nerlige Ramappa
` (3 more replies)
0 siblings, 4 replies; 19+ messages in thread
From: Umesh Nerlige Ramappa @ 2021-04-29 0:34 UTC (permalink / raw)
To: intel-gfx; +Cc: dri-devel
This is a refresh of the earlier patch, along with a cover letter for the IGT
testing. The query provides the engine CS cycles counter.
v2: Use GRAPHICS_VER() instead of IS_GEN()
v3: Add R-b to the patch
v4: Split cpu timestamp array into timestamp and delta for cleaner API
Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Test-with: 20210429002959.69473-1-umesh.nerlige.ramappa@intel.com
Umesh Nerlige Ramappa (1):
i915/query: Correlate engine and cpu timestamps with better accuracy
 drivers/gpu/drm/i915/i915_query.c | 148 ++++++++++++++++++++++++++++++
 include/uapi/drm/i915_drm.h       |  52 +++++++++++
 2 files changed, 200 insertions(+)
--
2.20.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy
  2021-04-29  0:34 [Intel-gfx] [PATCH 0/1] Add support for querying engine cycles Umesh Nerlige Ramappa
@ 2021-04-29  0:34 ` Umesh Nerlige Ramappa
  2021-04-29  8:34   ` Lionel Landwerlin
  2021-04-29 19:07   ` Jason Ekstrand
  2021-04-29  1:34 ` [Intel-gfx] ✗ Fi.CI.DOCS: warning for Add support for querying engine cycles Patchwork
  ` (2 subsequent siblings)
  3 siblings, 2 replies; 19+ messages in thread
From: Umesh Nerlige Ramappa @ 2021-04-29  0:34 UTC (permalink / raw)
  To: intel-gfx; +Cc: dri-devel

Perf measurements rely on CPU and engine timestamps to correlate
events of interest across these time domains. Current mechanisms get
these timestamps separately and the calculated delta between these
timestamps lacks enough accuracy.

To improve the accuracy of these time measurements to within a few us,
add a query that returns the engine and cpu timestamps captured as
close to each other as possible.

v2: (Tvrtko)
- document clock reference used
- return cpu timestamp always
- capture cpu time just before lower dword of cs timestamp

v3: (Chris)
- use uncore-rpm
- use __query_cs_timestamp helper

v4: (Lionel)
- Kernel perf subsystem allows users to specify the clock id to be used
  in perf_event_open. This clock id is used by the perf subsystem to
  return the appropriate cpu timestamp in perf events. Similarly, let
  the user pass the clockid to this query so that cpu timestamp
  corresponds to the clock id requested.

v5: (Tvrtko)
- Use normal ktime accessors instead of fast versions
- Add more uApi documentation

v6: (Lionel)
- Move switch out of spinlock

v7: (Chris)
- cs_timestamp is a misnomer, use cs_cycles instead
- return the cs cycle frequency as well in the query

v8:
- Add platform and engine specific checks

v9: (Lionel)
- Return 2 cpu timestamps in the query - captured before and after the
  register read

v10: (Chris)
- Use local_clock() to measure time taken to read lower dword of
  register and return it to user.

v11: (Jani)
- IS_GEN deprecated. Use GRAPHICS_VER instead.

v12: (Jason)
- Split cpu timestamp array into timestamp and delta for cleaner API

Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
---
 drivers/gpu/drm/i915/i915_query.c | 148 ++++++++++++++++++++++++++++++
 include/uapi/drm/i915_drm.h       |  52 +++++++++++
 2 files changed, 200 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_query.c b/drivers/gpu/drm/i915/i915_query.c
index fed337ad7b68..357c44e8177c 100644
--- a/drivers/gpu/drm/i915/i915_query.c
+++ b/drivers/gpu/drm/i915/i915_query.c
@@ -6,6 +6,8 @@
 
 #include <linux/nospec.h>
 
+#include "gt/intel_engine_pm.h"
+#include "gt/intel_engine_user.h"
 #include "i915_drv.h"
 #include "i915_perf.h"
 #include "i915_query.h"
@@ -90,6 +92,151 @@ static int query_topology_info(struct drm_i915_private *dev_priv,
 	return total_length;
 }
 
+typedef u64 (*__ktime_func_t)(void);
+static __ktime_func_t __clock_id_to_func(clockid_t clk_id)
+{
+	/*
+	 * Use logic same as the perf subsystem to allow user to select the
+	 * reference clock id to be used for timestamps.
+	 */
+	switch (clk_id) {
+	case CLOCK_MONOTONIC:
+		return &ktime_get_ns;
+	case CLOCK_MONOTONIC_RAW:
+		return &ktime_get_raw_ns;
+	case CLOCK_REALTIME:
+		return &ktime_get_real_ns;
+	case CLOCK_BOOTTIME:
+		return &ktime_get_boottime_ns;
+	case CLOCK_TAI:
+		return &ktime_get_clocktai_ns;
+	default:
+		return NULL;
+	}
+}
+
+static inline int
+__read_timestamps(struct intel_uncore *uncore,
+		  i915_reg_t lower_reg,
+		  i915_reg_t upper_reg,
+		  u64 *cs_ts,
+		  u64 *cpu_ts,
+		  u64 *cpu_delta,
+		  __ktime_func_t cpu_clock)
+{
+	u32 upper, lower, old_upper, loop = 0;
+
+	upper = intel_uncore_read_fw(uncore, upper_reg);
+	do {
+		*cpu_delta = local_clock();
+		*cpu_ts = cpu_clock();
+		lower = intel_uncore_read_fw(uncore, lower_reg);
+		*cpu_delta = local_clock() - *cpu_delta;
+		old_upper = upper;
+		upper = intel_uncore_read_fw(uncore, upper_reg);
+	} while (upper != old_upper && loop++ < 2);
+
+	*cs_ts = (u64)upper << 32 | lower;
+
+	return 0;
+}
+
+static int
+__query_cs_cycles(struct intel_engine_cs *engine,
+		  u64 *cs_ts, u64 *cpu_ts, u64 *cpu_delta,
+		  __ktime_func_t cpu_clock)
+{
+	struct intel_uncore *uncore = engine->uncore;
+	enum forcewake_domains fw_domains;
+	u32 base = engine->mmio_base;
+	intel_wakeref_t wakeref;
+	int ret;
+
+	fw_domains = intel_uncore_forcewake_for_reg(uncore,
+						    RING_TIMESTAMP(base),
+						    FW_REG_READ);
+
+	with_intel_runtime_pm(uncore->rpm, wakeref) {
+		spin_lock_irq(&uncore->lock);
+		intel_uncore_forcewake_get__locked(uncore, fw_domains);
+
+		ret = __read_timestamps(uncore,
+					RING_TIMESTAMP(base),
+					RING_TIMESTAMP_UDW(base),
+					cs_ts,
+					cpu_ts,
+					cpu_delta,
+					cpu_clock);
+
+		intel_uncore_forcewake_put__locked(uncore, fw_domains);
+		spin_unlock_irq(&uncore->lock);
+	}
+
+	return ret;
+}
+
+static int
+query_cs_cycles(struct drm_i915_private *i915,
+		struct drm_i915_query_item *query_item)
+{
+	struct drm_i915_query_cs_cycles __user *query_ptr;
+	struct drm_i915_query_cs_cycles query;
+	struct intel_engine_cs *engine;
+	__ktime_func_t cpu_clock;
+	int ret;
+
+	if (GRAPHICS_VER(i915) < 6)
+		return -ENODEV;
+
+	query_ptr = u64_to_user_ptr(query_item->data_ptr);
+	ret = copy_query_item(&query, sizeof(query), sizeof(query), query_item);
+	if (ret != 0)
+		return ret;
+
+	if (query.flags)
+		return -EINVAL;
+
+	if (query.rsvd)
+		return -EINVAL;
+
+	cpu_clock = __clock_id_to_func(query.clockid);
+	if (!cpu_clock)
+		return -EINVAL;
+
+	engine = intel_engine_lookup_user(i915,
+					  query.engine.engine_class,
+					  query.engine.engine_instance);
+	if (!engine)
+		return -EINVAL;
+
+	if (GRAPHICS_VER(i915) == 6 &&
+	    query.engine.engine_class != I915_ENGINE_CLASS_RENDER)
+		return -ENODEV;
+
+	query.cs_frequency = engine->gt->clock_frequency;
+	ret = __query_cs_cycles(engine,
+				&query.cs_cycles,
+				&query.cpu_timestamp,
+				&query.cpu_delta,
+				cpu_clock);
+	if (ret)
+		return ret;
+
+	if (put_user(query.cs_frequency, &query_ptr->cs_frequency))
+		return -EFAULT;
+
+	if (put_user(query.cpu_timestamp, &query_ptr->cpu_timestamp))
+		return -EFAULT;
+
+	if (put_user(query.cpu_delta, &query_ptr->cpu_delta))
+		return -EFAULT;
+
+	if (put_user(query.cs_cycles, &query_ptr->cs_cycles))
+		return -EFAULT;
+
+	return sizeof(query);
+}
+
 static int
 query_engine_info(struct drm_i915_private *i915,
 		  struct drm_i915_query_item *query_item)
@@ -424,6 +571,7 @@ static int (* const i915_query_funcs[])(struct drm_i915_private *dev_priv,
 	query_topology_info,
 	query_engine_info,
 	query_perf_config,
+	query_cs_cycles,
 };
 
 int i915_query_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 6a34243a7646..0b4c27092d41 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -2230,6 +2230,10 @@ struct drm_i915_query_item {
 #define DRM_I915_QUERY_TOPOLOGY_INFO	1
 #define DRM_I915_QUERY_ENGINE_INFO	2
 #define DRM_I915_QUERY_PERF_CONFIG	3
+	/**
+	 * Query Command Streamer timestamp register.
+	 */
+#define DRM_I915_QUERY_CS_CYCLES	4
 /* Must be kept compact -- no holes and well documented */
 
 /**
@@ -2397,6 +2401,54 @@ struct drm_i915_engine_info {
 	__u64 rsvd1[4];
 };
 
+/**
+ * struct drm_i915_query_cs_cycles
+ *
+ * The query returns the command streamer cycles and the frequency that can be
+ * used to calculate the command streamer timestamp. In addition the query
+ * returns a set of cpu timestamps that indicate when the command streamer cycle
+ * count was captured.
+ */
+struct drm_i915_query_cs_cycles {
+	/** Engine for which command streamer cycles is queried. */
+	struct i915_engine_class_instance engine;
+
+	/** Must be zero. */
+	__u32 flags;
+
+	/**
+	 * Command streamer cycles as read from the command streamer
+	 * register at 0x358 offset.
+	 */
+	__u64 cs_cycles;
+
+	/** Frequency of the cs cycles in Hz. */
+	__u64 cs_frequency;
+
+	/**
+	 * CPU timestamp in ns. The timestamp is captured before reading the
+	 * cs_cycles register using the reference clockid set by the user.
+	 */
+	__u64 cpu_timestamp;
+
+	/**
+	 * Time delta in ns captured around reading the lower dword of the
+	 * cs_cycles register.
+	 */
+	__u64 cpu_delta;
+
+	/**
+	 * Reference clock id for CPU timestamp. For definition, see
+	 * clock_gettime(2) and perf_event_open(2). Supported clock ids are
+	 * CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, CLOCK_REALTIME, CLOCK_BOOTTIME,
+	 * CLOCK_TAI.
+	 */
+	__s32 clockid;
+
+	/** Must be zero. */
+	__u32 rsvd;
+};
+
 /**
  * struct drm_i915_query_engine_info
  *
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread
* Re: [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy
  2021-04-29  0:34 ` [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy Umesh Nerlige Ramappa
@ 2021-04-29  8:34   ` Lionel Landwerlin
  2021-04-29 19:07   ` Jason Ekstrand
  1 sibling, 0 replies; 19+ messages in thread
From: Lionel Landwerlin @ 2021-04-29  8:34 UTC (permalink / raw)
  To: Umesh Nerlige Ramappa, intel-gfx; +Cc: dri-devel

On 29/04/2021 03:34, Umesh Nerlige Ramappa wrote:
> Perf measurements rely on CPU and engine timestamps to correlate
> events of interest across these time domains. Current mechanisms get
> these timestamps separately and the calculated delta between these
> timestamps lack enough accuracy.
>
> To improve the accuracy of these time measurements to within a few us,
> add a query that returns the engine and cpu timestamps captured as
> close to each other as possible.
>
> [changelog and patch body identical to the posting above; trimmed]
>
> Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
> Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>

Thanks for the update :

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy
  2021-04-29  0:34 ` [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy Umesh Nerlige Ramappa
  2021-04-29  8:34   ` Lionel Landwerlin
@ 2021-04-29 19:07   ` Jason Ekstrand
  2021-04-30 22:26     ` Umesh Nerlige Ramappa
  1 sibling, 1 reply; 19+ messages in thread
From: Jason Ekstrand @ 2021-04-29 19:07 UTC (permalink / raw)
  To: Umesh Nerlige Ramappa; +Cc: Intel GFX, Mailing list - DRI developers

On Wed, Apr 28, 2021 at 7:34 PM Umesh Nerlige Ramappa
<umesh.nerlige.ramappa@intel.com> wrote:
>
> Perf measurements rely on CPU and engine timestamps to correlate
> events of interest across these time domains. Current mechanisms get
> these timestamps separately and the calculated delta between these
> timestamps lack enough accuracy.
>
> To improve the accuracy of these time measurements to within a few us,
> add a query that returns the engine and cpu timestamps captured as
> close to each other as possible.
>
> [changelog and i915_query.c hunks identical to the posting above; trimmed]
>
> +/**
> + * struct drm_i915_query_cs_cycles
> + *
> + * The query returns the command streamer cycles and the frequency that can be
> + * used to calculate the command streamer timestamp. In addition the query
> + * returns a set of cpu timestamps that indicate when the command streamer cycle
> + * count was captured.
> + */
> +struct drm_i915_query_cs_cycles {
> +	/** Engine for which command streamer cycles is queried. */
> +	struct i915_engine_class_instance engine;

I've checked with HW engineers and they're claiming that all CS
timestamp registers should report the same time modulo minor drift.
You're CC'd on the internal e-mail.  If this is really the case, then
I don't think we want to put an engine in this query.

--Jason

> [remainder of the uAPI changes trimmed]

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy 2021-04-29 19:07 ` Jason Ekstrand @ 2021-04-30 22:26 ` Umesh Nerlige Ramappa 2021-04-30 23:00 ` Dixit, Ashutosh 0 siblings, 1 reply; 19+ messages in thread From: Umesh Nerlige Ramappa @ 2021-04-30 22:26 UTC (permalink / raw) To: Jason Ekstrand; +Cc: Intel GFX, Maling list - DRI developers On Thu, Apr 29, 2021 at 02:07:58PM -0500, Jason Ekstrand wrote: >On Wed, Apr 28, 2021 at 7:34 PM Umesh Nerlige Ramappa ><umesh.nerlige.ramappa@intel.com> wrote: >> >> Perf measurements rely on CPU and engine timestamps to correlate >> events of interest across these time domains. Current mechanisms get >> these timestamps separately and the calculated delta between these >> timestamps lack enough accuracy. >> >> To improve the accuracy of these time measurements to within a few us, >> add a query that returns the engine and cpu timestamps captured as >> close to each other as possible. >> >> v2: (Tvrtko) >> - document clock reference used >> - return cpu timestamp always >> - capture cpu time just before lower dword of cs timestamp >> >> v3: (Chris) >> - use uncore-rpm >> - use __query_cs_timestamp helper >> >> v4: (Lionel) >> - Kernel perf subsytem allows users to specify the clock id to be used >> in perf_event_open. This clock id is used by the perf subsystem to >> return the appropriate cpu timestamp in perf events. Similarly, let >> the user pass the clockid to this query so that cpu timestamp >> corresponds to the clock id requested. 
>> >> v5: (Tvrtko) >> - Use normal ktime accessors instead of fast versions >> - Add more uApi documentation >> >> v6: (Lionel) >> - Move switch out of spinlock >> >> v7: (Chris) >> - cs_timestamp is a misnomer, use cs_cycles instead >> - return the cs cycle frequency as well in the query >> >> v8: >> - Add platform and engine specific checks >> >> v9: (Lionel) >> - Return 2 cpu timestamps in the query - captured before and after the >> register read >> >> v10: (Chris) >> - Use local_clock() to measure time taken to read lower dword of >> register and return it to user. >> >> v11: (Jani) >> - IS_GEN deprecated. User GRAPHICS_VER instead. >> >> v12: (Jason) >> - Split cpu timestamp array into timestamp and delta for cleaner API >> >> Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com> >> Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> >> --- >> drivers/gpu/drm/i915/i915_query.c | 148 ++++++++++++++++++++++++++++++ >> include/uapi/drm/i915_drm.h | 52 +++++++++++ >> 2 files changed, 200 insertions(+) >> >> diff --git a/drivers/gpu/drm/i915/i915_query.c b/drivers/gpu/drm/i915/i915_query.c >> index fed337ad7b68..357c44e8177c 100644 >> --- a/drivers/gpu/drm/i915/i915_query.c >> +++ b/drivers/gpu/drm/i915/i915_query.c >> @@ -6,6 +6,8 @@ >> >> #include <linux/nospec.h> >> >> +#include "gt/intel_engine_pm.h" >> +#include "gt/intel_engine_user.h" >> #include "i915_drv.h" >> #include "i915_perf.h" >> #include "i915_query.h" >> @@ -90,6 +92,151 @@ static int query_topology_info(struct drm_i915_private *dev_priv, >> return total_length; >> } >> >> +typedef u64 (*__ktime_func_t)(void); >> +static __ktime_func_t __clock_id_to_func(clockid_t clk_id) >> +{ >> + /* >> + * Use logic same as the perf subsystem to allow user to select the >> + * reference clock id to be used for timestamps. 
>> + */ >> + switch (clk_id) { >> + case CLOCK_MONOTONIC: >> + return &ktime_get_ns; >> + case CLOCK_MONOTONIC_RAW: >> + return &ktime_get_raw_ns; >> + case CLOCK_REALTIME: >> + return &ktime_get_real_ns; >> + case CLOCK_BOOTTIME: >> + return &ktime_get_boottime_ns; >> + case CLOCK_TAI: >> + return &ktime_get_clocktai_ns; >> + default: >> + return NULL; >> + } >> +} >> + >> +static inline int >> +__read_timestamps(struct intel_uncore *uncore, >> + i915_reg_t lower_reg, >> + i915_reg_t upper_reg, >> + u64 *cs_ts, >> + u64 *cpu_ts, >> + u64 *cpu_delta, >> + __ktime_func_t cpu_clock) >> +{ >> + u32 upper, lower, old_upper, loop = 0; >> + >> + upper = intel_uncore_read_fw(uncore, upper_reg); >> + do { >> + *cpu_delta = local_clock(); >> + *cpu_ts = cpu_clock(); >> + lower = intel_uncore_read_fw(uncore, lower_reg); >> + *cpu_delta = local_clock() - *cpu_delta; >> + old_upper = upper; >> + upper = intel_uncore_read_fw(uncore, upper_reg); >> + } while (upper != old_upper && loop++ < 2); >> + >> + *cs_ts = (u64)upper << 32 | lower; >> + >> + return 0; >> +} >> + >> +static int >> +__query_cs_cycles(struct intel_engine_cs *engine, >> + u64 *cs_ts, u64 *cpu_ts, u64 *cpu_delta, >> + __ktime_func_t cpu_clock) >> +{ >> + struct intel_uncore *uncore = engine->uncore; >> + enum forcewake_domains fw_domains; >> + u32 base = engine->mmio_base; >> + intel_wakeref_t wakeref; >> + int ret; >> + >> + fw_domains = intel_uncore_forcewake_for_reg(uncore, >> + RING_TIMESTAMP(base), >> + FW_REG_READ); >> + >> + with_intel_runtime_pm(uncore->rpm, wakeref) { >> + spin_lock_irq(&uncore->lock); >> + intel_uncore_forcewake_get__locked(uncore, fw_domains); >> + >> + ret = __read_timestamps(uncore, >> + RING_TIMESTAMP(base), >> + RING_TIMESTAMP_UDW(base), >> + cs_ts, >> + cpu_ts, >> + cpu_delta, >> + cpu_clock); >> + >> + intel_uncore_forcewake_put__locked(uncore, fw_domains); >> + spin_unlock_irq(&uncore->lock); >> + } >> + >> + return ret; >> +} >> + >> +static int >> +query_cs_cycles(struct 
drm_i915_private *i915, >> + struct drm_i915_query_item *query_item) >> +{ >> + struct drm_i915_query_cs_cycles __user *query_ptr; >> + struct drm_i915_query_cs_cycles query; >> + struct intel_engine_cs *engine; >> + __ktime_func_t cpu_clock; >> + int ret; >> + >> + if (GRAPHICS_VER(i915) < 6) >> + return -ENODEV; >> + >> + query_ptr = u64_to_user_ptr(query_item->data_ptr); >> + ret = copy_query_item(&query, sizeof(query), sizeof(query), query_item); >> + if (ret != 0) >> + return ret; >> + >> + if (query.flags) >> + return -EINVAL; >> + >> + if (query.rsvd) >> + return -EINVAL; >> + >> + cpu_clock = __clock_id_to_func(query.clockid); >> + if (!cpu_clock) >> + return -EINVAL; >> + >> + engine = intel_engine_lookup_user(i915, >> + query.engine.engine_class, >> + query.engine.engine_instance); >> + if (!engine) >> + return -EINVAL; >> + >> + if (GRAPHICS_VER(i915) == 6 && >> + query.engine.engine_class != I915_ENGINE_CLASS_RENDER) >> + return -ENODEV; >> + >> + query.cs_frequency = engine->gt->clock_frequency; >> + ret = __query_cs_cycles(engine, >> + &query.cs_cycles, >> + &query.cpu_timestamp, >> + &query.cpu_delta, >> + cpu_clock); >> + if (ret) >> + return ret; >> + >> + if (put_user(query.cs_frequency, &query_ptr->cs_frequency)) >> + return -EFAULT; >> + >> + if (put_user(query.cpu_timestamp, &query_ptr->cpu_timestamp)) >> + return -EFAULT; >> + >> + if (put_user(query.cpu_delta, &query_ptr->cpu_delta)) >> + return -EFAULT; >> + >> + if (put_user(query.cs_cycles, &query_ptr->cs_cycles)) >> + return -EFAULT; >> + >> + return sizeof(query); >> +} >> + >> static int >> query_engine_info(struct drm_i915_private *i915, >> struct drm_i915_query_item *query_item) >> @@ -424,6 +571,7 @@ static int (* const i915_query_funcs[])(struct drm_i915_private *dev_priv, >> query_topology_info, >> query_engine_info, >> query_perf_config, >> + query_cs_cycles, >> }; >> >> int i915_query_ioctl(struct drm_device *dev, void *data, struct drm_file *file) >> diff --git 
a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h >> index 6a34243a7646..0b4c27092d41 100644 >> --- a/include/uapi/drm/i915_drm.h >> +++ b/include/uapi/drm/i915_drm.h >> @@ -2230,6 +2230,10 @@ struct drm_i915_query_item { >> #define DRM_I915_QUERY_TOPOLOGY_INFO 1 >> #define DRM_I915_QUERY_ENGINE_INFO 2 >> #define DRM_I915_QUERY_PERF_CONFIG 3 >> + /** >> + * Query Command Streamer timestamp register. >> + */ >> +#define DRM_I915_QUERY_CS_CYCLES 4 >> /* Must be kept compact -- no holes and well documented */ >> >> /** >> @@ -2397,6 +2401,54 @@ struct drm_i915_engine_info { >> __u64 rsvd1[4]; >> }; >> >> +/** >> + * struct drm_i915_query_cs_cycles >> + * >> + * The query returns the command streamer cycles and the frequency that can be >> + * used to calculate the command streamer timestamp. In addition the query >> + * returns a set of cpu timestamps that indicate when the command streamer cycle >> + * count was captured. >> + */ >> +struct drm_i915_query_cs_cycles { >> + /** Engine for which command streamer cycles is queried. */ >> + struct i915_engine_class_instance engine; > >I've checked with HW engineers and they're claiming that all CS >timestamp registers should report the same time modulo minor drift. >You're CC'd on the internal e-mail. If this is really the case, then >I don't think we want to put an engine in this query. Looks like the engine can be dropped since all timestamps are in sync. I just have one more question here. The timestamp itself is 36 bits. Should the uapi also report the timestamp width to the user OR should I just return the lower 32 bits of the timestamp? Thanks, Umesh > >--Jason > >> + >> + /** Must be zero. */ >> + __u32 flags; >> + >> + /** >> + * Command streamer cycles as read from the command streamer >> + * register at 0x358 offset. >> + */ >> + __u64 cs_cycles; >> + >> + /** Frequency of the cs cycles in Hz. */ >> + __u64 cs_frequency; >> + >> + /** >> + * CPU timestamp in ns. 
The timestamp is captured before reading the >> + * cs_cycles register using the reference clockid set by the user. >> + */ >> + __u64 cpu_timestamp; >> + >> + /** >> + * Time delta in ns captured around reading the lower dword of the >> + * cs_cycles register. >> + */ >> + __u64 cpu_delta; >> + >> + /** >> + * Reference clock id for CPU timestamp. For definition, see >> + * clock_gettime(2) and perf_event_open(2). Supported clock ids are >> + * CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, CLOCK_REALTIME, CLOCK_BOOTTIME, >> + * CLOCK_TAI. >> + */ >> + __s32 clockid; >> + >> + /** Must be zero. */ >> + __u32 rsvd; >> +}; >> + >> /** >> * struct drm_i915_query_engine_info >> * >> -- >> 2.20.1 >> _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply [flat|nested] 19+ messages in thread
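For userspace consumers of this uAPI, the cs_cycles/cs_frequency pair maps to nanoseconds with a widening multiply followed by a divide. A minimal sketch of that conversion (the helper name and the 128-bit intermediate are illustrative choices, not part of the patch):

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ull

/* Convert command-streamer cycles to nanoseconds. Widen to 128 bits
 * for the multiply so cycles * 1e9 cannot overflow 64 bits, then
 * divide by the reported frequency in Hz. */
static uint64_t cs_cycles_to_ns(uint64_t cycles, uint64_t freq_hz)
{
	return (uint64_t)(((unsigned __int128)cycles * NSEC_PER_SEC) / freq_hz);
}
```

With a hypothetical 12 MHz cs_frequency, 12000000 cycles correspond to exactly one second.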
* Re: [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy
  2021-04-30 22:26 ` Umesh Nerlige Ramappa
@ 2021-04-30 23:00 ` Dixit, Ashutosh
  2021-04-30 23:23 ` Dixit, Ashutosh
  2021-05-01 0:35 ` Jason Ekstrand
  0 siblings, 2 replies; 19+ messages in thread
From: Dixit, Ashutosh @ 2021-04-30 23:00 UTC (permalink / raw)
To: Umesh Nerlige Ramappa; +Cc: Intel GFX, Mailing list - DRI developers

On Fri, 30 Apr 2021 15:26:09 -0700, Umesh Nerlige Ramappa wrote:
>
> Looks like the engine can be dropped since all timestamps are in sync. I
> just have one more question here. The timestamp itself is 36 bits. Should
> the uapi also report the timestamp width to the user OR should I just
> return the lower 32 bits of the timestamp?

How would exposing only the lower 32 bits of the timestamp work?

The way to avoid exposing the width would be to expose the timestamp as a
regular 64 bit value. In the kernel engine state, have a variable for the
counter and keep on accumulating that (on each query) to full 64 bits in
spite of the 36 bit HW counter overflow.

So not exposing the width (or exposing a 64 bit timestamp) is a cleaner
interface but also more work in the kernel.
* Re: [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy
  2021-04-30 23:00 ` Dixit, Ashutosh
@ 2021-04-30 23:23 ` Dixit, Ashutosh
  2021-05-01 0:35 ` Jason Ekstrand
  1 sibling, 0 replies; 19+ messages in thread
From: Dixit, Ashutosh @ 2021-04-30 23:23 UTC (permalink / raw)
To: Umesh Nerlige Ramappa; +Cc: Intel GFX, Mailing list - DRI developers

On Fri, 30 Apr 2021 16:00:46 -0700, Dixit, Ashutosh wrote:
>
> On Fri, 30 Apr 2021 15:26:09 -0700, Umesh Nerlige Ramappa wrote:
> >
> > Looks like the engine can be dropped since all timestamps are in sync. I
> > just have one more question here. The timestamp itself is 36 bits. Should
> > the uapi also report the timestamp width to the user OR should I just
> > return the lower 32 bits of the timestamp?
>
> How would exposing only the lower 32 bits of the timestamp work?

It would work, I guess, but it would overflow every few seconds. So if the
counters are sampled at a low frequency (once every few seconds) it would
yield misleading timestamps.

> The way to avoid exposing the width would be to expose the timestamp as a
> regular 64 bit value. In the kernel engine state, have a variable for the
> counter and keep on accumulating that (on each query) to full 64 bits in
> spite of the 36 bit HW counter overflow.
>
> So not exposing the width (or exposing a 64 bit timestamp) is a cleaner
> interface but also more work in the kernel.
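The kernel-side software extension Ashutosh describes could be sketched as follows. This is a hypothetical helper, not code from the patch; it assumes the counter is sampled at least once per 36-bit wrap period, so at most one overflow occurs between samples:

```c
#include <stdint.h>

#define CS_TS_WIDTH 36
#define CS_TS_MASK ((1ull << CS_TS_WIDTH) - 1)

struct cs_ts_state {
	uint64_t last_raw;	/* previous raw 36-bit HW sample */
	uint64_t accum;		/* software-extended 64-bit counter */
};

/* Fold a new 36-bit HW sample into the 64-bit accumulator. The
 * subtraction modulo 2^36 absorbs at most one wrap between samples,
 * so the accumulator keeps counting monotonically past the HW
 * overflow. */
static uint64_t cs_ts_extend(struct cs_ts_state *st, uint64_t raw36)
{
	st->accum += (raw36 - st->last_raw) & CS_TS_MASK;
	st->last_raw = raw36;
	return st->accum;
}
```

The trade-off called out in the thread is visible here: the kernel must keep per-counter state and be queried often enough, which is why simply reporting the width to userspace is less work.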
* Re: [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy
  2021-04-30 23:00 ` Dixit, Ashutosh
  2021-04-30 23:23 ` Dixit, Ashutosh
@ 2021-05-01 0:35 ` Jason Ekstrand
  2021-05-01 2:19 ` Umesh Nerlige Ramappa
  1 sibling, 1 reply; 19+ messages in thread
From: Jason Ekstrand @ 2021-05-01 0:35 UTC (permalink / raw)
To: Dixit, Ashutosh, Umesh Nerlige Ramappa
Cc: Intel GFX, Mailing list - DRI developers

On April 30, 2021 18:00:58 "Dixit, Ashutosh" <ashutosh.dixit@intel.com> wrote:

> On Fri, 30 Apr 2021 15:26:09 -0700, Umesh Nerlige Ramappa wrote:
>>
>> Looks like the engine can be dropped since all timestamps are in sync. I
>> just have one more question here. The timestamp itself is 36 bits. Should
>> the uapi also report the timestamp width to the user OR should I just
>> return the lower 32 bits of the timestamp?

Yeah, I think reporting the timestamp width is a good idea since we're
reporting the period/frequency here.

> How would exposing only the lower 32 bits of the timestamp work?
>
> The way to avoid exposing the width would be to expose the timestamp as a
> regular 64 bit value. In the kernel engine state, have a variable for the
> counter and keep on accumulating that (on each query) to full 64 bits in
> spite of the 36 bit HW counter overflow.

That doesn't actually work since you can query the 64-bit timestamp
value from the GPU. The way this is handled in Vulkan is that the number
of timestamp bits is reported to the application as a queue property.

--Jason
* Re: [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy
  2021-05-01 0:35 ` Jason Ekstrand
@ 2021-05-01 2:19 ` Umesh Nerlige Ramappa
  2021-05-01 4:01 ` Dixit, Ashutosh
  0 siblings, 1 reply; 19+ messages in thread
From: Umesh Nerlige Ramappa @ 2021-05-01 2:19 UTC (permalink / raw)
To: Jason Ekstrand; +Cc: Intel GFX, Mailing list - DRI developers

On Fri, Apr 30, 2021 at 07:35:41PM -0500, Jason Ekstrand wrote:
> On April 30, 2021 18:00:58 "Dixit, Ashutosh" <ashutosh.dixit@intel.com>
> wrote:
>
>> On Fri, 30 Apr 2021 15:26:09 -0700, Umesh Nerlige Ramappa wrote:
>>
>> Looks like the engine can be dropped since all timestamps are in sync. I
>> just have one more question here. The timestamp itself is 36 bits. Should
>> the uapi also report the timestamp width to the user OR should I just
>> return the lower 32 bits of the timestamp?
>
> Yeah, I think reporting the timestamp width is a good idea since we're
> reporting the period/frequency here.

Actually, I forgot that we are handling the overflow before returning the
cs_cycles to the user, and overflow handling was the only reason I thought
the user should know the width. Would you still recommend returning the
width in the uapi?

Thanks,
Umesh

>> How would exposing only the lower 32 bits of the timestamp work?
>>
>> The way to avoid exposing the width would be to expose the timestamp as a
>> regular 64 bit value. In the kernel engine state, have a variable for the
>> counter and keep on accumulating that (on each query) to full 64 bits in
>> spite of the 36 bit HW counter overflow.
>
> That doesn't actually work since you can query the 64-bit timestamp
> value from the GPU. The way this is handled in Vulkan is that the number
> of timestamp bits is reported to the application as a queue property.
>
> --Jason
* Re: [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy
  2021-05-01 2:19 ` Umesh Nerlige Ramappa
@ 2021-05-01 4:01 ` Dixit, Ashutosh
  2021-05-01 15:27 ` Jason Ekstrand
  0 siblings, 1 reply; 19+ messages in thread
From: Dixit, Ashutosh @ 2021-05-01 4:01 UTC (permalink / raw)
To: Umesh Nerlige Ramappa; +Cc: Intel GFX, Mailing list - DRI developers

On Fri, 30 Apr 2021 19:19:59 -0700, Umesh Nerlige Ramappa wrote:
>
> On Fri, Apr 30, 2021 at 07:35:41PM -0500, Jason Ekstrand wrote:
>> On April 30, 2021 18:00:58 "Dixit, Ashutosh" <ashutosh.dixit@intel.com>
>> wrote:
>>
>>> On Fri, 30 Apr 2021 15:26:09 -0700, Umesh Nerlige Ramappa wrote:
>>>
>>> Looks like the engine can be dropped since all timestamps are in sync.
>>> I just have one more question here. The timestamp itself is 36 bits.
>>> Should the uapi also report the timestamp width to the user OR should
>>> I just return the lower 32 bits of the timestamp?
>>
>> Yeah, I think reporting the timestamp width is a good idea since we're
>> reporting the period/frequency here.
>
> Actually, I forgot that we are handling the overflow before returning the
> cs_cycles to the user, and overflow handling was the only reason I thought
> the user should know the width. Would you still recommend returning the
> width in the uapi?

The width is needed for userspace to figure out if overflow has occurred
between two successive query calls. I don't think I see this happening in
the code.
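With the width exposed, userspace can handle overflow itself by computing deltas modulo 2^width between successive raw samples. A sketch under that assumption (the helper name is made up; 36 is the width discussed in the thread):

```c
#include <stdint.h>

/* Difference t1 - t0 of two raw samples from a counter that is
 * `width` bits wide, assuming at most one wrap occurred between the
 * two samples. The masked subtraction makes the wrap transparent. */
static uint64_t ts_delta(uint64_t t0, uint64_t t1, unsigned width)
{
	uint64_t mask = (width >= 64) ? ~0ull : ((1ull << width) - 1);

	return (t1 - t0) & mask;
}
```

This is exactly the calculation that silently breaks if userspace hard-codes the wrong width, which is the argument for reporting it alongside the frequency.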
* Re: [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy
  2021-05-01 4:01 ` Dixit, Ashutosh
@ 2021-05-01 15:27 ` Jason Ekstrand
  2021-05-03 18:29 ` Umesh Nerlige Ramappa
  0 siblings, 1 reply; 19+ messages in thread
From: Jason Ekstrand @ 2021-05-01 15:27 UTC (permalink / raw)
To: Dixit, Ashutosh, Umesh Nerlige Ramappa
Cc: Intel GFX, Mailing list - DRI developers

On April 30, 2021 23:01:44 "Dixit, Ashutosh" <ashutosh.dixit@intel.com> wrote:

> On Fri, 30 Apr 2021 19:19:59 -0700, Umesh Nerlige Ramappa wrote:
>>
>> On Fri, Apr 30, 2021 at 07:35:41PM -0500, Jason Ekstrand wrote:
>>> On April 30, 2021 18:00:58 "Dixit, Ashutosh" <ashutosh.dixit@intel.com>
>>> wrote:
>>>
>>> On Fri, 30 Apr 2021 15:26:09 -0700, Umesh Nerlige Ramappa wrote:
>>>
>>> Looks like the engine can be dropped since all timestamps are in sync.
>>> I just have one more question here. The timestamp itself is 36 bits.
>>> Should the uapi also report the timestamp width to the user OR should I
>>> just return the lower 32 bits of the timestamp?
>>>
>>> Yeah, I think reporting the timestamp width is a good idea since we're
>>> reporting the period/frequency here.
>>
>> Actually, I forgot that we are handling the overflow before returning the
>> cs_cycles to the user, and overflow handling was the only reason I thought
>> the user should know the width. Would you still recommend returning the
>> width in the uapi?
>
> The width is needed for userspace to figure out if overflow has occurred
> between two successive query calls. I don't think I see this happening in
> the code.

Right... We (UMDs) currently just hard-code it to 36 bits because that's
what we've had on all platforms since close enough to forever. We bake in
the frequency based on PCI ID. Returning the number of bits, like I said,
goes nicely with the frequency. It's not necessary, assuming sufficiently
smart userspace (neither is frequency), but it seems to go with it. I
guess I don't care much either way.

Coming back to the multi-tile issue we discussed internally, I think that
is something we should care about. Since this works by reading the
timestamp register on an engine, I think leaving the engine specifier in
there is fine. Userspace should know that there's actually only one clock
and just query one of them (probably RCS). For crazy multi-device cases,
we'll either query per logical device (read tile) or we'll have to make
them look like a single device and sync the timestamps somehow in the UMD
by carrying around an offset factor.

As is, this patch is

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>

I still need to review the ANV patch before we can land this though.

--Jason
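One way a UMD might use the reviewed query to correlate the two clock domains is to treat the midpoint of the cpu_delta window as the CPU time of the register read, convert cs_cycles to nanoseconds, and carry the difference around as the per-device offset Jason mentions. This is an illustrative sketch with made-up helper names, not code from any driver:

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ull

/* Estimated CPU time (ns) at which the GPU counter was sampled:
 * cpu_timestamp is captured just before the lower-dword read, which
 * completes roughly cpu_delta ns later, so split the window. */
static uint64_t cpu_time_of_sample(uint64_t cpu_timestamp, uint64_t cpu_delta)
{
	return cpu_timestamp + cpu_delta / 2;
}

/* Offset to add to a GPU timestamp (converted to ns) to place it on
 * the CPU clock selected via clockid in the query. */
static int64_t gpu_to_cpu_offset(uint64_t cs_cycles, uint64_t cs_frequency,
				 uint64_t cpu_timestamp, uint64_t cpu_delta)
{
	uint64_t gpu_ns = (uint64_t)(((unsigned __int128)cs_cycles *
				      NSEC_PER_SEC) / cs_frequency);

	return (int64_t)(cpu_time_of_sample(cpu_timestamp, cpu_delta) - gpu_ns);
}
```

A GPU event timestamped at T cycles can then be placed on the CPU timeline as `T * 1e9 / cs_frequency + offset`, with cpu_delta bounding the correlation error.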
* Re: [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy
  2021-05-01 15:27 ` Jason Ekstrand
@ 2021-05-03 18:29 ` Umesh Nerlige Ramappa
  0 siblings, 0 replies; 19+ messages in thread
From: Umesh Nerlige Ramappa @ 2021-05-03 18:29 UTC (permalink / raw)
To: Jason Ekstrand; +Cc: Intel GFX, Mailing list - DRI developers

On Sat, May 01, 2021 at 10:27:03AM -0500, Jason Ekstrand wrote:
> On April 30, 2021 23:01:44 "Dixit, Ashutosh" <ashutosh.dixit@intel.com> wrote:
> [snip earlier discussion of the 36-bit timestamp width]
>
> Right... We (UMDs) currently just hard-code it to 36 bits because that's
> what we've had on all platforms since close enough to forever. We bake in
> the frequency based on PCI ID. Returning the number of bits, like I said,
> goes nicely with the frequency. It's not necessary, assuming sufficiently
> smart userspace (neither is frequency), but it seems to go with it. I
> guess I don't care much either way.
>
> Coming back to the multi-tile issue we discussed internally, I think that
> is something we should care about. Since this works by reading the
> timestamp register on an engine, I think leaving the engine specifier in
> there is fine. Userspace should know that there's actually only one clock
> and just query one of them (probably RCS). For crazy multi-device cases,
> we'll either query per logical device (read tile) or we'll have to make
> them look like a single device and sync the timestamps somehow in the UMD
> by carrying around an offset factor.
>
> As is, this patch is
>
> Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>

Thanks, I will add the width here and post the final version.

Regards,
Umesh

> I still need to review the ANV patch before we can land this though.
>
> --Jason
* [Intel-gfx] ✗ Fi.CI.DOCS: warning for Add support for querying engine cycles
  2021-04-29 0:34 [Intel-gfx] [PATCH 0/1] Add support for querying engine cycles Umesh Nerlige Ramappa
  2021-04-29 0:34 ` [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy Umesh Nerlige Ramappa
@ 2021-04-29 1:34 ` Patchwork
  2021-04-29 1:59 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
  2021-04-29 3:26 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
  3 siblings, 0 replies; 19+ messages in thread
From: Patchwork @ 2021-04-29 1:34 UTC (permalink / raw)
To: Umesh Nerlige Ramappa; +Cc: intel-gfx

== Series Details ==

Series: Add support for querying engine cycles
URL   : https://patchwork.freedesktop.org/series/89615/
State : warning

== Summary ==

$ make htmldocs 2>&1 > /dev/null | grep i915
./include/uapi/drm/i915_drm.h:2234: warning: Incorrect use of kernel-doc format: * Query Command Streamer timestamp register.
./include/uapi/drm/i915_drm.h:2420: warning: Incorrect use of kernel-doc format: * Command streamer cycles as read from the command streamer
./include/uapi/drm/i915_drm.h:2429: warning: Incorrect use of kernel-doc format: * CPU timestamp in ns. The timestamp is captured before reading the
./include/uapi/drm/i915_drm.h:2435: warning: Incorrect use of kernel-doc format: * Time delta in ns captured around reading the lower dword of the
./include/uapi/drm/i915_drm.h:2441: warning: Incorrect use of kernel-doc format: * Reference clock id for CPU timestamp. For definition, see
./include/uapi/drm/i915_drm.h:2450: warning: Function parameter or member 'engine' not described in 'drm_i915_query_cs_cycles'
./include/uapi/drm/i915_drm.h:2450: warning: Function parameter or member 'flags' not described in 'drm_i915_query_cs_cycles'
./include/uapi/drm/i915_drm.h:2450: warning: Function parameter or member 'cs_cycles' not described in 'drm_i915_query_cs_cycles'
./include/uapi/drm/i915_drm.h:2450: warning: Function parameter or member 'cs_frequency' not described in 'drm_i915_query_cs_cycles'
./include/uapi/drm/i915_drm.h:2450: warning: Function parameter or member 'cpu_timestamp' not described in 'drm_i915_query_cs_cycles'
./include/uapi/drm/i915_drm.h:2450: warning: Function parameter or member 'cpu_delta' not described in 'drm_i915_query_cs_cycles'
./include/uapi/drm/i915_drm.h:2450: warning: Function parameter or member 'clockid' not described in 'drm_i915_query_cs_cycles'
./include/uapi/drm/i915_drm.h:2450: warning: Function parameter or member 'rsvd' not described in 'drm_i915_query_cs_cycles'
* [Intel-gfx] ✓ Fi.CI.BAT: success for Add support for querying engine cycles 2021-04-29 0:34 [Intel-gfx] [PATCH 0/1] Add support for querying engine cycles Umesh Nerlige Ramappa 2021-04-29 0:34 ` [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy Umesh Nerlige Ramappa 2021-04-29 1:34 ` [Intel-gfx] ✗ Fi.CI.DOCS: warning for Add support for querying engine cycles Patchwork @ 2021-04-29 1:59 ` Patchwork 2021-04-29 3:26 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork 3 siblings, 0 replies; 19+ messages in thread From: Patchwork @ 2021-04-29 1:59 UTC (permalink / raw) To: Umesh Nerlige Ramappa; +Cc: intel-gfx [-- Attachment #1.1: Type: text/plain, Size: 2830 bytes --] == Series Details == Series: Add support for querying engine cycles URL : https://patchwork.freedesktop.org/series/89615/ State : success == Summary == CI Bug Log - changes from CI_DRM_10023 -> Patchwork_20025 ==================================================== Summary ------- **SUCCESS** No regressions found. 
External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/index.html Known issues ------------ Here are the changes found in Patchwork_20025 that come from known issues: ### IGT changes ### #### Issues hit #### * igt@i915_pm_rpm@module-reload: - fi-kbl-7500u: [PASS][1] -> [DMESG-WARN][2] ([i915#2605]) [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/fi-kbl-7500u/igt@i915_pm_rpm@module-reload.html [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/fi-kbl-7500u/igt@i915_pm_rpm@module-reload.html * igt@runner@aborted: - fi-bdw-5557u: NOTRUN -> [FAIL][3] ([i915#1602] / [i915#2029]) [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/fi-bdw-5557u/igt@runner@aborted.html #### Possible fixes #### * igt@kms_frontbuffer_tracking@basic: - {fi-rkl-11500t}: [SKIP][4] ([i915#1849] / [i915#3180]) -> [PASS][5] [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/fi-rkl-11500t/igt@kms_frontbuffer_tracking@basic.html [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/fi-rkl-11500t/igt@kms_frontbuffer_tracking@basic.html {name}: This element is suppressed. This means it is ignored when computing the status of the difference (SUCCESS, WARNING, or FAILURE). 
[i915#1602]: https://gitlab.freedesktop.org/drm/intel/issues/1602 [i915#1849]: https://gitlab.freedesktop.org/drm/intel/issues/1849 [i915#2029]: https://gitlab.freedesktop.org/drm/intel/issues/2029 [i915#2605]: https://gitlab.freedesktop.org/drm/intel/issues/2605 [i915#3180]: https://gitlab.freedesktop.org/drm/intel/issues/3180 Participating hosts (44 -> 40) ------------------------------ Missing (4): fi-ilk-m540 fi-bsw-cyan fi-bdw-samus fi-hsw-4200u Build changes ------------- * IGT: IGT_6076 -> IGTPW_5769 * Linux: CI_DRM_10023 -> Patchwork_20025 CI-20190529: 20190529 CI_DRM_10023: a8bf9e284933fa5c1cb821b48ba95821e5d1cc3f @ git://anongit.freedesktop.org/gfx-ci/linux IGTPW_5769: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_5769/index.html IGT_6076: 9ab0820dbd07781161c1ace6973ea222fd24e53a @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools Patchwork_20025: 2ad9b6fced1ef0ba95279c3b6c9891829ce37694 @ git://anongit.freedesktop.org/gfx-ci/linux == Linux commits == 2ad9b6fced1e i915/query: Correlate engine and cpu timestamps with better accuracy == Logs == For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/index.html [-- Attachment #1.2: Type: text/html, Size: 3503 bytes --] [-- Attachment #2: Type: text/plain, Size: 160 bytes --] _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/intel-gfx ^ permalink raw reply [flat|nested] 19+ messages in thread
* [Intel-gfx] ✓ Fi.CI.IGT: success for Add support for querying engine cycles 2021-04-29 0:34 [Intel-gfx] [PATCH 0/1] Add support for querying engine cycles Umesh Nerlige Ramappa ` (2 preceding siblings ...) 2021-04-29 1:59 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork @ 2021-04-29 3:26 ` Patchwork 3 siblings, 0 replies; 19+ messages in thread From: Patchwork @ 2021-04-29 3:26 UTC (permalink / raw) To: Umesh Nerlige Ramappa; +Cc: intel-gfx [-- Attachment #1.1: Type: text/plain, Size: 30261 bytes --] == Series Details == Series: Add support for querying engine cycles URL : https://patchwork.freedesktop.org/series/89615/ State : success == Summary == CI Bug Log - changes from CI_DRM_10023_full -> Patchwork_20025_full ==================================================== Summary ------- **SUCCESS** No regressions found. New tests --------- New tests have been introduced between CI_DRM_10023_full and Patchwork_20025_full: ### New IGT tests (2) ### * igt@i915_query@cs-cycles: - Statuses : 5 pass(s) - Exec time: [0.00, 0.13] s * igt@i915_query@cs-cycles-invalid: - Statuses : 7 pass(s) - Exec time: [0.00, 0.04] s Known issues ------------ Here are the changes found in Patchwork_20025_full that come from known issues: ### IGT changes ### #### Issues hit #### * igt@gem_create@create-massive: - shard-snb: NOTRUN -> [DMESG-WARN][1] ([i915#3002]) [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-snb2/igt@gem_create@create-massive.html - shard-kbl: NOTRUN -> [DMESG-WARN][2] ([i915#3002]) [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl2/igt@gem_create@create-massive.html - shard-skl: NOTRUN -> [DMESG-WARN][3] ([i915#3002]) [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl10/igt@gem_create@create-massive.html * igt@gem_ctx_isolation@preservation-s3@vecs0: - shard-kbl: [PASS][4] -> [DMESG-WARN][5] ([i915#180]) +1 similar issue [4]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-kbl3/igt@gem_ctx_isolation@preservation-s3@vecs0.html [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl4/igt@gem_ctx_isolation@preservation-s3@vecs0.html * igt@gem_ctx_persistence@clone: - shard-snb: NOTRUN -> [SKIP][6] ([fdo#109271] / [i915#1099]) +3 similar issues [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-snb6/igt@gem_ctx_persistence@clone.html * igt@gem_ctx_ringsize@active@bcs0: - shard-skl: [PASS][7] -> [INCOMPLETE][8] ([i915#3316]) [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-skl6/igt@gem_ctx_ringsize@active@bcs0.html [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl10/igt@gem_ctx_ringsize@active@bcs0.html * igt@gem_eio@unwedge-stress: - shard-tglb: [PASS][9] -> [TIMEOUT][10] ([i915#2369] / [i915#3063]) [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-tglb5/igt@gem_eio@unwedge-stress.html [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb1/igt@gem_eio@unwedge-stress.html * igt@gem_exec_fair@basic-pace@vcs0: - shard-iclb: NOTRUN -> [FAIL][11] ([i915#2842]) +1 similar issue [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb5/igt@gem_exec_fair@basic-pace@vcs0.html * igt@gem_exec_fair@basic-pace@vecs0: - shard-kbl: NOTRUN -> [FAIL][12] ([i915#2842]) [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl3/igt@gem_exec_fair@basic-pace@vecs0.html - shard-tglb: NOTRUN -> [FAIL][13] ([i915#2842]) +2 similar issues [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb1/igt@gem_exec_fair@basic-pace@vecs0.html * igt@gem_exec_whisper@basic-fds-priority-all: - shard-glk: [PASS][14] -> [DMESG-WARN][15] ([i915#118] / [i915#95]) [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-glk6/igt@gem_exec_whisper@basic-fds-priority-all.html [15]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-glk1/igt@gem_exec_whisper@basic-fds-priority-all.html * igt@gem_huc_copy@huc-copy: - shard-tglb: [PASS][16] -> [SKIP][17] ([i915#2190]) [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-tglb5/igt@gem_huc_copy@huc-copy.html [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb6/igt@gem_huc_copy@huc-copy.html * igt@gem_mmap_gtt@big-copy-xy: - shard-skl: [PASS][18] -> [FAIL][19] ([i915#307]) [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-skl6/igt@gem_mmap_gtt@big-copy-xy.html [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl4/igt@gem_mmap_gtt@big-copy-xy.html * igt@gem_render_copy@y-tiled-mc-ccs-to-vebox-yf-tiled: - shard-iclb: NOTRUN -> [SKIP][20] ([i915#768]) [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb7/igt@gem_render_copy@y-tiled-mc-ccs-to-vebox-yf-tiled.html * igt@gem_softpin@noreloc-s3: - shard-apl: NOTRUN -> [DMESG-WARN][21] ([i915#180]) +1 similar issue [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl6/igt@gem_softpin@noreloc-s3.html * igt@gem_userptr_blits@dmabuf-unsync: - shard-tglb: NOTRUN -> [SKIP][22] ([i915#3297]) +1 similar issue [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb6/igt@gem_userptr_blits@dmabuf-unsync.html - shard-iclb: NOTRUN -> [SKIP][23] ([i915#3297]) +1 similar issue [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb6/igt@gem_userptr_blits@dmabuf-unsync.html * igt@gem_userptr_blits@set-cache-level: - shard-snb: NOTRUN -> [FAIL][24] ([i915#3324]) [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-snb5/igt@gem_userptr_blits@set-cache-level.html - shard-skl: NOTRUN -> [FAIL][25] ([i915#3324]) [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl10/igt@gem_userptr_blits@set-cache-level.html - shard-tglb: NOTRUN -> [FAIL][26] ([i915#3324]) [26]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb3/igt@gem_userptr_blits@set-cache-level.html - shard-apl: NOTRUN -> [FAIL][27] ([i915#3324]) [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl1/igt@gem_userptr_blits@set-cache-level.html - shard-iclb: NOTRUN -> [FAIL][28] ([i915#3324]) [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb1/igt@gem_userptr_blits@set-cache-level.html - shard-glk: NOTRUN -> [FAIL][29] ([i915#3324]) [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-glk8/igt@gem_userptr_blits@set-cache-level.html * igt@gem_userptr_blits@vma-merge: - shard-apl: NOTRUN -> [FAIL][30] ([i915#3318]) [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl7/igt@gem_userptr_blits@vma-merge.html * igt@gem_workarounds@suspend-resume-fd: - shard-kbl: NOTRUN -> [DMESG-WARN][31] ([i915#180]) [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl6/igt@gem_workarounds@suspend-resume-fd.html * igt@i915_pm_rpm@modeset-pc8-residency-stress: - shard-apl: NOTRUN -> [SKIP][32] ([fdo#109271]) +257 similar issues [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl2/igt@i915_pm_rpm@modeset-pc8-residency-stress.html - shard-tglb: NOTRUN -> [SKIP][33] ([fdo#109506] / [i915#2411]) [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb6/igt@i915_pm_rpm@modeset-pc8-residency-stress.html - shard-iclb: NOTRUN -> [SKIP][34] ([fdo#109293] / [fdo#109506]) [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb2/igt@i915_pm_rpm@modeset-pc8-residency-stress.html * igt@i915_selftest@live@gt_pm: - shard-skl: NOTRUN -> [DMESG-FAIL][35] ([i915#1886] / [i915#2291]) [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl10/igt@i915_selftest@live@gt_pm.html * igt@i915_suspend@forcewake: - shard-skl: [PASS][36] -> [INCOMPLETE][37] ([i915#636]) [36]: 
https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-skl4/igt@i915_suspend@forcewake.html [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl8/igt@i915_suspend@forcewake.html * igt@kms_big_fb@linear-64bpp-rotate-90: - shard-tglb: NOTRUN -> [SKIP][38] ([fdo#111614]) +1 similar issue [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb5/igt@kms_big_fb@linear-64bpp-rotate-90.html * igt@kms_big_fb@x-tiled-32bpp-rotate-270: - shard-iclb: NOTRUN -> [SKIP][39] ([fdo#110725] / [fdo#111614]) +1 similar issue [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb6/igt@kms_big_fb@x-tiled-32bpp-rotate-270.html * igt@kms_big_joiner@basic: - shard-apl: NOTRUN -> [SKIP][40] ([fdo#109271] / [i915#2705]) [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl8/igt@kms_big_joiner@basic.html * igt@kms_ccs@pipe-c-ccs-on-another-bo: - shard-glk: NOTRUN -> [SKIP][41] ([fdo#109271]) +39 similar issues [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-glk9/igt@kms_ccs@pipe-c-ccs-on-another-bo.html - shard-skl: NOTRUN -> [SKIP][42] ([fdo#109271] / [fdo#111304]) +1 similar issue [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl6/igt@kms_ccs@pipe-c-ccs-on-another-bo.html * igt@kms_ccs@pipe-c-random-ccs-data: - shard-snb: NOTRUN -> [SKIP][43] ([fdo#109271]) +402 similar issues [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-snb7/igt@kms_ccs@pipe-c-random-ccs-data.html * igt@kms_chamelium@hdmi-hpd-storm: - shard-kbl: NOTRUN -> [SKIP][44] ([fdo#109271] / [fdo#111827]) +15 similar issues [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl2/igt@kms_chamelium@hdmi-hpd-storm.html * igt@kms_chamelium@vga-hpd-without-ddc: - shard-iclb: NOTRUN -> [SKIP][45] ([fdo#109284] / [fdo#111827]) +1 similar issue [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb2/igt@kms_chamelium@vga-hpd-without-ddc.html - shard-tglb: 
NOTRUN -> [SKIP][46] ([fdo#109284] / [fdo#111827]) +1 similar issue [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb3/igt@kms_chamelium@vga-hpd-without-ddc.html - shard-glk: NOTRUN -> [SKIP][47] ([fdo#109271] / [fdo#111827]) +1 similar issue [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-glk9/igt@kms_chamelium@vga-hpd-without-ddc.html * igt@kms_color_chamelium@pipe-a-ctm-blue-to-red: - shard-snb: NOTRUN -> [SKIP][48] ([fdo#109271] / [fdo#111827]) +22 similar issues [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-snb2/igt@kms_color_chamelium@pipe-a-ctm-blue-to-red.html * igt@kms_color_chamelium@pipe-d-degamma: - shard-skl: NOTRUN -> [SKIP][49] ([fdo#109271] / [fdo#111827]) +9 similar issues [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl10/igt@kms_color_chamelium@pipe-d-degamma.html * igt@kms_color_chamelium@pipe-invalid-degamma-lut-sizes: - shard-apl: NOTRUN -> [SKIP][50] ([fdo#109271] / [fdo#111827]) +22 similar issues [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl3/igt@kms_color_chamelium@pipe-invalid-degamma-lut-sizes.html * igt@kms_content_protection@atomic-dpms: - shard-apl: NOTRUN -> [TIMEOUT][51] ([i915#1319]) [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl3/igt@kms_content_protection@atomic-dpms.html - shard-kbl: NOTRUN -> [TIMEOUT][52] ([i915#1319]) [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl6/igt@kms_content_protection@atomic-dpms.html * igt@kms_content_protection@uevent: - shard-kbl: NOTRUN -> [FAIL][53] ([i915#2105]) [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl2/igt@kms_content_protection@uevent.html * igt@kms_cursor_crc@pipe-b-cursor-128x128-random: - shard-skl: [PASS][54] -> [FAIL][55] ([i915#54]) +1 similar issue [54]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-skl1/igt@kms_cursor_crc@pipe-b-cursor-128x128-random.html [55]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl5/igt@kms_cursor_crc@pipe-b-cursor-128x128-random.html * igt@kms_cursor_crc@pipe-b-cursor-32x10-random: - shard-tglb: NOTRUN -> [SKIP][56] ([i915#3359]) +1 similar issue [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb7/igt@kms_cursor_crc@pipe-b-cursor-32x10-random.html * igt@kms_cursor_crc@pipe-b-cursor-512x170-random: - shard-iclb: NOTRUN -> [SKIP][57] ([fdo#109278] / [fdo#109279]) [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb4/igt@kms_cursor_crc@pipe-b-cursor-512x170-random.html * igt@kms_cursor_crc@pipe-b-cursor-suspend: - shard-apl: NOTRUN -> [FAIL][58] ([i915#54]) [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl7/igt@kms_cursor_crc@pipe-b-cursor-suspend.html - shard-kbl: NOTRUN -> [FAIL][59] ([i915#54]) [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl4/igt@kms_cursor_crc@pipe-b-cursor-suspend.html * igt@kms_cursor_crc@pipe-d-cursor-32x32-rapid-movement: - shard-tglb: NOTRUN -> [SKIP][60] ([i915#3319]) [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb6/igt@kms_cursor_crc@pipe-d-cursor-32x32-rapid-movement.html * igt@kms_cursor_crc@pipe-d-cursor-512x512-onscreen: - shard-tglb: NOTRUN -> [SKIP][61] ([fdo#109279] / [i915#3359]) +1 similar issue [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb1/igt@kms_cursor_crc@pipe-d-cursor-512x512-onscreen.html * igt@kms_cursor_edge_walk@pipe-d-256x256-left-edge: - shard-iclb: NOTRUN -> [SKIP][62] ([fdo#109278]) +11 similar issues [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb1/igt@kms_cursor_edge_walk@pipe-d-256x256-left-edge.html * igt@kms_cursor_legacy@2x-long-flip-vs-cursor-atomic: - shard-iclb: NOTRUN -> [SKIP][63] ([fdo#109274] / [fdo#109278]) +1 similar issue [63]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb7/igt@kms_cursor_legacy@2x-long-flip-vs-cursor-atomic.html * igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy: - shard-glk: [PASS][64] -> [FAIL][65] ([i915#72]) [64]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-glk3/igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy.html [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-glk7/igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy.html * igt@kms_cursor_legacy@pipe-d-single-bo: - shard-kbl: NOTRUN -> [SKIP][66] ([fdo#109271] / [i915#533]) [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl1/igt@kms_cursor_legacy@pipe-d-single-bo.html * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile: - shard-apl: NOTRUN -> [SKIP][67] ([fdo#109271] / [i915#2642]) [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl2/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile.html * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs: - shard-kbl: NOTRUN -> [SKIP][68] ([fdo#109271] / [i915#2672]) [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl2/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs.html * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-blt: - shard-kbl: NOTRUN -> [SKIP][69] ([fdo#109271]) +218 similar issues [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl1/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-blt.html * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-onoff: - shard-tglb: NOTRUN -> [SKIP][70] ([fdo#111825]) +11 similar issues [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb5/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-onoff.html * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-pwrite: - shard-skl: NOTRUN -> [SKIP][71] ([fdo#109271]) +137 similar issues [71]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl3/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-pwrite.html * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-shrfb-plflip-blt: - shard-iclb: NOTRUN -> [SKIP][72] ([fdo#109280]) +9 similar issues [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb7/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-shrfb-plflip-blt.html * igt@kms_hdr@bpc-switch-suspend: - shard-skl: NOTRUN -> [FAIL][73] ([i915#1188]) [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl1/igt@kms_hdr@bpc-switch-suspend.html * igt@kms_pipe_b_c_ivb@from-pipe-c-to-b-with-3-lanes: - shard-iclb: NOTRUN -> [SKIP][74] ([fdo#109289]) +2 similar issues [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb5/igt@kms_pipe_b_c_ivb@from-pipe-c-to-b-with-3-lanes.html - shard-tglb: NOTRUN -> [SKIP][75] ([fdo#109289]) +2 similar issues [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb2/igt@kms_pipe_b_c_ivb@from-pipe-c-to-b-with-3-lanes.html * igt@kms_pipe_crc_basic@hang-read-crc-pipe-d: - shard-skl: NOTRUN -> [SKIP][76] ([fdo#109271] / [i915#533]) [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl1/igt@kms_pipe_crc_basic@hang-read-crc-pipe-d.html * igt@kms_pipe_crc_basic@nonblocking-crc-pipe-d-frame-sequence: - shard-apl: NOTRUN -> [SKIP][77] ([fdo#109271] / [i915#533]) +1 similar issue [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl8/igt@kms_pipe_crc_basic@nonblocking-crc-pipe-d-frame-sequence.html * igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes: - shard-kbl: [PASS][78] -> [DMESG-WARN][79] ([i915#180] / [i915#533]) [78]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-kbl2/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes.html [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl7/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-a-planes.html * 
igt@kms_plane@plane-position-covered-pipe-b-planes: - shard-skl: [PASS][80] -> [SKIP][81] ([fdo#109271]) +18 similar issues [80]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-skl7/igt@kms_plane@plane-position-covered-pipe-b-planes.html [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl8/igt@kms_plane@plane-position-covered-pipe-b-planes.html * igt@kms_plane_alpha_blend@pipe-a-alpha-7efc: - shard-kbl: NOTRUN -> [FAIL][82] ([fdo#108145] / [i915#265]) +3 similar issues [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl4/igt@kms_plane_alpha_blend@pipe-a-alpha-7efc.html * igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb: - shard-skl: NOTRUN -> [FAIL][83] ([i915#265]) [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl8/igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb.html - shard-apl: NOTRUN -> [FAIL][84] ([i915#265]) [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl7/igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb.html - shard-glk: NOTRUN -> [FAIL][85] ([i915#265]) [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-glk2/igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb.html - shard-kbl: NOTRUN -> [FAIL][86] ([i915#265]) [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl4/igt@kms_plane_alpha_blend@pipe-a-alpha-transparent-fb.html * igt@kms_plane_alpha_blend@pipe-b-alpha-7efc: - shard-apl: NOTRUN -> [FAIL][87] ([fdo#108145] / [i915#265]) +4 similar issues [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl7/igt@kms_plane_alpha_blend@pipe-b-alpha-7efc.html * igt@kms_plane_alpha_blend@pipe-b-coverage-7efc: - shard-skl: [PASS][88] -> [FAIL][89] ([fdo#108145] / [i915#265]) [88]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-skl1/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html [89]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl5/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html * igt@kms_plane_alpha_blend@pipe-c-alpha-opaque-fb: - shard-skl: NOTRUN -> [FAIL][90] ([fdo#108145] / [i915#265]) +1 similar issue [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl1/igt@kms_plane_alpha_blend@pipe-c-alpha-opaque-fb.html - shard-glk: NOTRUN -> [FAIL][91] ([fdo#108145] / [i915#265]) [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-glk7/igt@kms_plane_alpha_blend@pipe-c-alpha-opaque-fb.html * igt@kms_plane_multiple@atomic-pipe-a-tiling-yf: - shard-tglb: NOTRUN -> [SKIP][92] ([fdo#111615]) [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb6/igt@kms_plane_multiple@atomic-pipe-a-tiling-yf.html * igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3: - shard-apl: NOTRUN -> [SKIP][93] ([fdo#109271] / [i915#658]) +4 similar issues [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl7/igt@kms_psr2_sf@overlay-primary-update-sf-dmg-area-3.html * igt@kms_psr2_sf@plane-move-sf-dmg-area-0: - shard-iclb: NOTRUN -> [SKIP][94] ([i915#2920]) [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb2/igt@kms_psr2_sf@plane-move-sf-dmg-area-0.html - shard-glk: NOTRUN -> [SKIP][95] ([fdo#109271] / [i915#658]) [95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-glk9/igt@kms_psr2_sf@plane-move-sf-dmg-area-0.html * igt@kms_psr2_su@frontbuffer: - shard-iclb: [PASS][96] -> [SKIP][97] ([fdo#109642] / [fdo#111068] / [i915#658]) [96]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-iclb2/igt@kms_psr2_su@frontbuffer.html [97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb1/igt@kms_psr2_su@frontbuffer.html * igt@kms_psr2_su@page_flip: - shard-kbl: NOTRUN -> [SKIP][98] ([fdo#109271] / [i915#658]) +2 similar issues [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl6/igt@kms_psr2_su@page_flip.html 
- shard-skl: NOTRUN -> [SKIP][99] ([fdo#109271] / [i915#658]) +1 similar issue [99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl1/igt@kms_psr2_su@page_flip.html * igt@kms_psr@psr2_primary_mmap_cpu: - shard-iclb: NOTRUN -> [SKIP][100] ([fdo#109441]) +1 similar issue [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb1/igt@kms_psr@psr2_primary_mmap_cpu.html - shard-tglb: NOTRUN -> [FAIL][101] ([i915#132]) +1 similar issue [101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb3/igt@kms_psr@psr2_primary_mmap_cpu.html * igt@kms_psr@psr2_primary_mmap_gtt: - shard-iclb: [PASS][102] -> [SKIP][103] ([fdo#109441]) +1 similar issue [102]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-iclb2/igt@kms_psr@psr2_primary_mmap_gtt.html [103]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb5/igt@kms_psr@psr2_primary_mmap_gtt.html * igt@kms_vblank@pipe-a-accuracy-idle: - shard-skl: NOTRUN -> [FAIL][104] ([i915#43]) [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl2/igt@kms_vblank@pipe-a-accuracy-idle.html * igt@kms_writeback@writeback-fb-id: - shard-apl: NOTRUN -> [SKIP][105] ([fdo#109271] / [i915#2437]) [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl7/igt@kms_writeback@writeback-fb-id.html * igt@kms_writeback@writeback-invalid-parameters: - shard-skl: NOTRUN -> [SKIP][106] ([fdo#109271] / [i915#2437]) [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl4/igt@kms_writeback@writeback-invalid-parameters.html - shard-kbl: NOTRUN -> [SKIP][107] ([fdo#109271] / [i915#2437]) [107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl6/igt@kms_writeback@writeback-invalid-parameters.html * igt@perf@polling: - shard-skl: [PASS][108] -> [FAIL][109] ([i915#1542]) +1 similar issue [108]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-skl4/igt@perf@polling.html [109]: 
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl7/igt@perf@polling.html * igt@prime_nv_api@i915_nv_double_export: - shard-iclb: NOTRUN -> [SKIP][110] ([fdo#109291]) +1 similar issue [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb4/igt@prime_nv_api@i915_nv_double_export.html * igt@prime_nv_api@nv_self_import: - shard-tglb: NOTRUN -> [SKIP][111] ([fdo#109291]) +1 similar issue [111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb7/igt@prime_nv_api@nv_self_import.html * igt@sysfs_clients@busy: - shard-skl: NOTRUN -> [SKIP][112] ([fdo#109271] / [i915#2994]) +1 similar issue [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl8/igt@sysfs_clients@busy.html * igt@sysfs_clients@fair-1: - shard-glk: NOTRUN -> [SKIP][113] ([fdo#109271] / [i915#2994]) [113]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-glk6/igt@sysfs_clients@fair-1.html - shard-iclb: NOTRUN -> [SKIP][114] ([i915#2994]) [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb8/igt@sysfs_clients@fair-1.html - shard-tglb: NOTRUN -> [SKIP][115] ([i915#2994]) [115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb7/igt@sysfs_clients@fair-1.html - shard-kbl: NOTRUN -> [SKIP][116] ([fdo#109271] / [i915#2994]) +1 similar issue [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl7/igt@sysfs_clients@fair-1.html * igt@sysfs_clients@sema-50: - shard-apl: NOTRUN -> [SKIP][117] ([fdo#109271] / [i915#2994]) +3 similar issues [117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl2/igt@sysfs_clients@sema-50.html #### Possible fixes #### * igt@gem_create@create-clear: - shard-iclb: [FAIL][118] ([i915#3160]) -> [PASS][119] [118]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-iclb1/igt@gem_create@create-clear.html [119]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-iclb1/igt@gem_create@create-clear.html * 
igt@gem_exec_fair@basic-none-share@rcs0: - shard-tglb: [FAIL][120] ([i915#2842]) -> [PASS][121] [120]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-tglb7/igt@gem_exec_fair@basic-none-share@rcs0.html [121]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb2/igt@gem_exec_fair@basic-none-share@rcs0.html * igt@gem_exec_fair@basic-none-vip@rcs0: - shard-glk: [FAIL][122] ([i915#2842]) -> [PASS][123] [122]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-glk4/igt@gem_exec_fair@basic-none-vip@rcs0.html [123]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-glk7/igt@gem_exec_fair@basic-none-vip@rcs0.html * igt@gem_exec_fair@basic-pace-share@rcs0: - shard-kbl: [FAIL][124] ([i915#2842]) -> [PASS][125] [124]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-kbl2/igt@gem_exec_fair@basic-pace-share@rcs0.html [125]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl4/igt@gem_exec_fair@basic-pace-share@rcs0.html * igt@gem_mmap_gtt@cpuset-basic-small-copy-xy: - shard-glk: [FAIL][126] ([i915#307]) -> [PASS][127] [126]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-glk4/igt@gem_mmap_gtt@cpuset-basic-small-copy-xy.html [127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-glk4/igt@gem_mmap_gtt@cpuset-basic-small-copy-xy.html * igt@i915_pm_dc@dc9-dpms: - shard-apl: [SKIP][128] ([fdo#109271]) -> [PASS][129] [128]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-apl7/igt@i915_pm_dc@dc9-dpms.html [129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-apl3/igt@i915_pm_dc@dc9-dpms.html * igt@kms_async_flips@alternate-sync-async-flip: - shard-skl: [FAIL][130] ([i915#2521]) -> [PASS][131] [130]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-skl3/igt@kms_async_flips@alternate-sync-async-flip.html [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl1/igt@kms_async_flips@alternate-sync-async-flip.html * 
igt@kms_big_fb@x-tiled-64bpp-rotate-180:
    - shard-tglb: [FAIL][132] -> [PASS][133]
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-tglb6/igt@kms_big_fb@x-tiled-64bpp-rotate-180.html
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-tglb1/igt@kms_big_fb@x-tiled-64bpp-rotate-180.html

  * igt@kms_cursor_crc@pipe-a-cursor-256x85-onscreen:
    - shard-skl: [FAIL][134] ([i915#54]) -> [PASS][135]
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-skl1/igt@kms_cursor_crc@pipe-a-cursor-256x85-onscreen.html
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-skl9/igt@kms_cursor_crc@pipe-a-cursor-256x85-onscreen.html
    - shard-kbl: [FAIL][136] ([i915#54]) -> [PASS][137]
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-kbl6/igt@kms_cursor_crc@pipe-a-cursor-256x85-onscreen.html
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/shard-kbl4/igt@kms_cursor_crc@pipe-a-cursor-256x85-onscreen.html
    - shard-glk: [FAIL][138] ([i915#54]) -> [PASS][139]
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10023/shard-glk6/igt@kms_cursor_crc@pipe-

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20025/index.html
* [Intel-gfx] [PATCH 0/1] Add support for querying engine cycles
@ 2021-05-04  0:12 Umesh Nerlige Ramappa
  0 siblings, 0 replies; 19+ messages in thread
From: Umesh Nerlige Ramappa @ 2021-05-04 0:12 UTC (permalink / raw)
To: intel-gfx; +Cc: dri-devel

This is just a refresh of the earlier patch along with cover letter for the IGT
testing. The query provides the engine cs cycles counter.

v2: Use GRAPHICS_VER() instead of IG_GEN()
v3: Add R-b to the patch
v4: Split cpu timestamp array into timestamp and delta for cleaner API
v5: Add width of the cs cycles to the uapi

Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Test-with: 20210504001003.69445-1-umesh.nerlige.ramappa@intel.com

Umesh Nerlige Ramappa (1):
  i915/query: Correlate engine and cpu timestamps with better accuracy

 drivers/gpu/drm/i915/i915_query.c | 157 ++++++++++++++++++++++++++++++
 include/uapi/drm/i915_drm.h       |  56 +++++++++++
 2 files changed, 213 insertions(+)

--
2.20.1
* [Intel-gfx] [PATCH 0/1] Add support for querying engine cycles
@ 2021-04-27 21:53 Umesh Nerlige Ramappa
  0 siblings, 0 replies; 19+ messages in thread
From: Umesh Nerlige Ramappa @ 2021-04-27 21:53 UTC (permalink / raw)
To: intel-gfx

This is just a refresh of the earlier patch along with cover letter for the IGT
testing. The query provides the engine cs cycles counter.

v2: Use GRAPHICS_VER() instead of IG_GEN()
v3: Add R-b to the patch

Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Test-with: 20210421172046.65062-1-umesh.nerlige.ramappa@intel.com

Umesh Nerlige Ramappa (1):
  i915/query: Correlate engine and cpu timestamps with better accuracy

 drivers/gpu/drm/i915/i915_query.c | 145 ++++++++++++++++++++++++++++++
 include/uapi/drm/i915_drm.h       |  48 ++++++++++
 2 files changed, 193 insertions(+)

--
2.20.1
* [Intel-gfx] [PATCH 0/1] Add support for querying engine cycles
@ 2021-04-27 21:49 Umesh Nerlige Ramappa
  0 siblings, 0 replies; 19+ messages in thread
From: Umesh Nerlige Ramappa @ 2021-04-27 21:49 UTC (permalink / raw)
To: intel-gfx; +Cc: Chris Wilson

This is just a refresh of the earlier patch along with cover letter for the IGT
testing. The query provides the engine cs cycles counter.

v2: Use GRAPHICS_VER() instead of IG_GEN()

Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Test-with: 20210421172046.65062-1-umesh.nerlige.ramappa@intel.com

Umesh Nerlige Ramappa (1):
  i915/query: Correlate engine and cpu timestamps with better accuracy

 drivers/gpu/drm/i915/i915_query.c | 145 ++++++++++++++++++++++++++++++
 include/uapi/drm/i915_drm.h       |  48 ++++++++++
 2 files changed, 193 insertions(+)

--
2.20.1
* [Intel-gfx] [PATCH 0/1] Add support for querying engine cycles
@ 2021-04-21 17:28 Umesh Nerlige Ramappa
  0 siblings, 0 replies; 19+ messages in thread
From: Umesh Nerlige Ramappa @ 2021-04-21 17:28 UTC (permalink / raw)
To: intel-gfx; +Cc: Chris Wilson

This is just a refresh of the earlier patch along with cover letter for the IGT
testing. The query provides the engine cs cycles counter.

Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Test-with: 20210421172046.65062-1-umesh.nerlige.ramappa@intel.com

Umesh Nerlige Ramappa (1):
  i915/query: Correlate engine and cpu timestamps with better accuracy

 drivers/gpu/drm/i915/i915_query.c | 145 ++++++++++++++++++++++++++++++
 include/uapi/drm/i915_drm.h       |  48 ++++++++++
 2 files changed, 193 insertions(+)

--
2.20.1
end of thread, other threads:[~2021-05-04  0:12 UTC | newest]

Thread overview: 19+ messages:
2021-04-29  0:34 [Intel-gfx] [PATCH 0/1] Add support for querying engine cycles Umesh Nerlige Ramappa
2021-04-29  0:34 ` [Intel-gfx] [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy Umesh Nerlige Ramappa
2021-04-29  8:34   ` Lionel Landwerlin
2021-04-29 19:07   ` Jason Ekstrand
2021-04-30 22:26     ` Umesh Nerlige Ramappa
2021-04-30 23:00       ` Dixit, Ashutosh
2021-04-30 23:23         ` Dixit, Ashutosh
2021-05-01  0:35       ` Jason Ekstrand
2021-05-01  2:19         ` Umesh Nerlige Ramappa
2021-05-01  4:01           ` Dixit, Ashutosh
2021-05-01 15:27         ` Jason Ekstrand
2021-05-03 18:29           ` Umesh Nerlige Ramappa
2021-04-29  1:34 ` [Intel-gfx] ✗ Fi.CI.DOCS: warning for Add support for querying engine cycles Patchwork
2021-04-29  1:59 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-04-29  3:26 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
 -- strict thread matches above, loose matches on Subject: below --
2021-05-04  0:12 [Intel-gfx] [PATCH 0/1] " Umesh Nerlige Ramappa
2021-04-27 21:53 Umesh Nerlige Ramappa
2021-04-27 21:49 Umesh Nerlige Ramappa
2021-04-21 17:28 Umesh Nerlige Ramappa