* [PATCH v2 0/3] lib/intel: switch graphics/IP version queries to fd-based APIs
@ 2026-01-22 7:15 Xin Wang
2026-01-22 7:15 ` [PATCH v2 1/3] lib/intel: suffix PCI ID based gen/graphics_ver with _legacy Xin Wang
` (5 more replies)
0 siblings, 6 replies; 12+ messages in thread
From: Xin Wang @ 2026-01-22 7:15 UTC (permalink / raw)
To: igt-dev; +Cc: Xin Wang
This series separates PCI ID–based device traits from per‑device IP version
queries. It introduces fd‑based intel_gen()/intel_graphics_ver() using Xe
query data when available, keeps PCI ID translation as _legacy, and
updates Xe‑side libs/tests to use the fd‑based APIs. This aligns IGT with
post‑MTL IP disaggregation while preserving i915 safety fallback.
V2:
- Rebased on latest master
- Enhanced patch 1 commit message to clarify it's a preparatory step
Xin Wang (3):
lib/intel: suffix PCI ID based gen/graphics_ver with _legacy
lib/intel: add fd-based intel_gen/intel_graphics_ver via Xe query
intel/xe: use fd-based graphics/IP version helpers
benchmarks/gem_blt.c | 2 +-
benchmarks/gem_busy.c | 2 +-
benchmarks/gem_latency.c | 2 +-
benchmarks/gem_wsim.c | 8 ++--
benchmarks/intel_upload_blit_large.c | 2 +-
benchmarks/intel_upload_blit_large_gtt.c | 2 +-
benchmarks/intel_upload_blit_large_map.c | 2 +-
benchmarks/intel_upload_blit_small.c | 2 +-
lib/gpgpu_shader.c | 5 +--
lib/gpu_cmds.c | 21 +++++-----
lib/i915/gem_engine_topology.c | 6 +--
lib/i915/gem_mman.c | 2 +-
lib/i915/gem_submission.c | 8 ++--
lib/i915/i915_crc.c | 4 +-
lib/i915/intel_decode.c | 4 +-
lib/igt_dummyload.c | 2 +-
lib/igt_gt.c | 4 +-
lib/igt_store.c | 2 +-
lib/instdone.c | 2 +-
lib/intel_batchbuffer.c | 18 ++++----
lib/intel_blt.c | 21 ++++------
lib/intel_blt.h | 2 +-
lib/intel_bufops.c | 10 ++---
lib/intel_chipset.c | 51 +++++++++++++++++++++++
lib/intel_chipset.h | 14 ++++---
lib/intel_common.c | 2 +-
lib/intel_compute.c | 6 +--
lib/intel_device_info.c | 6 +--
lib/intel_mmio.c | 8 ++--
lib/intel_mocs.c | 48 +++++++++++-----------
lib/intel_pat.c | 19 ++++-----
lib/intel_reg_map.c | 2 +-
lib/ioctl_wrappers.c | 2 +-
lib/rendercopy_gen9.c | 22 +++++-----
lib/xe/xe_legacy.c | 2 +-
lib/xe/xe_oa.c | 4 +-
lib/xe/xe_query.c | 25 ++++++++++++
lib/xe/xe_query.h | 1 +
lib/xe/xe_spin.c | 4 +-
lib/xe/xe_sriov_provisioning.c | 4 +-
tests/intel/api_intel_allocator.c | 2 +-
tests/intel/api_intel_bb.c | 10 ++---
tests/intel/gem_bad_reloc.c | 4 +-
tests/intel/gem_blits.c | 2 +-
tests/intel/gem_close_race.c | 2 +-
tests/intel/gem_concurrent_all.c | 2 +-
tests/intel/gem_ctx_create.c | 4 +-
tests/intel/gem_ctx_engines.c | 6 +--
tests/intel/gem_ctx_isolation.c | 14 +++----
tests/intel/gem_ctx_shared.c | 8 ++--
tests/intel/gem_ctx_sseu.c | 2 +-
tests/intel/gem_eio.c | 6 +--
tests/intel/gem_evict_alignment.c | 6 +--
tests/intel/gem_evict_everything.c | 8 ++--
tests/intel/gem_exec_async.c | 2 +-
tests/intel/gem_exec_await.c | 2 +-
tests/intel/gem_exec_balancer.c | 4 +-
tests/intel/gem_exec_big.c | 2 +-
tests/intel/gem_exec_capture.c | 6 +--
tests/intel/gem_exec_fair.c | 20 ++++-----
tests/intel/gem_exec_fence.c | 18 ++++----
tests/intel/gem_exec_flush.c | 4 +-
tests/intel/gem_exec_gttfill.c | 2 +-
tests/intel/gem_exec_latency.c | 6 +--
tests/intel/gem_exec_nop.c | 4 +-
tests/intel/gem_exec_parallel.c | 2 +-
tests/intel/gem_exec_params.c | 8 ++--
tests/intel/gem_exec_reloc.c | 10 ++---
tests/intel/gem_exec_schedule.c | 20 ++++-----
tests/intel/gem_exec_store.c | 6 +--
tests/intel/gem_exec_suspend.c | 2 +-
tests/intel/gem_exec_whisper.c | 6 +--
tests/intel/gem_fenced_exec_thrash.c | 2 +-
tests/intel/gem_gtt_hog.c | 2 +-
tests/intel/gem_linear_blits.c | 8 ++--
tests/intel/gem_media_vme.c | 2 +-
tests/intel/gem_mmap_gtt.c | 12 +++---
tests/intel/gem_read_read_speed.c | 2 +-
tests/intel/gem_render_copy.c | 8 ++--
tests/intel/gem_ringfill.c | 4 +-
tests/intel/gem_set_tiling_vs_blt.c | 2 +-
tests/intel/gem_softpin.c | 6 +--
tests/intel/gem_streaming_writes.c | 4 +-
tests/intel/gem_sync.c | 8 ++--
tests/intel/gem_tiled_fence_blits.c | 4 +-
tests/intel/gem_tiling_max_stride.c | 8 ++--
tests/intel/gem_userptr_blits.c | 20 ++++-----
tests/intel/gem_vm_create.c | 2 +-
tests/intel/gem_watchdog.c | 4 +-
tests/intel/gem_workarounds.c | 2 +-
tests/intel/gen7_exec_parse.c | 2 +-
tests/intel/gen9_exec_parse.c | 2 +-
tests/intel/i915_getparams_basic.c | 6 +--
tests/intel/i915_module_load.c | 2 +-
tests/intel/i915_pm_rc6_residency.c | 6 +--
tests/intel/i915_pm_rpm.c | 2 +-
tests/intel/i915_pm_sseu.c | 2 +-
tests/intel/kms_ccs.c | 13 ++----
tests/intel/kms_fbcon_fbt.c | 2 +-
tests/intel/kms_frontbuffer_tracking.c | 6 +--
tests/intel/kms_pipe_stress.c | 4 +-
tests/intel/perf.c | 52 ++++++++++++------------
tests/intel/perf_pmu.c | 8 ++--
tests/intel/sysfs_preempt_timeout.c | 2 +-
tests/intel/sysfs_timeslice_duration.c | 2 +-
tests/intel/xe_ccs.c | 16 ++++----
tests/intel/xe_compute.c | 8 ++--
tests/intel/xe_copy_basic.c | 6 +--
tests/intel/xe_debugfs.c | 3 +-
tests/intel/xe_eudebug_online.c | 8 +---
tests/intel/xe_exec_multi_queue.c | 2 +-
tests/intel/xe_exec_store.c | 18 ++++----
tests/intel/xe_fault_injection.c | 8 ++--
tests/intel/xe_intel_bb.c | 7 ++--
tests/intel/xe_multigpu_svm.c | 3 +-
tests/intel/xe_oa.c | 22 +++++-----
tests/intel/xe_pat.c | 16 +++-----
tests/intel/xe_query.c | 4 +-
tests/intel/xe_render_copy.c | 2 +-
tests/prime_vgem.c | 2 +-
tools/intel_dp_compliance.c | 2 +-
tools/intel_error_decode.c | 12 +++---
tools/intel_gtt.c | 12 +++---
tools/intel_l3_parity.c | 2 +-
tools/intel_reg.c | 6 +--
tools/intel_reg_decode.c | 4 +-
tools/intel_tiling_detect.c | 2 +-
tools/intel_vbt_decode.c | 2 +-
128 files changed, 486 insertions(+), 445 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 12+ messages in thread* [PATCH v2 1/3] lib/intel: suffix PCI ID based gen/graphics_ver with _legacy 2026-01-22 7:15 [PATCH v2 0/3] lib/intel: switch graphics/IP version queries to fd-based APIs Xin Wang @ 2026-01-22 7:15 ` Xin Wang 2026-02-04 18:30 ` Matt Roper 2026-01-22 7:15 ` [PATCH v2 2/3] lib/intel: add fd-based intel_gen/intel_graphics_ver via Xe query Xin Wang ` (4 subsequent siblings) 5 siblings, 1 reply; 12+ messages in thread From: Xin Wang @ 2026-01-22 7:15 UTC (permalink / raw) To: igt-dev Cc: Xin Wang, Kamil Konieczny, Matt Roper, Zbigniew Kempczyński, Ravi Kumar V Rename the PCI ID translation helpers to intel_gen_legacy() and intel_graphics_ver_legacy() across callers. This is a preparatory step for introducing fd-based APIs. Cc: Kamil Konieczny <kamil.konieczny@linux.intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com> Cc: Ravi Kumar V <ravi.kumar.vodapalli@intel.com> Signed-off-by: Xin Wang <x.wang@intel.com> --- benchmarks/gem_blt.c | 2 +- benchmarks/gem_busy.c | 2 +- benchmarks/gem_latency.c | 2 +- benchmarks/gem_wsim.c | 8 ++-- benchmarks/intel_upload_blit_large.c | 2 +- benchmarks/intel_upload_blit_large_gtt.c | 2 +- benchmarks/intel_upload_blit_large_map.c | 2 +- benchmarks/intel_upload_blit_small.c | 2 +- lib/gpu_cmds.c | 20 ++++----- lib/i915/gem_engine_topology.c | 6 +-- lib/i915/gem_mman.c | 2 +- lib/i915/gem_submission.c | 8 ++-- lib/i915/i915_crc.c | 4 +- lib/i915/intel_decode.c | 4 +- lib/igt_dummyload.c | 2 +- lib/igt_gt.c | 4 +- lib/igt_store.c | 2 +- lib/instdone.c | 2 +- lib/intel_batchbuffer.c | 14 +++---- lib/intel_blt.c | 18 ++++---- lib/intel_blt.h | 2 +- lib/intel_bufops.c | 10 ++--- lib/intel_chipset.h | 12 +++--- lib/intel_common.c | 2 +- lib/intel_compute.c | 6 +-- lib/intel_device_info.c | 6 +-- lib/intel_mmio.c | 8 ++-- lib/intel_mocs.c | 4 +- lib/intel_pat.c | 8 ++-- lib/intel_reg_map.c | 2 +- lib/ioctl_wrappers.c | 2 +- lib/rendercopy_gen9.c | 22 +++++----- lib/xe/xe_legacy.c | 2 +- lib/xe/xe_oa.c | 4 +- lib/xe/xe_spin.c | 2 +- lib/xe/xe_sriov_provisioning.c | 2 +- tests/intel/api_intel_allocator.c | 2 +- tests/intel/api_intel_bb.c | 10 ++--- tests/intel/gem_bad_reloc.c | 4 +- tests/intel/gem_blits.c | 2 +- tests/intel/gem_close_race.c | 2 +- tests/intel/gem_concurrent_all.c | 2 +- tests/intel/gem_ctx_create.c | 4 +- tests/intel/gem_ctx_engines.c | 6 +-- tests/intel/gem_ctx_isolation.c | 14 +++---- tests/intel/gem_ctx_shared.c | 8 ++-- tests/intel/gem_ctx_sseu.c | 2 +- tests/intel/gem_eio.c | 6 +-- tests/intel/gem_evict_alignment.c | 6 +-- tests/intel/gem_evict_everything.c | 8 ++-- tests/intel/gem_exec_async.c | 2 +- tests/intel/gem_exec_await.c | 2 +- tests/intel/gem_exec_balancer.c | 4 +- tests/intel/gem_exec_big.c | 2 +- tests/intel/gem_exec_capture.c | 6 +-- tests/intel/gem_exec_fair.c | 20 ++++----- tests/intel/gem_exec_fence.c | 18 ++++---- tests/intel/gem_exec_flush.c | 4 +- tests/intel/gem_exec_gttfill.c | 2 +- tests/intel/gem_exec_latency.c | 6 +-- tests/intel/gem_exec_nop.c | 4 +- tests/intel/gem_exec_parallel.c | 2 +- tests/intel/gem_exec_params.c | 8 ++-- tests/intel/gem_exec_reloc.c | 10 ++--- tests/intel/gem_exec_schedule.c | 20 ++++----- tests/intel/gem_exec_store.c | 6 +-- tests/intel/gem_exec_suspend.c | 2 +- tests/intel/gem_exec_whisper.c | 6 +-- tests/intel/gem_fenced_exec_thrash.c | 2 +- tests/intel/gem_gtt_hog.c | 2 +- tests/intel/gem_linear_blits.c | 8 ++-- tests/intel/gem_media_vme.c | 2 +- tests/intel/gem_mmap_gtt.c | 12 +++--- tests/intel/gem_read_read_speed.c | 2 +- tests/intel/gem_render_copy.c | 8 ++-- tests/intel/gem_ringfill.c | 4 +- tests/intel/gem_set_tiling_vs_blt.c | 2 +- tests/intel/gem_softpin.c | 6 +-- tests/intel/gem_streaming_writes.c | 4 +- tests/intel/gem_sync.c | 8 ++-- tests/intel/gem_tiled_fence_blits.c | 4 +- tests/intel/gem_tiling_max_stride.c | 8 ++-- tests/intel/gem_userptr_blits.c | 20 ++++----- tests/intel/gem_vm_create.c | 2 +- tests/intel/gem_watchdog.c | 4 +- tests/intel/gem_workarounds.c | 2 +- tests/intel/gen7_exec_parse.c | 2 +- tests/intel/gen9_exec_parse.c | 2 +- tests/intel/i915_getparams_basic.c | 6 +-- tests/intel/i915_module_load.c | 2 +- tests/intel/i915_pm_rc6_residency.c | 6 +-- tests/intel/i915_pm_rpm.c | 2 +- tests/intel/i915_pm_sseu.c | 2 +- tests/intel/kms_ccs.c | 8 ++-- tests/intel/kms_fbcon_fbt.c | 2 +- tests/intel/kms_frontbuffer_tracking.c | 6 +-- tests/intel/kms_pipe_stress.c | 4 +- tests/intel/perf.c | 52 ++++++++++++------------ tests/intel/perf_pmu.c | 8 ++-- tests/intel/sysfs_preempt_timeout.c | 2 +- tests/intel/sysfs_timeslice_duration.c | 2 +- tests/intel/xe_ccs.c | 16 ++++---- tests/intel/xe_compute.c | 8 ++-- tests/intel/xe_debugfs.c | 2 +- tests/intel/xe_eudebug_online.c | 4 +- tests/intel/xe_exec_multi_queue.c | 2 +- tests/intel/xe_exec_store.c | 2 +- tests/intel/xe_fault_injection.c | 2 +- tests/intel/xe_intel_bb.c | 6 +-- tests/intel/xe_multigpu_svm.c | 2 +- tests/intel/xe_oa.c | 22 +++++----- tests/intel/xe_pat.c | 6 +-- tests/intel/xe_query.c | 2 +- tests/intel/xe_render_copy.c | 2 +- tests/prime_vgem.c | 2 +- tools/intel_dp_compliance.c | 2 +- tools/intel_error_decode.c | 12 +++--- tools/intel_gtt.c | 12 +++--- tools/intel_l3_parity.c | 2 +- tools/intel_reg.c | 6 +-- tools/intel_reg_decode.c | 4 +- tools/intel_tiling_detect.c | 2 +- tools/intel_vbt_decode.c | 2 +- 123 files changed, 364 insertions(+), 364 deletions(-) diff --git a/benchmarks/gem_blt.c b/benchmarks/gem_blt.c index bd8264b4e..525de5adf 100644 --- a/benchmarks/gem_blt.c +++ b/benchmarks/gem_blt.c @@ -190,7 +190,7 @@ static int run(int object, int batch, int time, int reps, int ncpus, unsigned fl handle = gem_create(fd, size); buf = gem_mmap__cpu(fd, handle, 0, size, PROT_WRITE); - gen = intel_gen(intel_get_drm_devid(fd)); + gen = intel_gen_legacy(intel_get_drm_devid(fd)); has_64bit_reloc = gen >= 8; src = gem_create(fd, ALIGN(object, 4096)); diff --git a/benchmarks/gem_busy.c b/benchmarks/gem_busy.c index 95d0fb971..aa2c6a38e 100644 --- a/benchmarks/gem_busy.c +++ b/benchmarks/gem_busy.c @@ -155,7 +155,7 @@ static int loop(unsigned ring, int reps, int ncpus, unsigned flags) shared = mmap(0, 4096, PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0); fd = drm_open_driver(DRIVER_INTEL); - gen = intel_gen(intel_get_drm_devid(fd)); + gen = intel_gen_legacy(intel_get_drm_devid(fd)); memset(obj, 0, sizeof(obj)); obj[0].handle = gem_create(fd, 4096); diff --git a/benchmarks/gem_latency.c b/benchmarks/gem_latency.c index b4e2afbf5..9ccbec740 100644 --- a/benchmarks/gem_latency.c +++ b/benchmarks/gem_latency.c @@ -452,7 +452,7 @@ static int run(int seconds, #endif fd = drm_open_driver(DRIVER_INTEL); - gen = intel_gen(intel_get_drm_devid(fd)); + gen = intel_gen_legacy(intel_get_drm_devid(fd)); if (gen < 6) return IGT_EXIT_SKIP; /* Needs BCS timestamp */ diff --git a/benchmarks/gem_wsim.c b/benchmarks/gem_wsim.c index bebb59f28..ef3e8f265 100644 --- a/benchmarks/gem_wsim.c +++ b/benchmarks/gem_wsim.c @@ -358,7 +358,7 @@ static uint64_t ns_to_ctx_ticks(uint64_t ns) if (!f) { f = read_timestamp_frequency(fd); - if (intel_gen(intel_get_drm_devid(fd)) == 11) + if (intel_gen_legacy(intel_get_drm_devid(fd)) == 11) f = 12500000; /* icl!!! are you feeling alright? */ } @@ -936,7 +936,7 @@ parse_duration(unsigned int nr_steps, struct duration *dur, double scale_dur, ch long tmpl; if (field[0] == '*') { - if (intel_gen(intel_get_drm_devid(fd)) < 8) { + if (intel_gen_legacy(intel_get_drm_devid(fd)) < 8) { wsim_err("Infinite batch at step %u needs Gen8+!\n", nr_steps); return -1; } @@ -1536,7 +1536,7 @@ static uint32_t mmio_base(int i915, const intel_engine_t *engine, int gen) static unsigned int create_bb(struct w_step *w, int self) { - const int gen = intel_gen(intel_get_drm_devid(fd)); + const int gen = intel_gen_legacy(intel_get_drm_devid(fd)); const uint32_t base = mmio_base(fd, &w->engine, gen); #define CS_GPR(x) (base + 0x600 + 8 * (x)) #define TIMESTAMP (base + 0x3a8) @@ -2138,7 +2138,7 @@ static int prepare_contexts(unsigned int id, struct workload *wrk) wsim_err("Load balancing needs an engine map!\n"); return 1; } - if (intel_gen(intel_get_drm_devid(fd)) < 11) { + if (intel_gen_legacy(intel_get_drm_devid(fd)) < 11) { wsim_err("Load balancing needs relative mmio support, gen11+!\n"); return 1; } diff --git a/benchmarks/intel_upload_blit_large.c b/benchmarks/intel_upload_blit_large.c index af52d7a4e..6e0e9d964 100644 --- a/benchmarks/intel_upload_blit_large.c +++ b/benchmarks/intel_upload_blit_large.c @@ -82,7 +82,7 @@ do_render(int i915, uint32_t dst_handle) uint32_t data[OBJECT_WIDTH * OBJECT_HEIGHT]; uint64_t size = OBJECT_WIDTH * OBJECT_HEIGHT * 4, bb_size = 4096; uint32_t src_handle, bb_handle, *bb; - uint32_t gen = intel_gen(intel_get_drm_devid(i915)); + uint32_t gen = intel_gen_legacy(intel_get_drm_devid(i915)); const bool has_64b_reloc = gen >= 8; int i; diff --git a/benchmarks/intel_upload_blit_large_gtt.c b/benchmarks/intel_upload_blit_large_gtt.c index 1e991a6b2..5387b983f 100644 --- a/benchmarks/intel_upload_blit_large_gtt.c +++ b/benchmarks/intel_upload_blit_large_gtt.c @@ -78,7 +78,7 @@ do_render(int i915, uint32_t dst_handle) static uint32_t seed = 1; uint64_t size = OBJECT_WIDTH * OBJECT_HEIGHT * 4, bb_size = 4096; uint32_t *data, src_handle, bb_handle, *bb; - uint32_t gen = intel_gen(intel_get_drm_devid(i915)); + uint32_t gen = intel_gen_legacy(intel_get_drm_devid(i915)); const bool has_64b_reloc = gen >= 8; int i; diff --git a/benchmarks/intel_upload_blit_large_map.c b/benchmarks/intel_upload_blit_large_map.c index 6d3cd748c..cbf1b2f79 100644 --- a/benchmarks/intel_upload_blit_large_map.c +++ b/benchmarks/intel_upload_blit_large_map.c @@ -81,7 +81,7 @@ do_render(int i915, uint32_t dst_handle) static uint32_t seed = 1; uint64_t size = OBJECT_WIDTH * OBJECT_HEIGHT * 4, bb_size = 4096; uint32_t *data, src_handle, bb_handle, *bb; - uint32_t gen = intel_gen(intel_get_drm_devid(i915)); + uint32_t gen = intel_gen_legacy(intel_get_drm_devid(i915)); const bool has_64b_reloc = gen >= 8; int i; diff --git a/benchmarks/intel_upload_blit_small.c b/benchmarks/intel_upload_blit_small.c index 525d68e36..b830bacde 100644 --- a/benchmarks/intel_upload_blit_small.c +++ b/benchmarks/intel_upload_blit_small.c @@ -76,7 +76,7 @@ do_render(int i915, uint32_t dst_handle) uint32_t data[OBJECT_WIDTH * OBJECT_HEIGHT]; uint64_t size = OBJECT_WIDTH * OBJECT_HEIGHT * 4, bb_size = 4096; uint32_t src_handle, bb_handle, *bb; - uint32_t gen = intel_gen(intel_get_drm_devid(i915)); + uint32_t gen = intel_gen_legacy(intel_get_drm_devid(i915)); const bool has_64b_reloc = gen >= 8; int i; diff --git a/lib/gpu_cmds.c b/lib/gpu_cmds.c index a6a9247dc..ab46fe0de 100644 --- a/lib/gpu_cmds.c +++ b/lib/gpu_cmds.c @@ -320,7 +320,7 @@ fill_binding_table(struct intel_bb *ibb, struct intel_buf *buf) binding_table = intel_bb_ptr(ibb); intel_bb_ptr_add(ibb, 64); - if (intel_graphics_ver(devid) >= IP_VER(20, 0)) { + if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { /* * Up until now, SURFACEFORMAT_R8_UNROM was used regardless of the 'bpp' value. * For bpp 32 this results in a surface that is 4x narrower than expected. However @@ -342,13 +342,13 @@ fill_binding_table(struct intel_bb *ibb, struct intel_buf *buf) igt_assert_f(false, "Surface state for bpp = %u not implemented", buf->bpp); - } else if (intel_graphics_ver(devid) >= IP_VER(12, 50)) { + } else if (intel_graphics_ver_legacy(devid) >= IP_VER(12, 50)) { binding_table[0] = xehp_fill_surface_state(ibb, buf, SURFACEFORMAT_R8_UNORM, 1); - } else if (intel_graphics_ver(devid) >= IP_VER(9, 0)) { + } else if (intel_graphics_ver_legacy(devid) >= IP_VER(9, 0)) { binding_table[0] = gen9_fill_surface_state(ibb, buf, SURFACEFORMAT_R8_UNORM, 1); - } else if (intel_graphics_ver(devid) >= IP_VER(8, 0)) { + } else if (intel_graphics_ver_legacy(devid) >= IP_VER(8, 0)) { binding_table[0] = gen8_fill_surface_state(ibb, buf, SURFACEFORMAT_R8_UNORM, 1); } else { @@ -867,7 +867,7 @@ gen_emit_media_object(struct intel_bb *ibb, /* inline data (xoffset, yoffset) */ intel_bb_out(ibb, xoffset); intel_bb_out(ibb, yoffset); - if (intel_gen(ibb->devid) >= 8 && !IS_CHERRYVIEW(ibb->devid)) + if (intel_gen_legacy(ibb->devid) >= 8 && !IS_CHERRYVIEW(ibb->devid)) gen8_emit_media_state_flush(ibb); } @@ -1011,7 +1011,7 @@ void xehp_emit_state_compute_mode(struct intel_bb *ibb, bool vrt) { - uint32_t dword_length = intel_graphics_ver(ibb->devid) >= IP_VER(20, 0); + uint32_t dword_length = intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0); intel_bb_out(ibb, XEHP_STATE_COMPUTE_MODE | dword_length); intel_bb_out(ibb, vrt ? (0x10001) << 10 : 0); /* Enable variable number of threads */ @@ -1042,7 +1042,7 @@ xehp_emit_state_base_address(struct intel_bb *ibb) intel_bb_out(ibb, 0); /* stateless data port */ - tmp = intel_graphics_ver(ibb->devid) >= IP_VER(20, 0) ? 0 : BASE_ADDRESS_MODIFY; + tmp = intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0) ? 0 : BASE_ADDRESS_MODIFY; intel_bb_out(ibb, 0 | tmp); //dw3 /* surface */ @@ -1068,7 +1068,7 @@ xehp_emit_state_base_address(struct intel_bb *ibb) /* dynamic state buffer size */ intel_bb_out(ibb, ALIGN(ibb->size, 1 << 12) | 1); //dw13 /* indirect object buffer size */ - if (intel_graphics_ver(ibb->devid) >= IP_VER(20, 0)) //dw14 + if (intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0)) //dw14 intel_bb_out(ibb, 0); else intel_bb_out(ibb, 0xfffff000 | 1); @@ -1115,7 +1115,7 @@ xehp_emit_compute_walk(struct intel_bb *ibb, else mask = (1 << mask) - 1; - dword_length = intel_graphics_ver(ibb->devid) >= IP_VER(20, 0) ? 0x26 : 0x25; + dword_length = intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0) ? 0x26 : 0x25; intel_bb_out(ibb, XEHP_COMPUTE_WALKER | dword_length); intel_bb_out(ibb, 0); /* debug object */ //dw1 @@ -1155,7 +1155,7 @@ xehp_emit_compute_walk(struct intel_bb *ibb, intel_bb_out(ibb, 0); //dw16 intel_bb_out(ibb, 0); //dw17 - if (intel_graphics_ver(ibb->devid) >= IP_VER(20, 0)) //Xe2:dw18 + if (intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0)) //Xe2:dw18 intel_bb_out(ibb, 0); /* Interface descriptor data */ for (int i = 0; i < 8; i++) { //dw18-25 (Xe2:dw19-26) diff --git a/lib/i915/gem_engine_topology.c b/lib/i915/gem_engine_topology.c index c25106034..14e502ea5 100644 --- a/lib/i915/gem_engine_topology.c +++ b/lib/i915/gem_engine_topology.c @@ -375,7 +375,7 @@ static int gem_engine_to_gt_map(int i915, const struct i915_engine_class_instanc uint32_t devid = intel_get_drm_devid(i915); /* Only MTL multi-gt supported at present */ - igt_require(intel_graphics_ver(devid) <= IP_VER(12, 70)); + igt_require(intel_graphics_ver_legacy(devid) <= IP_VER(12, 70)); return IS_METEORLAKE(devid) ? mtl_engine_to_gt_map(engine) : 0; } @@ -644,7 +644,7 @@ bool gem_engine_can_block_copy(int i915, const struct intel_execution_engine2 *e return false; if (!gem_engine_has_known_capability(i915, engine->name, "block_copy")) - return intel_gen(intel_get_drm_devid(i915)) >= 12; + return intel_gen_legacy(intel_get_drm_devid(i915)) >= 12; return gem_engine_has_capability(i915, engine->name, "block_copy"); } @@ -655,7 +655,7 @@ uint32_t gem_engine_mmio_base(int i915, const char *engine) if (gem_engine_property_scanf(i915, engine, "mmio_base", "%x", &mmio) < 0) { - int gen = intel_gen(intel_get_drm_devid(i915)); + int gen = intel_gen_legacy(intel_get_drm_devid(i915)); /* The layout of xcs1+ is unreliable -- hence the property! */ if (!strcmp(engine, "rcs0")) { diff --git a/lib/i915/gem_mman.c b/lib/i915/gem_mman.c index cd0c65e21..3a063ab02 100644 --- a/lib/i915/gem_mman.c +++ b/lib/i915/gem_mman.c @@ -738,7 +738,7 @@ uint64_t gem_mappable_aperture_size(int fd) struct pci_device *pci_dev = igt_device_get_pci_device(fd); int bar; - if (intel_gen(pci_dev->device_id) < 3) + if (intel_gen_legacy(pci_dev->device_id) < 3) bar = 0; else bar = 2; diff --git a/lib/i915/gem_submission.c b/lib/i915/gem_submission.c index 7d1c3970f..7ac6f8a68 100644 --- a/lib/i915/gem_submission.c +++ b/lib/i915/gem_submission.c @@ -62,7 +62,7 @@ */ unsigned gem_submission_method(int fd) { - const int gen = intel_gen(intel_get_drm_devid(fd)); + const int gen = intel_gen_legacy(intel_get_drm_devid(fd)); unsigned method = GEM_SUBMISSION_RINGBUF; int dir; uint32_t value = 0; @@ -210,7 +210,7 @@ int gem_cmdparser_version(int i915) bool gem_engine_has_cmdparser(int i915, const intel_ctx_cfg_t *cfg, unsigned int engine) { - const int gen = intel_gen(intel_get_drm_devid(i915)); + const int gen = intel_gen_legacy(intel_get_drm_devid(i915)); const int parser_version = gem_cmdparser_version(i915); const int class = intel_ctx_cfg_engine_class(cfg, engine); @@ -232,7 +232,7 @@ bool gem_has_blitter(int i915) unsigned int blt; blt = 0; - if (intel_gen(intel_get_drm_devid(i915)) >= 6) + if (intel_gen_legacy(intel_get_drm_devid(i915)) >= 6) blt = I915_EXEC_BLT; return gem_has_ring(i915, blt); @@ -245,7 +245,7 @@ void gem_require_blitter(int i915) static bool gem_engine_has_immutable_submission(int i915, int class) { - const int gen = intel_gen(intel_get_drm_devid(i915)); + const int gen = intel_gen_legacy(intel_get_drm_devid(i915)); int parser_version; parser_version = gem_cmdparser_version(i915); diff --git a/lib/i915/i915_crc.c b/lib/i915/i915_crc.c index 9564b7327..d08644101 100644 --- a/lib/i915/i915_crc.c +++ b/lib/i915/i915_crc.c @@ -135,7 +135,7 @@ static void fill_batch(int i915, uint32_t bb_handle, uint64_t bb_offset, uint64_t table_offset, uint64_t data_offset, uint32_t data_size) { uint32_t *bb, *batch, *jmp; - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); const int use_64b = gen >= 8; uint64_t offset; uint64_t crc = USERDATA(table_offset, 0); @@ -294,5 +294,5 @@ bool supports_i915_crc32(int i915) { uint16_t devid = intel_get_drm_devid(i915); - return intel_graphics_ver(devid) > IP_VER(12, 50); + return intel_graphics_ver_legacy(devid) > IP_VER(12, 50); } diff --git a/lib/i915/intel_decode.c b/lib/i915/intel_decode.c index b78993c47..14cb5909f 100644 --- a/lib/i915/intel_decode.c +++ b/lib/i915/intel_decode.c @@ -3825,7 +3825,7 @@ intel_decode_context_alloc(uint32_t devid) struct intel_decode *ctx; int gen = 0; - gen = intel_gen(devid); + gen = intel_gen_legacy(devid); ctx = calloc(1, sizeof(struct intel_decode)); if (!ctx) @@ -3944,7 +3944,7 @@ intel_decode(struct intel_decode *ctx) index += decode_2d(ctx); break; case 0x3: - if (intel_gen(devid) >= 4) { + if (intel_gen_legacy(devid) >= 4) { index += decode_3d_965(ctx); } else if (IS_GEN3(devid)) { diff --git a/lib/igt_dummyload.c b/lib/igt_dummyload.c index cc0b4ac3b..c7704392e 100644 --- a/lib/igt_dummyload.c +++ b/lib/igt_dummyload.c @@ -95,7 +95,7 @@ emit_recursive_batch(igt_spin_t *spin, #define SCRATCH 0 #define BATCH IGT_SPIN_BATCH const unsigned int devid = intel_get_drm_devid(fd); - const unsigned int gen = intel_gen(devid); + const unsigned int gen = intel_gen_legacy(devid); struct drm_i915_gem_relocation_entry relocs[3], *r; struct drm_i915_gem_execbuffer2 *execbuf; struct drm_i915_gem_exec_object2 *obj; diff --git a/lib/igt_gt.c b/lib/igt_gt.c index d8cccb800..5f566a1f2 100644 --- a/lib/igt_gt.c +++ b/lib/igt_gt.c @@ -68,7 +68,7 @@ static bool has_gpu_reset(int fd) /* Very old kernels did not support the query */ if (reset_query_once == -1) reset_query_once = - (intel_gen(intel_get_drm_devid(fd)) >= 5) ? 1 : 0; + (intel_gen_legacy(intel_get_drm_devid(fd)) >= 5) ? 1 : 0; } return reset_query_once > 0; @@ -468,7 +468,7 @@ void igt_fork_hang_helper(void) fd = drm_open_driver(DRIVER_INTEL); - gen = intel_gen(intel_get_drm_devid(fd)); + gen = intel_gen_legacy(intel_get_drm_devid(fd)); igt_skip_on(gen < 5); igt_fork_helper(&hang_helper) diff --git a/lib/igt_store.c b/lib/igt_store.c index 42ffdc5cd..99eb596ad 100644 --- a/lib/igt_store.c +++ b/lib/igt_store.c @@ -31,7 +31,7 @@ void igt_store_word(int fd, uint64_t ahnd, const intel_ctx_t *ctx, { const int SCRATCH = 0; const int BATCH = 1; - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_exec_object2 obj[2]; struct drm_i915_gem_relocation_entry reloc; struct drm_i915_gem_execbuffer2 execbuf; diff --git a/lib/instdone.c b/lib/instdone.c index 0cdddca8e..23b5ad54b 100644 --- a/lib/instdone.c +++ b/lib/instdone.c @@ -489,7 +489,7 @@ init_gen12_instdone(uint32_t devid) bool init_instdone_definitions(uint32_t devid) { - if (intel_graphics_ver(devid) >= IP_VER(12, 50)) { + if (intel_graphics_ver_legacy(devid) >= IP_VER(12, 50)) { init_xehp_instdone(); } else if (IS_GEN12(devid)) { init_gen12_instdone(devid); diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c index 3b25a385b..f418e7981 100644 --- a/lib/intel_batchbuffer.c +++ b/lib/intel_batchbuffer.c @@ -333,7 +333,7 @@ void igt_blitter_copy(int fd, devid = intel_get_drm_devid(fd); - if (intel_graphics_ver(devid) >= IP_VER(12, 60)) + if (intel_graphics_ver_legacy(devid) >= IP_VER(12, 60)) igt_blitter_fast_copy__raw(fd, ahnd, ctx, NULL, src_handle, src_delta, src_stride, src_tiling, @@ -410,7 +410,7 @@ void igt_blitter_src_copy(int fd, uint32_t batch_handle; uint32_t src_pitch, dst_pitch; uint32_t dst_reloc_offset, src_reloc_offset; - uint32_t gen = intel_gen(intel_get_drm_devid(fd)); + uint32_t gen = intel_gen_legacy(intel_get_drm_devid(fd)); uint64_t batch_offset, src_offset, dst_offset; const bool has_64b_reloc = gen >= 8; int i = 0; @@ -669,7 +669,7 @@ igt_render_copyfunc_t igt_get_render_copyfunc(int fd) copy = mtl_render_copyfunc; else if (IS_DG2(devid)) copy = gen12p71_render_copyfunc; - else if (intel_gen(devid) >= 20) + else if (intel_gen_legacy(devid) >= 20) copy = xe2_render_copyfunc; else if (IS_GEN12(devid)) copy = gen12_render_copyfunc; @@ -729,7 +729,7 @@ igt_fillfunc_t igt_get_media_fillfunc(int devid) { igt_fillfunc_t fill = NULL; - if (intel_graphics_ver(devid) >= IP_VER(12, 50)) { + if (intel_graphics_ver_legacy(devid) >= IP_VER(12, 50)) { /* current implementation defeatured PIPELINE_MEDIA */ } else if (IS_GEN12(devid)) fill = gen12_media_fillfunc; @@ -767,7 +767,7 @@ igt_fillfunc_t igt_get_gpgpu_fillfunc(int devid) { igt_fillfunc_t fill = NULL; - if (intel_graphics_ver(devid) >= IP_VER(12, 50)) + if (intel_graphics_ver_legacy(devid) >= IP_VER(12, 50)) fill = xehp_gpgpu_fillfunc; else if (IS_GEN12(devid)) fill = gen12_gpgpu_fillfunc; @@ -911,7 +911,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg, igt_assert(ibb); ibb->devid = intel_get_drm_devid(fd); - ibb->gen = intel_gen(ibb->devid); + ibb->gen = intel_gen_legacy(ibb->devid); ibb->ctx = ctx; ibb->fd = fd; @@ -1089,7 +1089,7 @@ struct intel_bb *intel_bb_create_with_allocator(int fd, uint32_t ctx, uint32_t v static bool aux_needs_softpin(int fd) { - return intel_gen(intel_get_drm_devid(fd)) >= 12; + return intel_gen_legacy(intel_get_drm_devid(fd)) >= 12; } static bool has_ctx_cfg(struct intel_bb *ibb) diff --git a/lib/intel_blt.c b/lib/intel_blt.c index 2b59cc7e9..673f204b0 100644 --- a/lib/intel_blt.c +++ b/lib/intel_blt.c @@ -997,7 +997,7 @@ uint64_t emit_blt_block_copy(int fd, uint64_t bb_pos, bool emit_bbe) { - unsigned int ip_ver = intel_graphics_ver(intel_get_drm_devid(fd)); + unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); struct gen12_block_copy_data data = {}; struct gen12_block_copy_data_ext dext = {}; uint64_t dst_offset, src_offset, bb_offset; @@ -1285,7 +1285,7 @@ uint64_t emit_blt_ctrl_surf_copy(int fd, uint64_t bb_pos, bool emit_bbe) { - unsigned int ip_ver = intel_graphics_ver(intel_get_drm_devid(fd)); + unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); union ctrl_surf_copy_data data = { }; size_t data_sz; uint64_t dst_offset, src_offset, bb_offset, alignment; @@ -1705,7 +1705,7 @@ uint64_t emit_blt_fast_copy(int fd, uint64_t bb_pos, bool emit_bbe) { - unsigned int ip_ver = intel_graphics_ver(intel_get_drm_devid(fd)); + unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); struct gen12_fast_copy_data data = {}; uint64_t dst_offset, src_offset, bb_offset; uint32_t bbe = MI_BATCH_BUFFER_END; @@ -1976,7 +1976,7 @@ static void dump_bb_mem_copy_cmd(int fd, struct xe_mem_copy_data *data) igt_info("BB details:\n"); - if (intel_graphics_ver(devid) >= IP_VER(20, 0)) { + if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { igt_info(" dw00: [%08x] <client: 0x%x, opcode: 0x%x, length: %d> " "[copy type: %d, mode: %d]\n", cmd[0], data->dw00.xe2.client, data->dw00.xe2.opcode, @@ -2006,7 +2006,7 @@ static void dump_bb_mem_copy_cmd(int fd, struct xe_mem_copy_data *data) cmd[7], data->dw07.dst_address_lo); igt_info(" dw08: [%08x] dst offset hi (0x%x)\n", cmd[8], data->dw08.dst_address_hi); - if (intel_graphics_ver(devid) >= IP_VER(20, 0)) { + if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { igt_info(" dw09: [%08x] mocs <dst: 0x%x, src: 0x%x>\n", cmd[9], data->dw09.xe2.dst_mocs, data->dw09.xe2.src_mocs); @@ -2049,7 +2049,7 @@ static uint64_t emit_blt_mem_copy(int fd, uint64_t ahnd, width = mem->src.width; height = mem->dst.height; - if (intel_graphics_ver(devid) >= IP_VER(20, 0)) { + if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { data.dw00.xe2.client = 0x2; data.dw00.xe2.opcode = 0x5a; data.dw00.xe2.length = 8; @@ -2246,7 +2246,7 @@ static void emit_blt_mem_set(int fd, uint64_t ahnd, batch[b++] = mem->dst.pitch - 1; batch[b++] = dst_offset; batch[b++] = dst_offset << 32; - if (intel_graphics_ver(devid) >= IP_VER(20, 0)) + if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) batch[b++] = value | (mem->dst.mocs_index << 3); else batch[b++] = value | mem->dst.mocs_index; @@ -2364,7 +2364,7 @@ blt_create_object(const struct blt_copy_data *blt, uint32_t region, if (create_mapping && region != system_memory(blt->fd)) flags |= DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM; - if (intel_gen(intel_get_drm_devid(blt->fd)) >= 20 && compression) { + if (intel_gen_legacy(intel_get_drm_devid(blt->fd)) >= 20 && compression) { pat_index = intel_get_pat_idx_uc_comp(blt->fd); cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; } @@ -2590,7 +2590,7 @@ void blt_surface_get_flatccs_data(int fd, cpu_caching = __xe_default_cpu_caching(fd, sysmem, 0); ccs_bo_size = ALIGN(ccssize, xe_get_default_alignment(fd)); - if (intel_gen(intel_get_drm_devid(fd)) >= 20 && obj->compression) { + if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 20 && obj->compression) { comp_pat_index = intel_get_pat_idx_uc_comp(fd); cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; } diff --git a/lib/intel_blt.h b/lib/intel_blt.h index 78037fd35..a98a34e95 100644 --- a/lib/intel_blt.h +++ b/lib/intel_blt.h @@ -52,7 +52,7 @@ #include "igt.h" #include "intel_cmds_info.h" -#define CCS_RATIO(fd) (intel_gen(intel_get_drm_devid(fd)) >= 20 ? 512 : 256) +#define CCS_RATIO(fd) (intel_gen_legacy(intel_get_drm_devid(fd)) >= 20 ? 512 : 256) #define GEN12_MEM_COPY_MOCS_SHIFT 25 #define XE2_MEM_COPY_SRC_MOCS_SHIFT 28 #define XE2_MEM_COPY_DST_MOCS_SHIFT 3 diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c index 1196069a5..ea3742f1e 100644 --- a/lib/intel_bufops.c +++ b/lib/intel_bufops.c @@ -1063,7 +1063,7 @@ static void __intel_buf_init(struct buf_ops *bops, } else { uint16_t cpu_caching = __xe_default_cpu_caching(bops->fd, region, 0); - if (intel_gen(bops->devid) >= 20 && compression) + if (intel_gen_legacy(bops->devid) >= 20 && compression) cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; bo_size = ALIGN(bo_size, xe_get_default_alignment(bops->fd)); @@ -1106,7 +1106,7 @@ void intel_buf_init(struct buf_ops *bops, uint64_t region; uint8_t pat_index = DEFAULT_PAT_INDEX; - if (compression && intel_gen(bops->devid) >= 20) + if (compression && intel_gen_legacy(bops->devid) >= 20) pat_index = intel_get_pat_idx_uc_comp(bops->fd); region = bops->driver == INTEL_DRIVER_I915 ? I915_SYSTEM_MEMORY : @@ -1132,7 +1132,7 @@ void intel_buf_init_in_region(struct buf_ops *bops, { uint8_t pat_index = DEFAULT_PAT_INDEX; - if (compression && intel_gen(bops->devid) >= 20) + if (compression && intel_gen_legacy(bops->devid) >= 20) pat_index = intel_get_pat_idx_uc_comp(bops->fd); __intel_buf_init(bops, 0, buf, width, height, bpp, alignment, @@ -1203,7 +1203,7 @@ void intel_buf_init_using_handle_and_size(struct buf_ops *bops, igt_assert(handle); igt_assert(size); - if (compression && intel_gen(bops->devid) >= 20) + if (compression && intel_gen_legacy(bops->devid) >= 20) pat_index = intel_get_pat_idx_uc_comp(bops->fd); __intel_buf_init(bops, handle, buf, width, height, bpp, alignment, @@ -1758,7 +1758,7 @@ static struct buf_ops *__buf_ops_create(int fd, bool check_idempotency) igt_assert(bops); devid = intel_get_drm_devid(fd); - generation = intel_gen(devid); + generation = intel_gen_legacy(devid); /* Predefined settings: see intel_device_info? */ for (int i = 0; i < ARRAY_SIZE(buf_ops_arr); i++) { diff --git a/lib/intel_chipset.h b/lib/intel_chipset.h index cc2225110..fb360268d 100644 --- a/lib/intel_chipset.h +++ b/lib/intel_chipset.h @@ -103,8 +103,8 @@ struct intel_device_info { const struct intel_device_info *intel_get_device_info(uint16_t devid) __attribute__((pure)); const struct intel_cmds_info *intel_get_cmds_info(uint16_t devid) __attribute__((pure)); -unsigned intel_gen(uint16_t devid) __attribute__((pure)); -unsigned intel_graphics_ver(uint16_t devid) __attribute__((pure)); +unsigned intel_gen_legacy(uint16_t devid) __attribute__((pure)); +unsigned intel_graphics_ver_legacy(uint16_t devid) __attribute__((pure)); unsigned intel_display_ver(uint16_t devid) __attribute__((pure)); extern enum pch_type intel_pch; @@ -230,12 +230,12 @@ void intel_check_pch(void); #define IS_GEN12(devid) IS_GEN(devid, 12) #define IS_MOBILE(devid) (intel_get_device_info(devid)->is_mobile) -#define IS_965(devid) (intel_gen(devid) >= 4) +#define IS_965(devid) (intel_gen_legacy(devid) >= 4) -#define HAS_BSD_RING(devid) (intel_gen(devid) >= 5) -#define HAS_BLT_RING(devid) (intel_gen(devid) >= 6) +#define HAS_BSD_RING(devid) (intel_gen_legacy(devid) >= 5) +#define HAS_BLT_RING(devid) (intel_gen_legacy(devid) >= 6) -#define HAS_PCH_SPLIT(devid) (intel_gen(devid) >= 5 && \ +#define HAS_PCH_SPLIT(devid) (intel_gen_legacy(devid) >= 5 && \ !(IS_VALLEYVIEW(devid) || \ IS_CHERRYVIEW(devid) || \ IS_BROXTON(devid))) diff --git a/lib/intel_common.c b/lib/intel_common.c index 8b8f4652a..cd1019bfe 100644 --- a/lib/intel_common.c +++ b/lib/intel_common.c @@ -91,7 +91,7 @@ bool is_intel_region_compressible(int fd, uint64_t region) return true; /* Integrated Xe2+ supports compression on system memory */ - if (intel_gen(devid) >= 20 && !is_dgfx && is_intel_system_region(fd, region)) + if (intel_gen_legacy(devid) >= 20 && !is_dgfx && is_intel_system_region(fd, region)) return true; /* Discrete supports compression on vram */ diff --git a/lib/intel_compute.c b/lib/intel_compute.c index 00ef280e8..1734c1649 100644 --- a/lib/intel_compute.c +++ b/lib/intel_compute.c @@ -2284,7 +2284,7 @@ static bool __run_intel_compute_kernel(int fd, struct user_execenv *user, enum execenv_alloc_prefs alloc_prefs) { - unsigned int ip_ver = intel_graphics_ver(intel_get_drm_devid(fd)); + unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); int batch; const struct intel_compute_kernels *kernel_entries = intel_compute_square_kernels, *kernels; enum intel_driver driver = get_intel_driver(fd); @@ -2749,7 +2749,7 @@ static bool __run_intel_compute_kernel_preempt(int fd, bool threadgroup_preemption, enum execenv_alloc_prefs alloc_prefs) { - unsigned int ip_ver = intel_graphics_ver(intel_get_drm_devid(fd)); + unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); int batch; const struct intel_compute_kernels *kernel_entries = intel_compute_square_kernels, *kernels; enum intel_driver driver = get_intel_driver(fd); @@ -2803,7 +2803,7 @@ static bool __run_intel_compute_kernel_preempt(int fd, */ bool xe_kernel_preempt_check(int fd, enum xe_compute_preempt_type required_preempt) { - unsigned int ip_ver = intel_graphics_ver(intel_get_drm_devid(fd)); + unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); int batch = find_preempt_batch(ip_ver); if (batch < 0) { diff --git a/lib/intel_device_info.c b/lib/intel_device_info.c index 89fa6788f..2657dbdbb 100644 --- a/lib/intel_device_info.c +++ b/lib/intel_device_info.c @@ -739,7 +739,7 @@ const struct intel_cmds_info *intel_get_cmds_info(uint16_t devid) } /** - * intel_gen: + * intel_gen_legacy: * @devid: pci device id * * Computes the Intel GFX generation for the given device id. @@ -747,12 +747,12 @@ const struct intel_cmds_info *intel_get_cmds_info(uint16_t devid) * Returns: * The GFX generation on successful lookup, -1u on failure. */ -unsigned intel_gen(uint16_t devid) +unsigned intel_gen_legacy(uint16_t devid) { return intel_get_device_info(devid)->graphics_ver ?: -1u; } -unsigned intel_graphics_ver(uint16_t devid) +unsigned intel_graphics_ver_legacy(uint16_t devid) { const struct intel_device_info *info = intel_get_device_info(devid); diff --git a/lib/intel_mmio.c b/lib/intel_mmio.c index 267d07b39..9fadb2897 100644 --- a/lib/intel_mmio.c +++ b/lib/intel_mmio.c @@ -152,7 +152,7 @@ intel_mmio_use_pci_bar(struct intel_mmio_data *mmio_data, struct pci_device *pci else mmio_bar = 0; - gen = intel_gen(devid); + gen = intel_gen_legacy(devid); if (gen >= 12) mmio_size = pci_dev->regions[mmio_bar].size; else if (gen >= 5) @@ -228,7 +228,7 @@ intel_register_access_init(struct intel_mmio_data *mmio_data, struct pci_device igt_assert(mmio_data->igt_mmio != NULL); mmio_data->safe = (safe != 0 && - intel_gen(pci_dev->device_id) >= 4) ? true : false; + intel_gen_legacy(pci_dev->device_id) >= 4) ? true : false; mmio_data->pci_device_id = pci_dev->device_id; if (mmio_data->safe) mmio_data->map = intel_get_register_map(mmio_data->pci_device_id); @@ -304,7 +304,7 @@ intel_register_read(struct intel_mmio_data *mmio_data, uint32_t reg) struct intel_register_range *range; uint32_t ret; - if (intel_gen(mmio_data->pci_device_id) >= 6) + if (intel_gen_legacy(mmio_data->pci_device_id) >= 6) igt_assert(mmio_data->key != -1); if (!mmio_data->safe) @@ -343,7 +343,7 @@ intel_register_write(struct intel_mmio_data *mmio_data, uint32_t reg, uint32_t v { struct intel_register_range *range; - if (intel_gen(mmio_data->pci_device_id) >= 6) + if (intel_gen_legacy(mmio_data->pci_device_id) >= 6) igt_assert(mmio_data->key != -1); if (!mmio_data->safe) diff --git a/lib/intel_mocs.c b/lib/intel_mocs.c index 778fd848e..f21c2bf09 100644 --- a/lib/intel_mocs.c +++ b/lib/intel_mocs.c @@ -28,7 +28,7 @@ struct drm_intel_mocs_index { static void get_mocs_index(int fd, struct drm_intel_mocs_index *mocs) { uint16_t devid = intel_get_drm_devid(fd); - unsigned int ip_ver = intel_graphics_ver(devid); + unsigned int ip_ver = intel_graphics_ver_legacy(devid); /* * Gen >= 12 onwards don't have a setting for PTE, @@ -126,7 +126,7 @@ uint8_t intel_get_defer_to_pat_mocs_index(int fd) struct drm_intel_mocs_index mocs; uint16_t dev_id = intel_get_drm_devid(fd); - igt_assert(intel_gen(dev_id) >= 20); + igt_assert(intel_gen_legacy(dev_id) >= 20); get_mocs_index(fd, &mocs); diff --git a/lib/intel_pat.c b/lib/intel_pat.c index 9815efc18..9a61c2a45 100644 --- a/lib/intel_pat.c +++ b/lib/intel_pat.c @@ -98,7 +98,7 @@ static void intel_get_pat_idx(int fd, struct intel_pat_cache *pat) { uint16_t dev_id = intel_get_drm_devid(fd); - if (intel_graphics_ver(dev_id) == IP_VER(35, 11)) { + if (intel_graphics_ver_legacy(dev_id) == IP_VER(35, 11)) { pat->uc = 3; pat->wb = 2; pat->max_index = 31; @@ -111,7 +111,7 @@ static void intel_get_pat_idx(int fd, struct intel_pat_cache *pat) pat->max_index = 31; /* Wa_16023588340: CLOS3 entries at end of table are unusable */ - if (intel_graphics_ver(dev_id) == IP_VER(20, 1)) + if (intel_graphics_ver_legacy(dev_id) == IP_VER(20, 1)) pat->max_index -= 4; } else if (IS_METEORLAKE(dev_id)) { pat->uc = 2; @@ -123,7 +123,7 @@ static void intel_get_pat_idx(int fd, struct intel_pat_cache *pat) pat->wt = 2; pat->wb = 3; pat->max_index = 7; - } else if (intel_graphics_ver(dev_id) <= IP_VER(12, 60)) { + } else if (intel_graphics_ver_legacy(dev_id) <= IP_VER(12, 60)) { pat->uc = 3; pat->wt = 2; pat->wb = 0; @@ -154,7 +154,7 @@ uint8_t intel_get_pat_idx_uc_comp(int fd) struct intel_pat_cache pat = {}; uint16_t dev_id = intel_get_drm_devid(fd); - igt_assert(intel_gen(dev_id) >= 20); + igt_assert(intel_gen_legacy(dev_id) >= 20); intel_get_pat_idx(fd, &pat); return pat.uc_comp; diff --git a/lib/intel_reg_map.c b/lib/intel_reg_map.c index 0e2ee06c8..12ff3f6c4 100644 --- a/lib/intel_reg_map.c +++ b/lib/intel_reg_map.c @@ -131,7 +131,7 @@ struct intel_register_map intel_get_register_map(uint32_t devid) { struct intel_register_map map; - const int gen = intel_gen(devid); + const int gen = intel_gen_legacy(devid); if (gen >= 6) { map.map = gen6_gt_register_map; diff --git a/lib/ioctl_wrappers.c b/lib/ioctl_wrappers.c index ef7221470..382d6334d 100644 --- a/lib/ioctl_wrappers.c +++ b/lib/ioctl_wrappers.c @@ -1072,7 +1072,7 @@ void gem_require_ring(int fd, unsigned ring) */ bool gem_has_mocs_registers(int fd) { - return intel_gen(intel_get_drm_devid(fd)) >= 9; + return intel_gen_legacy(intel_get_drm_devid(fd)) >= 9; } /** diff --git a/lib/rendercopy_gen9.c b/lib/rendercopy_gen9.c index e6e5b8214..66415212c 100644 --- a/lib/rendercopy_gen9.c +++ b/lib/rendercopy_gen9.c @@ -256,12 +256,12 @@ gen9_bind_buf(struct intel_bb *ibb, const struct intel_buf *buf, int is_dst, if (buf->compression == I915_COMPRESSION_MEDIA) ss->ss7.tgl.media_compression = 1; else if (buf->compression == I915_COMPRESSION_RENDER) { - if (intel_gen(ibb->devid) >= 20) + if (intel_gen_legacy(ibb->devid) >= 20) ss->ss6.aux_mode = 0x0; /* AUX_NONE, unified compression */ else ss->ss6.aux_mode = 0x5; /* AUX_CCS_E */ - if (intel_gen(ibb->devid) < 12 && buf->ccs[0].stride) { + if (intel_gen_legacy(ibb->devid) < 12 && buf->ccs[0].stride) { ss->ss6.aux_pitch = (buf->ccs[0].stride / 128) - 1; address = intel_bb_offset_reloc_with_delta(ibb, buf->handle, @@ -303,7 +303,7 @@ gen9_bind_buf(struct intel_bb *ibb, const struct intel_buf *buf, int is_dst, ss->ss7.dg2.disable_support_for_multi_gpu_partial_writes = 1; ss->ss7.dg2.disable_support_for_multi_gpu_atomics = 1; - if (intel_gen(ibb->devid) >= 20) + if (intel_gen_legacy(ibb->devid) >= 20) ss->ss12.lnl.compression_format = lnl_compression_format(buf); else ss->ss12.dg2.compression_format = dg2_compression_format(buf); @@ -681,7 +681,7 @@ gen9_emit_state_base_address(struct intel_bb *ibb) { /* WaBindlessSurfaceStateModifyEnable:skl,bxt */ /* The length has to be one less if we dont modify bindless state */ - if (intel_gen(intel_get_drm_devid(ibb->fd)) >= 20) + if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) intel_bb_out(ibb, GEN4_STATE_BASE_ADDRESS | 20); else intel_bb_out(ibb, GEN4_STATE_BASE_ADDRESS | (19 - 1 - 2)); @@ -726,7 +726,7 @@ gen9_emit_state_base_address(struct intel_bb *ibb) { intel_bb_out(ibb, 0); intel_bb_out(ibb, 0); - if (intel_gen(intel_get_drm_devid(ibb->fd)) >= 20) { + if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) { /* Bindless sampler */ intel_bb_out(ibb, 0); intel_bb_out(ibb, 0); @@ -899,7 +899,7 @@ gen9_emit_ds(struct intel_bb *ibb) { static void gen8_emit_wm_hz_op(struct intel_bb *ibb) { - if (intel_gen(intel_get_drm_devid(ibb->fd)) >= 20) { + if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) { intel_bb_out(ibb, GEN8_3DSTATE_WM_HZ_OP | (6-2)); intel_bb_out(ibb, 0); } else { @@ -989,7 +989,7 @@ gen8_emit_ps(struct intel_bb *ibb, uint32_t kernel, bool fast_clear) { intel_bb_out(ibb, 0); intel_bb_out(ibb, GEN7_3DSTATE_PS | (12-2)); - if (intel_gen(intel_get_drm_devid(ibb->fd)) >= 20) + if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) intel_bb_out(ibb, kernel | 1); else intel_bb_out(ibb, kernel); @@ -1006,7 +1006,7 @@ gen8_emit_ps(struct intel_bb *ibb, uint32_t kernel, bool fast_clear) { intel_bb_out(ibb, (max_threads - 1) << GEN8_3DSTATE_PS_MAX_THREADS_SHIFT | GEN6_3DSTATE_WM_16_DISPATCH_ENABLE | (fast_clear ? GEN8_3DSTATE_FAST_CLEAR_ENABLE : 0)); - if (intel_gen(intel_get_drm_devid(ibb->fd)) >= 20) + if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) intel_bb_out(ibb, 6 << GEN6_3DSTATE_WM_DISPATCH_START_GRF_0_SHIFT | GENXE_KERNEL0_POLY_PACK16_FIXED << GENXE_KERNEL0_PACKING_POLICY); else @@ -1061,7 +1061,7 @@ gen9_emit_depth(struct intel_bb *ibb) static void gen7_emit_clear(struct intel_bb *ibb) { - if (intel_gen(intel_get_drm_devid(ibb->fd)) >= 20) + if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) return; intel_bb_out(ibb, GEN7_3DSTATE_CLEAR_PARAMS | (3-2)); @@ -1072,7 +1072,7 @@ gen7_emit_clear(struct intel_bb *ibb) { static void gen6_emit_drawing_rectangle(struct intel_bb *ibb, const struct intel_buf *dst) { - if (intel_gen(intel_get_drm_devid(ibb->fd)) >= 20) + if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) intel_bb_out(ibb, GENXE2_3DSTATE_DRAWING_RECTANGLE_FAST | (4 - 2)); else intel_bb_out(ibb, GEN4_3DSTATE_DRAWING_RECTANGLE | (4 - 2)); @@ -1266,7 +1266,7 @@ void _gen9_render_op(struct intel_bb *ibb, gen9_emit_state_base_address(ibb); - if (HAS_4TILE(ibb->devid) || intel_gen(ibb->devid) > 12) { + if (HAS_4TILE(ibb->devid) || intel_gen_legacy(ibb->devid) > 12) { intel_bb_out(ibb, GEN4_3DSTATE_BINDING_TABLE_POOL_ALLOC | 2); intel_bb_emit_reloc(ibb, ibb->handle, I915_GEM_DOMAIN_RENDER | I915_GEM_DOMAIN_INSTRUCTION, 0, diff --git a/lib/xe/xe_legacy.c b/lib/xe/xe_legacy.c index 084445305..1529ed1cc 100644 --- a/lib/xe/xe_legacy.c +++ b/lib/xe/xe_legacy.c @@ -75,7 +75,7 @@ xe_legacy_test_mode(int fd, struct drm_xe_engine_class_instance *eci, igt_assert_lte(n_exec_queues, MAX_N_EXECQUEUES); if (flags & COMPRESSION) - igt_require(intel_gen(intel_get_drm_devid(fd)) >= 20); + igt_require(intel_gen_legacy(intel_get_drm_devid(fd)) >= 20); if (flags & CLOSE_FD) fd = drm_open_driver(DRIVER_XE); diff --git a/lib/xe/xe_oa.c b/lib/xe/xe_oa.c index 229deafa7..1d1ad9d7e 100644 --- a/lib/xe/xe_oa.c +++ b/lib/xe/xe_oa.c @@ -303,7 +303,7 @@ intel_xe_perf_for_devinfo(uint32_t device_id, intel_xe_perf_load_metrics_bmg(perf); } else if (devinfo->is_pantherlake) { intel_xe_perf_load_metrics_ptl(perf); - } else if (intel_graphics_ver(device_id) >= IP_VER(20, 0)) { + } else if (intel_graphics_ver_legacy(device_id) >= IP_VER(20, 0)) { intel_xe_perf_load_metrics_lnl(perf); } else { return unsupported_xe_oa_platform(perf); @@ -455,7 +455,7 @@ xe_fill_topology_info(int drm_fd, uint32_t device_id, uint32_t *topology_size) u8 *ptr; /* Only ADL-P, DG2 and newer ip support hwconfig, use hardcoded values for previous */ - if (intel_graphics_ver(device_id) >= IP_VER(12, 55) || devinfo->is_alderlake_p) { + if (intel_graphics_ver_legacy(device_id) >= IP_VER(12, 55) || devinfo->is_alderlake_p) { query_hwconfig(drm_fd, &topinfo); } else { topinfo.max_slices = 1; diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c index 4dc110c22..36260e3e5 100644 --- a/lib/xe/xe_spin.c +++ b/lib/xe/xe_spin.c @@ -167,7 +167,7 @@ void xe_spin_init(struct xe_spin *spin, struct xe_spin_opts *opts) spin->batch[b++] = opts->mem_copy->dst_offset << 32; devid = intel_get_drm_devid(opts->mem_copy->fd); - if (intel_graphics_ver(devid) >= IP_VER(20, 0)) + if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) spin->batch[b++] = opts->mem_copy->src->mocs_index << XE2_MEM_COPY_SRC_MOCS_SHIFT | opts->mem_copy->dst->mocs_index << XE2_MEM_COPY_DST_MOCS_SHIFT; else diff --git a/lib/xe/xe_sriov_provisioning.c b/lib/xe/xe_sriov_provisioning.c index 116cd3255..7b60ccd6c 100644 --- a/lib/xe/xe_sriov_provisioning.c +++ b/lib/xe/xe_sriov_provisioning.c @@ -52,7 +52,7 @@ static uint64_t get_vfid_mask(int fd) { uint16_t dev_id = intel_get_drm_devid(fd); - return (intel_graphics_ver(dev_id) >= IP_VER(12, 50)) ? + return (intel_graphics_ver_legacy(dev_id) >= IP_VER(12, 50)) ? GGTT_PTE_VFID_MASK : PRE_1250_IP_VER_GGTT_PTE_VFID_MASK; } diff --git a/tests/intel/api_intel_allocator.c b/tests/intel/api_intel_allocator.c index 464576d1b..869e5e9a0 100644 --- a/tests/intel/api_intel_allocator.c +++ b/tests/intel/api_intel_allocator.c @@ -625,7 +625,7 @@ static void execbuf_with_allocator(int fd) uint64_t ahnd, sz = 4096, gtt_size; unsigned int flags = EXEC_OBJECT_PINNED; uint32_t *ptr, batch[32], copied; - int gen = intel_gen(intel_get_drm_devid(fd)); + int gen = intel_gen_legacy(intel_get_drm_devid(fd)); int i; const uint32_t magic = 0x900df00d; diff --git a/tests/intel/api_intel_bb.c b/tests/intel/api_intel_bb.c index 67e923cef..98cc3d665 100644 --- a/tests/intel/api_intel_bb.c +++ b/tests/intel/api_intel_bb.c @@ -1052,7 +1052,7 @@ static void do_intel_bb_blit(struct buf_ops *bops, int loops, uint32_t tiling) gem_require_blitter(i915); /* We'll fix it for gen2/3 later. */ - igt_require(intel_gen(intel_get_drm_devid(i915)) > 3); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) > 3); for (i = 0; i < loops; i++) { fails += __do_intel_bb_blit(bops, tiling); @@ -1316,10 +1316,10 @@ static int render(struct buf_ops *bops, uint32_t tiling, bool do_reloc, uint32_t devid = intel_get_drm_devid(i915); igt_render_copyfunc_t render_copy = NULL; - igt_debug("%s() gen: %d\n", __func__, intel_gen(devid)); + igt_debug("%s() gen: %d\n", __func__, intel_gen_legacy(devid)); /* Don't use relocations on gen12+ */ - igt_require((do_reloc && intel_gen(devid) < 12) || + igt_require((do_reloc && intel_gen_legacy(devid) < 12) || !do_reloc); if (do_reloc) @@ -1597,7 +1597,7 @@ int igt_main_args("dpibc:", NULL, help_str, opt_handler, NULL) igt_fixture() { i915 = drm_open_driver(DRIVER_INTEL); bops = buf_ops_create(i915); - gen = intel_gen(intel_get_drm_devid(i915)); + gen = intel_gen_legacy(intel_get_drm_devid(i915)); } igt_describe("Ensure reset is possible on fresh bb"); @@ -1659,7 +1659,7 @@ int igt_main_args("dpibc:", NULL, help_str, opt_handler, NULL) do_intel_bb_blit(bops, 10, I915_TILING_X); igt_subtest("intel-bb-blit-y") { - igt_require(intel_gen(intel_get_drm_devid(i915)) >= 6); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) >= 6); do_intel_bb_blit(bops, 10, I915_TILING_Y); } diff --git a/tests/intel/gem_bad_reloc.c b/tests/intel/gem_bad_reloc.c index 8a5f4eae4..5be7a776d 100644 --- a/tests/intel/gem_bad_reloc.c +++ b/tests/intel/gem_bad_reloc.c @@ -84,7 +84,7 @@ static void negative_reloc(int fd, unsigned flags) uint64_t *offsets; int i; - igt_require(intel_gen(intel_get_drm_devid(fd)) >= 7); + igt_require(intel_gen_legacy(intel_get_drm_devid(fd)) >= 7); memset(&obj, 0, sizeof(obj)); obj.handle = gem_create(fd, 8192); @@ -135,7 +135,7 @@ static void negative_reloc(int fd, unsigned flags) static void negative_reloc_blt(int fd) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_execbuffer2 execbuf; struct drm_i915_gem_exec_object2 obj[1024][2]; struct drm_i915_gem_relocation_entry reloc; diff --git a/tests/intel/gem_blits.c b/tests/intel/gem_blits.c index 3f7fb1564..dae8e6ee4 100644 --- a/tests/intel/gem_blits.c +++ b/tests/intel/gem_blits.c @@ -830,7 +830,7 @@ int igt_main() gem_require_blitter(device.fd); device.pciid = intel_get_drm_devid(device.fd); - device.gen = intel_gen(device.pciid); + device.gen = intel_gen_legacy(device.pciid); device.llc = gem_has_llc(device.fd); device.ahnd = get_reloc_ahnd(device.fd, 0); } diff --git a/tests/intel/gem_close_race.c b/tests/intel/gem_close_race.c index b2688774d..449d038cb 100644 --- a/tests/intel/gem_close_race.c +++ b/tests/intel/gem_close_race.c @@ -347,7 +347,7 @@ int igt_main() igt_require_gem(fd); devid = intel_get_drm_devid(fd); - has_64bit_relocations = intel_gen(devid) >= 8; + has_64bit_relocations = intel_gen_legacy(devid) >= 8; has_softpin = !gem_has_relocations(fd); exec_addr = gem_detect_safe_start_offset(fd); data_addr = gem_detect_safe_alignment(fd); diff --git a/tests/intel/gem_concurrent_all.c b/tests/intel/gem_concurrent_all.c index 641888331..398a67438 100644 --- a/tests/intel/gem_concurrent_all.c +++ b/tests/intel/gem_concurrent_all.c @@ -1904,7 +1904,7 @@ int igt_main() igt_require_gem(fd); intel_detect_and_clear_missed_interrupts(fd); devid = intel_get_drm_devid(fd); - gen = intel_gen(devid); + gen = intel_gen_legacy(devid); rendercopy = igt_get_render_copyfunc(fd); vgem_drv = __drm_open_driver(DRIVER_VGEM); diff --git a/tests/intel/gem_ctx_create.c b/tests/intel/gem_ctx_create.c index be7d46571..2a9b4851a 100644 --- a/tests/intel/gem_ctx_create.c +++ b/tests/intel/gem_ctx_create.c @@ -309,7 +309,7 @@ static void xchg_ptr(void *array, unsigned i, unsigned j) static unsigned __context_size(int fd) { - switch (intel_gen(intel_get_drm_devid(fd))) { + switch (intel_gen_legacy(intel_get_drm_devid(fd))) { case 0: case 1: case 2: @@ -478,7 +478,7 @@ static void basic_ext_param(int i915) static void check_single_timeline(int i915, uint32_t ctx, int num_engines) { #define RCS_TIMESTAMP (0x2000 + 0x358) - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); const int has_64bit_reloc = gen >= 8; struct drm_i915_gem_exec_object2 results = { .handle = gem_create(i915, 4096) }; const uint32_t bbe = MI_BATCH_BUFFER_END; diff --git a/tests/intel/gem_ctx_engines.c b/tests/intel/gem_ctx_engines.c index de1935ec5..135865b03 100644 --- a/tests/intel/gem_ctx_engines.c +++ b/tests/intel/gem_ctx_engines.c @@ -474,7 +474,7 @@ static void independent(int i915, const intel_ctx_t *base_ctx, const struct intel_execution_engine2 *e) { #define RCS_TIMESTAMP (mmio_base + 0x358) - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); unsigned int mmio_base = gem_engine_mmio_base(i915, e->name); const int has_64bit_reloc = gen >= 8; I915_DEFINE_CONTEXT_PARAM_ENGINES(engines, I915_EXEC_RING_MASK + 1); @@ -571,7 +571,7 @@ static void independent(int i915, const intel_ctx_t *base_ctx, static void independent_all(int i915, const intel_ctx_t *ctx) { - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); const struct intel_execution_engine2 *e; igt_spin_t *spin = NULL; uint64_t ahnd = get_reloc_ahnd(i915, ctx->id); @@ -643,7 +643,7 @@ int igt_main() const intel_ctx_t *ctx; igt_require(gem_scheduler_enabled(i915)); - igt_require(intel_gen(intel_get_drm_devid(i915)) >= 6); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) >= 6); ctx = intel_ctx_create_all_physical(i915); for_each_ctx_engine(i915, ctx, e) { diff --git a/tests/intel/gem_ctx_isolation.c b/tests/intel/gem_ctx_isolation.c index e1585cbc6..590fe7140 100644 --- a/tests/intel/gem_ctx_isolation.c +++ b/tests/intel/gem_ctx_isolation.c @@ -273,7 +273,7 @@ static void tmpl_regs(int fd, uint32_t handle, uint32_t value) { - const unsigned int gen_bit = 1 << intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen_bit = 1 << intel_gen_legacy(intel_get_drm_devid(fd)); const unsigned int engine_bit = ENGINE(e->class, e->instance); const uint32_t mmio_base = gem_engine_mmio_base(fd, e->name); unsigned int regs_size; @@ -318,7 +318,7 @@ static uint32_t read_regs(int fd, const struct intel_execution_engine2 *e, unsigned int flags) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); const unsigned int gen_bit = 1 << gen; const unsigned int engine_bit = ENGINE(e->class, e->instance); const uint32_t mmio_base = gem_engine_mmio_base(fd, e->name); @@ -408,7 +408,7 @@ static void write_regs(int fd, uint64_t ahnd, unsigned int flags, uint32_t value) { - const unsigned int gen_bit = 1 << intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen_bit = 1 << intel_gen_legacy(intel_get_drm_devid(fd)); const unsigned int engine_bit = ENGINE(e->class, e->instance); const uint32_t mmio_base = gem_engine_mmio_base(fd, e->name); struct drm_i915_gem_exec_object2 obj; @@ -475,7 +475,7 @@ static void restore_regs(int fd, unsigned int flags, uint32_t regs) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); const unsigned int gen_bit = 1 << gen; const unsigned int engine_bit = ENGINE(e->class, e->instance); const uint32_t mmio_base = gem_engine_mmio_base(fd, e->name); @@ -561,7 +561,7 @@ static void dump_regs(int fd, const struct intel_execution_engine2 *e, unsigned int regs) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); const unsigned int gen_bit = 1 << gen; const unsigned int engine_bit = ENGINE(e->class, e->instance); const uint32_t mmio_base = gem_engine_mmio_base(fd, e->name); @@ -674,7 +674,7 @@ static void nonpriv(int fd, const intel_ctx_cfg_t *cfg, unsigned int num_values = ARRAY_SIZE(values); /* Sigh -- hsw: we need cmdparser access to our own registers! */ - igt_skip_on(intel_gen(intel_get_drm_devid(fd)) < 8); + igt_skip_on(intel_gen_legacy(intel_get_drm_devid(fd)) < 8); gem_quiescent_gpu(fd); @@ -1022,7 +1022,7 @@ int igt_main() has_context_isolation = __has_context_isolation(i915); igt_require(has_context_isolation); - gen = intel_gen(intel_get_drm_devid(i915)); + gen = intel_gen_legacy(intel_get_drm_devid(i915)); igt_warn_on_f(gen > LAST_KNOWN_GEN, "GEN not recognized! Test needs to be updated to run.\n"); diff --git a/tests/intel/gem_ctx_shared.c b/tests/intel/gem_ctx_shared.c index fc15ecd1f..51e03b833 100644 --- a/tests/intel/gem_ctx_shared.c +++ b/tests/intel/gem_ctx_shared.c @@ -292,7 +292,7 @@ static void exhaust_shared_gtt(int i915, unsigned int flags) static void exec_shared_gtt(int i915, const intel_ctx_cfg_t *cfg, unsigned int ring) { - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); const uint32_t bbe = MI_BATCH_BUFFER_END; struct drm_i915_gem_exec_object2 obj = {}; struct drm_i915_gem_execbuffer2 execbuf = { @@ -556,7 +556,7 @@ static void store_dword(int i915, uint64_t ahnd, const intel_ctx_t *ctx, uint32_t cork, uint64_t cork_size, unsigned write_domain) { - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); struct drm_i915_gem_exec_object2 obj[3]; struct drm_i915_gem_relocation_entry reloc; struct drm_i915_gem_execbuffer2 execbuf; @@ -683,7 +683,7 @@ static uint32_t store_timestamp(int i915, int fence, int offset) { - const bool r64b = intel_gen(intel_get_drm_devid(i915)) >= 8; + const bool r64b = intel_gen_legacy(intel_get_drm_devid(i915)) >= 8; uint32_t handle = gem_create(i915, 4096); struct drm_i915_gem_exec_object2 obj = { .handle = handle, @@ -714,7 +714,7 @@ static uint32_t store_timestamp(int i915, MI_BATCH_BUFFER_END }; - igt_require(intel_gen(intel_get_drm_devid(i915)) >= 7); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) >= 7); gem_write(i915, handle, 0, batch, sizeof(batch)); obj.relocs_ptr = to_user_pointer(&reloc); diff --git a/tests/intel/gem_ctx_sseu.c b/tests/intel/gem_ctx_sseu.c index 20ab14784..f7f48e135 100644 --- a/tests/intel/gem_ctx_sseu.c +++ b/tests/intel/gem_ctx_sseu.c @@ -523,7 +523,7 @@ int igt_main() igt_require_gem(fd); __intel_devid__ = intel_get_drm_devid(fd); - __intel_gen__ = intel_gen(__intel_devid__); + __intel_gen__ = intel_gen_legacy(__intel_devid__); igt_require(kernel_has_per_context_sseu_support(fd)); } diff --git a/tests/intel/gem_eio.c b/tests/intel/gem_eio.c index 2191274ae..48a228772 100644 --- a/tests/intel/gem_eio.c +++ b/tests/intel/gem_eio.c @@ -300,10 +300,10 @@ static igt_spin_t *__spin_poll(int fd, uint64_t ahnd, const intel_ctx_t *ctx, }; if (!gem_engine_has_cmdparser(fd, &ctx->cfg, opts.engine) && - intel_gen(intel_get_drm_devid(fd)) != 6) + intel_gen_legacy(intel_get_drm_devid(fd)) != 6) opts.flags |= IGT_SPIN_INVALID_CS; - if (intel_gen(intel_get_drm_devid(fd)) > 7) + if (intel_gen_legacy(intel_get_drm_devid(fd)) > 7) opts.flags |= IGT_SPIN_FAST; if (gem_can_store_dword(fd, opts.engine)) @@ -420,7 +420,7 @@ static void check_wait_elapsed(const char *prefix, int fd, igt_stats_t *st) * modeset back on) around resets, so may take a lot longer. */ limit = 250e6; - if (intel_gen(intel_get_drm_devid(fd)) < 5 || intel_gen(intel_get_drm_devid(fd)) > 11) + if (intel_gen_legacy(intel_get_drm_devid(fd)) < 5 || intel_gen_legacy(intel_get_drm_devid(fd)) > 11) limit += 300e6; /* guestimate for 2x worstcase modeset */ med = igt_stats_get_median(st); diff --git a/tests/intel/gem_evict_alignment.c b/tests/intel/gem_evict_alignment.c index 0c1f4ac52..4c9e61583 100644 --- a/tests/intel/gem_evict_alignment.c +++ b/tests/intel/gem_evict_alignment.c @@ -88,7 +88,7 @@ copy(int fd, uint32_t dst, uint32_t src, uint32_t *all_bo, batch[i++] = (XY_SRC_COPY_BLT_CMD | XY_SRC_COPY_BLT_WRITE_ALPHA | XY_SRC_COPY_BLT_WRITE_RGB | 6); - if (intel_gen(intel_get_drm_devid(fd)) >= 8) + if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 8) batch[i - 1] += 2; batch[i++] = (3 << 24) | /* 32 bits */ (0xcc << 16) | /* copy ROP */ @@ -96,12 +96,12 @@ copy(int fd, uint32_t dst, uint32_t src, uint32_t *all_bo, batch[i++] = 0; /* dst x1,y1 */ batch[i++] = (HEIGHT << 16) | WIDTH; /* dst x2,y2 */ batch[i++] = 0; /* dst reloc */ - if (intel_gen(intel_get_drm_devid(fd)) >= 8) + if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 8) batch[i++] = 0; /* FIXME */ batch[i++] = 0; /* src x1,y1 */ batch[i++] = WIDTH*4; batch[i++] = 0; /* src reloc */ - if (intel_gen(intel_get_drm_devid(fd)) >= 8) + if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 8) batch[i++] = 0; /* FIXME */ batch[i++] = MI_BATCH_BUFFER_END; batch[i++] = MI_NOOP; diff --git a/tests/intel/gem_evict_everything.c b/tests/intel/gem_evict_everything.c index 28ac67513..439eebcef 100644 --- a/tests/intel/gem_evict_everything.c +++ b/tests/intel/gem_evict_everything.c @@ -132,7 +132,7 @@ copy(int fd, uint32_t dst, uint32_t src, uint32_t *all_bo, int n_bo) batch[i++] = (XY_SRC_COPY_BLT_CMD | XY_SRC_COPY_BLT_WRITE_ALPHA | XY_SRC_COPY_BLT_WRITE_RGB | 6); - if (intel_gen(intel_get_drm_devid(fd)) >= 8) + if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 8) batch[i - 1] += 2; batch[i++] = (3 << 24) | /* 32 bits */ (0xcc << 16) | /* copy ROP */ @@ -140,12 +140,12 @@ copy(int fd, uint32_t dst, uint32_t src, uint32_t *all_bo, int n_bo) batch[i++] = 0; /* dst x1,y1 */ batch[i++] = (HEIGHT << 16) | WIDTH; /* dst x2,y2 */ batch[i++] = 0; /* dst reloc */ - if (intel_gen(intel_get_drm_devid(fd)) >= 8) + if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 8) batch[i++] = 0; /* FIXME */ batch[i++] = 0; /* src x1,y1 */ batch[i++] = WIDTH*4; batch[i++] = 0; /* src reloc */ - if (intel_gen(intel_get_drm_devid(fd)) >= 8) + if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 8) batch[i++] = 0; /* FIXME */ batch[i++] = MI_BATCH_BUFFER_END; batch[i++] = MI_NOOP; @@ -163,7 +163,7 @@ copy(int fd, uint32_t dst, uint32_t src, uint32_t *all_bo, int n_bo) reloc[1].target_handle = src; reloc[1].delta = 0; reloc[1].offset = 7 * sizeof(batch[0]); - if (intel_gen(intel_get_drm_devid(fd)) >= 8) + if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 8) reloc[1].offset += sizeof(batch[0]); reloc[1].presumed_offset = 0; reloc[1].read_domains = I915_GEM_DOMAIN_RENDER; diff --git a/tests/intel/gem_exec_async.c b/tests/intel/gem_exec_async.c index 9af06bb41..2a554ecd3 100644 --- a/tests/intel/gem_exec_async.c +++ b/tests/intel/gem_exec_async.c @@ -45,7 +45,7 @@ static void store_dword(int fd, int id, const intel_ctx_t *ctx, unsigned ring, uint32_t target, uint64_t target_offset, uint32_t offset, uint32_t value) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_exec_object2 obj[2]; struct drm_i915_gem_relocation_entry reloc; struct drm_i915_gem_execbuffer2 execbuf; diff --git a/tests/intel/gem_exec_await.c b/tests/intel/gem_exec_await.c index 6a71893d1..e112d11e6 100644 --- a/tests/intel/gem_exec_await.c +++ b/tests/intel/gem_exec_await.c @@ -83,7 +83,7 @@ static void wide(int fd, intel_ctx_cfg_t *cfg, int ring_size, { const struct intel_execution_engine2 *engine; const uint32_t bbe = MI_BATCH_BUFFER_END; - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct { struct drm_i915_gem_exec_object2 *obj; struct drm_i915_gem_exec_object2 exec[2]; diff --git a/tests/intel/gem_exec_balancer.c b/tests/intel/gem_exec_balancer.c index 19c612bb3..9f93954e3 100644 --- a/tests/intel/gem_exec_balancer.c +++ b/tests/intel/gem_exec_balancer.c @@ -2631,7 +2631,7 @@ static int read_ctx_timestamp_frequency(int i915) .value = &value, .param = I915_PARAM_CS_TIMESTAMP_FREQUENCY, }; - if (intel_gen(intel_get_drm_devid(i915)) != 11) + if (intel_gen_legacy(intel_get_drm_devid(i915)) != 11) ioctl(i915, DRM_IOCTL_I915_GETPARAM, &gp); return value; } @@ -2719,7 +2719,7 @@ static void __fairslice(int i915, static void fairslice(int i915) { /* Relative CS mmio */ - igt_require(intel_gen(intel_get_drm_devid(i915)) >= 11); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) >= 11); for (int class = 0; class < 32; class++) { struct i915_engine_class_instance *ci; diff --git a/tests/intel/gem_exec_big.c b/tests/intel/gem_exec_big.c index 5430af47e..e10e84679 100644 --- a/tests/intel/gem_exec_big.c +++ b/tests/intel/gem_exec_big.c @@ -326,7 +326,7 @@ int igt_main() i915 = drm_open_driver(DRIVER_INTEL); igt_require_gem(i915); - use_64bit_relocs = intel_gen(intel_get_drm_devid(i915)) >= 8; + use_64bit_relocs = intel_gen_legacy(intel_get_drm_devid(i915)) >= 8; has_relocs = gem_has_relocations(i915); } diff --git a/tests/intel/gem_exec_capture.c b/tests/intel/gem_exec_capture.c index 15058e28d..3b1c7d172 100644 --- a/tests/intel/gem_exec_capture.c +++ b/tests/intel/gem_exec_capture.c @@ -302,7 +302,7 @@ static void __capture1(int fd, int dir, uint64_t ahnd, const intel_ctx_t *ctx, const struct intel_execution_engine2 *e, uint32_t target, uint64_t target_size, uint32_t region) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_exec_object2 obj[4]; #define SCRATCH 0 #define CAPTURE 1 @@ -470,7 +470,7 @@ __captureN(int fd, int dir, uint64_t ahnd, const intel_ctx_t *ctx, #define INCREMENTAL 0x1 #define ASYNC 0x2 { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_exec_object2 *obj; struct drm_i915_gem_relocation_entry reloc[2]; struct drm_i915_gem_execbuffer2 execbuf; @@ -658,7 +658,7 @@ static bool needs_recoverable_ctx(int fd) return false; devid = intel_get_drm_devid(fd); - return gem_has_lmem(fd) || intel_graphics_ver(devid) > IP_VER(12, 0); + return gem_has_lmem(fd) || intel_graphics_ver_legacy(devid) > IP_VER(12, 0); } #define find_first_available_engine(fd, ctx, e, saved) \ diff --git a/tests/intel/gem_exec_fair.c b/tests/intel/gem_exec_fair.c index ac23714b7..c7d3fc827 100644 --- a/tests/intel/gem_exec_fair.c +++ b/tests/intel/gem_exec_fair.c @@ -143,7 +143,7 @@ static bool has_mi_math(int i915, const struct intel_execution_engine2 *e) { uint32_t devid = intel_get_drm_devid(i915); - if (intel_gen(devid) >= 8) + if (intel_gen_legacy(devid) >= 8) return true; if (!IS_HASWELL(devid)) @@ -195,7 +195,7 @@ static uint64_t div64_u64_round_up(uint64_t x, uint64_t y) static uint64_t ns_to_ctx_ticks(int i915, uint64_t ns) { int f = read_timestamp_frequency(i915); - if (intel_gen(intel_get_drm_devid(i915)) == 11) + if (intel_gen_legacy(intel_get_drm_devid(i915)) == 11) f = 12500000; /* gen11!!! are you feeling alright? CTX vs CS */ return div64_u64_round_up(ns * f, NSEC64); } @@ -212,7 +212,7 @@ static void delay(int i915, uint64_t addr, uint64_t ns) { - const int use_64b = intel_gen(intel_get_drm_devid(i915)) >= 8; + const int use_64b = intel_gen_legacy(intel_get_drm_devid(i915)) >= 8; const uint32_t base = gem_engine_mmio_base(i915, e->name); const uint32_t runtime = base + (use_64b ? 0x3a8 : 0x358); #define CS_GPR(x) (base + 0x600 + 8 * (x)) @@ -317,7 +317,7 @@ static void tslog(int i915, uint32_t handle, uint64_t addr) { - const int use_64b = intel_gen(intel_get_drm_devid(i915)) >= 8; + const int use_64b = intel_gen_legacy(intel_get_drm_devid(i915)) >= 8; const uint32_t base = gem_engine_mmio_base(i915, e->name); #define CS_GPR(x) (base + 0x600 + 8 * (x)) #define CS_TIMESTAMP (base + 0x358) @@ -441,7 +441,7 @@ read_ctx_timestamp(int i915, const intel_ctx_t *ctx, .rsvd1 = ctx->id, .flags = e->flags, }; - const int use_64b = intel_gen(intel_get_drm_devid(i915)) >= 8; + const int use_64b = intel_gen_legacy(intel_get_drm_devid(i915)) >= 8; const uint32_t base = gem_engine_mmio_base(i915, e->name); const uint32_t runtime = base + (use_64b ? 0x3a8 : 0x358); uint32_t *map, *cs; @@ -489,7 +489,7 @@ read_ctx_timestamp(int i915, const intel_ctx_t *ctx, static bool has_ctx_timestamp(int i915, const intel_ctx_cfg_t *cfg, const struct intel_execution_engine2 *e) { - const int gen = intel_gen(intel_get_drm_devid(i915)); + const int gen = intel_gen_legacy(intel_get_drm_devid(i915)); const intel_ctx_t *tmp_ctx; uint32_t timestamp; @@ -587,7 +587,7 @@ static void fair_child(int i915, const intel_ctx_t *ctx, igt_assert_eq(p_fence, -1); aux_flags = 0; - if (intel_gen(intel_get_drm_devid(i915)) < 8) + if (intel_gen_legacy(intel_get_drm_devid(i915)) < 8) aux_flags = I915_EXEC_SECURE; ping.flags |= aux_flags; aux_flags |= e->flags; @@ -734,7 +734,7 @@ static void fairness(int i915, const intel_ctx_cfg_t *cfg, igt_require(has_ctx_timestamp(i915, cfg, e)); igt_require(gem_class_has_mutable_submission(i915, e->class)); if (flags & (F_ISOLATE | F_PING)) - igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) >= 8); igt_assert(pipe(lnk.child) == 0); igt_assert(pipe(lnk.parent) == 0); @@ -1018,7 +1018,7 @@ static void deadline_child(int i915, unsigned int seq = 1; int prev = -1, next = -1; - if (intel_gen(intel_get_drm_devid(i915)) < 8) + if (intel_gen_legacy(intel_get_drm_devid(i915)) < 8) execbuf.flags |= I915_EXEC_SECURE; gem_execbuf_wr(i915, &execbuf); @@ -1154,7 +1154,7 @@ static void deadline(int i915, const intel_ctx_cfg_t *cfg, obj[0] = delay_create(i915, delay_ctx, &pe, parent_ns); if (flags & DL_PRIO) gem_context_set_priority(i915, delay_ctx->id, 1023); - if (intel_gen(intel_get_drm_devid(i915)) < 8) + if (intel_gen_legacy(intel_get_drm_devid(i915)) < 8) execbuf.flags |= I915_EXEC_SECURE; for (int n = 1; n <= 5; n++) { int timeline = sw_sync_timeline_create(); diff --git a/tests/intel/gem_exec_fence.c b/tests/intel/gem_exec_fence.c index bc2755031..c67f49e69 100644 --- a/tests/intel/gem_exec_fence.c +++ b/tests/intel/gem_exec_fence.c @@ -293,7 +293,7 @@ static void test_fence_busy(int fd, const intel_ctx_t *ctx, static void test_fence_busy_all(int fd, const intel_ctx_t *ctx, unsigned flags) { const struct intel_execution_engine2 *e; - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_exec_object2 obj; struct drm_i915_gem_relocation_entry reloc; struct drm_i915_gem_execbuffer2 execbuf; @@ -674,7 +674,7 @@ static void test_submitN(int i915, const intel_ctx_t *ctx, igt_require(gem_scheduler_has_semaphores(i915)); igt_require(gem_scheduler_has_preemption(i915)); - igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) >= 8); for (int i = 0; i < count; i++) { const intel_ctx_t *tmp_ctx = intel_ctx_create(i915, &ctx->cfg); @@ -721,7 +721,7 @@ static void test_parallel(int i915, const intel_ctx_t *ctx, const struct intel_execution_engine2 *e) { const struct intel_execution_engine2 *e2; - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); uint32_t scratch = gem_create(i915, 4096); uint32_t *out = gem_mmap__device_coherent(i915, scratch, 0, 4096, PROT_READ); uint32_t handle[I915_EXEC_RING_MASK]; @@ -844,7 +844,7 @@ static void test_parallel(int i915, const intel_ctx_t *ctx, static void test_concurrent(int i915, const intel_ctx_t *ctx, const struct intel_execution_engine2 *e) { - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); uint64_t ahnd = get_reloc_ahnd(i915, ctx->id); struct drm_i915_gem_relocation_entry reloc = { .target_handle = gem_create(i915, 4096), @@ -2607,7 +2607,7 @@ static bool use_set_predicate_result(int i915) { uint16_t devid = intel_get_drm_devid(i915); - return intel_graphics_ver(devid) >= IP_VER(12, 50); + return intel_graphics_ver_legacy(devid) >= IP_VER(12, 50); } static struct drm_i915_gem_exec_object2 @@ -3289,7 +3289,7 @@ int igt_main() igt_subtest_with_dynamic("submit") { igt_require(gem_scheduler_has_semaphores(i915)); igt_require(gem_scheduler_has_preemption(i915)); - igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) >= 8); for_each_ctx_engine(i915, ctx, e) { igt_dynamic_f("%s", e->name) @@ -3302,7 +3302,7 @@ int igt_main() igt_subtest_with_dynamic("submit3") { igt_require(gem_scheduler_has_semaphores(i915)); igt_require(gem_scheduler_has_preemption(i915)); - igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) >= 8); for_each_ctx_engine(i915, ctx, e) { igt_dynamic_f("%s", e->name) @@ -3315,7 +3315,7 @@ int igt_main() igt_subtest_with_dynamic("submit67") { igt_require(gem_scheduler_has_semaphores(i915)); igt_require(gem_scheduler_has_preemption(i915)); - igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) >= 8); for_each_ctx_engine(i915, ctx, e) { igt_dynamic_f("%s", e->name) @@ -3512,7 +3512,7 @@ int igt_main() * engines which seems to be there * only on Gen8+ */ - igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) >= 8); } igt_describe(test_syncobj_timeline_chain_engines_desc); diff --git a/tests/intel/gem_exec_flush.c b/tests/intel/gem_exec_flush.c index cd8d32810..0155c2824 100644 --- a/tests/intel/gem_exec_flush.c +++ b/tests/intel/gem_exec_flush.c @@ -1579,7 +1579,7 @@ static uint32_t movnt(uint32_t *map, int i) static void run(int fd, unsigned ring, int nchild, int timeout, unsigned flags) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); /* The crux of this testing is whether writes by the GPU are coherent * from the CPU. @@ -1870,7 +1870,7 @@ enum batch_mode { static void batch(int fd, unsigned ring, int nchild, int timeout, enum batch_mode mode, unsigned flags) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); if (mode == BATCH_GTT) gem_require_mappable_ggtt(fd); diff --git a/tests/intel/gem_exec_gttfill.c b/tests/intel/gem_exec_gttfill.c index 4275d2bea..8f179e49c 100644 --- a/tests/intel/gem_exec_gttfill.c +++ b/tests/intel/gem_exec_gttfill.c @@ -141,7 +141,7 @@ static void submit(int fd, uint64_t ahnd, unsigned int gen, static void fillgtt(int fd, const intel_ctx_t *ctx, unsigned ring, int timeout) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_execbuffer2 execbuf; struct drm_i915_gem_relocation_entry reloc[2]; unsigned engines[I915_EXEC_RING_MASK + 1]; diff --git a/tests/intel/gem_exec_latency.c b/tests/intel/gem_exec_latency.c index 36ad5d23a..90af4e372 100644 --- a/tests/intel/gem_exec_latency.c +++ b/tests/intel/gem_exec_latency.c @@ -140,7 +140,7 @@ static void latency_on_ring(int fd, const intel_ctx_t *ctx, const struct intel_execution_engine2 *e, unsigned flags) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); const int has_64bit_reloc = gen >= 8; struct drm_i915_gem_exec_object2 obj[3]; struct drm_i915_gem_relocation_entry reloc; @@ -290,7 +290,7 @@ static void latency_from_ring(int fd, const intel_ctx_t *base_ctx, const struct intel_execution_engine2 *e, unsigned flags) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); const int has_64bit_reloc = gen >= 8; struct drm_i915_gem_exec_object2 obj[3]; struct drm_i915_gem_relocation_entry reloc; @@ -958,7 +958,7 @@ int igt_main() igt_subtest_group() { igt_fixture() - igt_require(intel_gen(intel_get_drm_devid(device)) >= 7); + igt_require(intel_gen_legacy(intel_get_drm_devid(device)) >= 7); test_each_engine("rthog-submit", device, ctx, e) rthog_latency_on_ring(device, ctx, e); diff --git a/tests/intel/gem_exec_nop.c b/tests/intel/gem_exec_nop.c index 975ec35d0..3856d12fd 100644 --- a/tests/intel/gem_exec_nop.c +++ b/tests/intel/gem_exec_nop.c @@ -154,7 +154,7 @@ static void poll_ring(int fd, const intel_ctx_t *ctx, const struct intel_execution_engine2 *e, int timeout) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_execbuffer2 execbuf; struct drm_i915_gem_exec_object2 obj; struct drm_i915_gem_relocation_entry reloc[4], *r; @@ -265,7 +265,7 @@ static void poll_ring(int fd, const intel_ctx_t *ctx, static void poll_sequential(int fd, const intel_ctx_t *ctx, const char *name, int timeout) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); const struct intel_execution_engine2 *e; struct drm_i915_gem_execbuffer2 execbuf; struct drm_i915_gem_exec_object2 obj[2]; diff --git a/tests/intel/gem_exec_parallel.c b/tests/intel/gem_exec_parallel.c index 3cdae1156..4e02efb20 100644 --- a/tests/intel/gem_exec_parallel.c +++ b/tests/intel/gem_exec_parallel.c @@ -255,7 +255,7 @@ static void handle_close(int fd, unsigned int flags, uint32_t handle, void *data static void all(int fd, const intel_ctx_t *ctx, struct intel_execution_engine2 *engine, unsigned flags) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); unsigned engines[I915_EXEC_RING_MASK + 1], nengine; uint32_t scratch[NUMOBJ], handle[NUMOBJ]; struct thread *threads; diff --git a/tests/intel/gem_exec_params.c b/tests/intel/gem_exec_params.c index 3ba4c530b..ca700c5db 100644 --- a/tests/intel/gem_exec_params.c +++ b/tests/intel/gem_exec_params.c @@ -148,7 +148,7 @@ static bool has_resource_streamer(int fd) static void test_batch_first(int fd) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_execbuffer2 execbuf; struct drm_i915_gem_exec_object2 obj[3]; struct drm_i915_gem_relocation_entry reloc[2]; @@ -566,7 +566,7 @@ int igt_main() } igt_subtest("rel-constants-invalid-rel-gen5") { - igt_require(intel_gen(devid) > 5); + igt_require(intel_gen_legacy(devid) > 5); execbuf.flags = I915_EXEC_RENDER | I915_EXEC_CONSTANTS_REL_SURFACE; RUN_FAIL(EINVAL); } @@ -583,7 +583,7 @@ int igt_main() } igt_subtest("sol-reset-not-gen7") { - igt_require(intel_gen(devid) != 7); + igt_require(intel_gen_legacy(devid) != 7); execbuf.flags = I915_EXEC_RENDER | I915_EXEC_GEN7_SOL_RESET; RUN_FAIL(EINVAL); } @@ -632,7 +632,7 @@ int igt_main() /* rsvd1 aka context id is already exercised by gem_ctx_bad_exec */ igt_subtest("cliprects-invalid") { - igt_require(intel_gen(devid) >= 5); + igt_require(intel_gen_legacy(devid) >= 5); execbuf.flags = 0; execbuf.num_cliprects = 1; RUN_FAIL(EINVAL); diff --git a/tests/intel/gem_exec_reloc.c b/tests/intel/gem_exec_reloc.c index eecc20130..6d1f8d06a 100644 --- a/tests/intel/gem_exec_reloc.c +++ b/tests/intel/gem_exec_reloc.c @@ -663,7 +663,7 @@ static void write_dword(int fd, uint64_t target_offset, uint32_t value) { - unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_execbuffer2 execbuf; struct drm_i915_gem_exec_object2 obj[2]; struct drm_i915_gem_relocation_entry reloc; @@ -865,7 +865,7 @@ static void check_bo(int fd, uint32_t handle) static void active(int fd, const intel_ctx_t *ctx, unsigned engine) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_relocation_entry reloc; struct drm_i915_gem_exec_object2 obj[2]; struct drm_i915_gem_execbuffer2 execbuf; @@ -944,7 +944,7 @@ static void active(int fd, const intel_ctx_t *ctx, unsigned engine) static bool has_64b_reloc(int fd) { - return intel_gen(intel_get_drm_devid(fd)) >= 8; + return intel_gen_legacy(intel_get_drm_devid(fd)) >= 8; } #define NORELOC 1 @@ -1268,7 +1268,7 @@ static void basic_softpin(int fd) static uint64_t concurrent_relocs(int i915, int idx, int count) { struct drm_i915_gem_relocation_entry *reloc; - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); unsigned long sz; int offset; @@ -1371,7 +1371,7 @@ static void concurrent_child(int i915, const intel_ctx_t *ctx, static uint32_t create_concurrent_batch(int i915, unsigned int count) { - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); size_t sz = ALIGN(4 * (1 + 4 * count), 4096); uint32_t handle = gem_create(i915, sz); uint32_t *map, *cs; diff --git a/tests/intel/gem_exec_schedule.c b/tests/intel/gem_exec_schedule.c index da88e81a6..82059866c 100644 --- a/tests/intel/gem_exec_schedule.c +++ b/tests/intel/gem_exec_schedule.c @@ -176,7 +176,7 @@ static uint32_t __store_dword(int fd, uint64_t ahnd, const intel_ctx_t *ctx, uint32_t cork, uint64_t cork_offset, int fence, unsigned write_domain) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_exec_object2 obj[3]; struct drm_i915_gem_relocation_entry reloc; struct drm_i915_gem_execbuffer2 execbuf; @@ -659,7 +659,7 @@ static void timeslice(int i915, const intel_ctx_cfg_t *cfg, */ igt_require(gem_scheduler_has_timeslicing(i915)); - igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) >= 8); ctx[0] = intel_ctx_create(i915, cfg); obj.handle = timeslicing_batches(i915, &offset); @@ -761,7 +761,7 @@ static void timesliceN(int i915, const intel_ctx_cfg_t *cfg, */ igt_require(gem_scheduler_has_timeslicing(i915)); - igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) >= 8); /* No coupling between requests; free to timeslice */ @@ -796,7 +796,7 @@ static void lateslice(int i915, const intel_ctx_cfg_t *cfg, uint64_t ahnd[3]; igt_require(gem_scheduler_has_timeslicing(i915)); - igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) >= 8); ctx = intel_ctx_create(i915, cfg); ahnd[0] = get_reloc_ahnd(i915, ctx->id); @@ -909,7 +909,7 @@ static void submit_slice(int i915, const intel_ctx_cfg_t *cfg, */ igt_require(gem_scheduler_has_timeslicing(i915)); - igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) >= 8); igt_require(gem_has_vm(i915)); engine_cfg.vm = gem_vm_create(i915); @@ -1277,7 +1277,7 @@ static void semaphore_resolve(int i915, const intel_ctx_cfg_t *cfg, static void semaphore_noskip(int i915, const intel_ctx_cfg_t *cfg, unsigned long flags) { - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); const struct intel_execution_engine2 *outer, *inner; const intel_ctx_t *ctx0, *ctx1; uint64_t ahnd; @@ -1371,7 +1371,7 @@ noreorder(int i915, const intel_ctx_cfg_t *cfg, unsigned int engine, int prio, unsigned int flags) #define CORKED 0x1 { - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); const struct intel_execution_engine2 *e; struct drm_i915_gem_exec_object2 obj = { .handle = gem_create(i915, 4096), @@ -2305,7 +2305,7 @@ static void wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring) static void reorder_wide(int fd, const intel_ctx_cfg_t *cfg, unsigned ring) { const unsigned int ring_size = gem_submission_measure(fd, cfg, ring); - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); const int priorities[] = { MIN_PRIO, MAX_PRIO }; struct drm_i915_gem_relocation_entry reloc; struct drm_i915_gem_exec_object2 obj[2]; @@ -3066,7 +3066,7 @@ static int cmp_u32(const void *A, const void *B) static uint32_t read_ctx_timestamp(int i915, const intel_ctx_t *ctx, const struct intel_execution_engine2 *e) { - const int use_64b = intel_gen(intel_get_drm_devid(i915)) >= 8; + const int use_64b = intel_gen_legacy(intel_get_drm_devid(i915)) >= 8; const uint32_t base = gem_engine_mmio_base(i915, e->name); struct drm_i915_gem_relocation_entry reloc; struct drm_i915_gem_exec_object2 obj = { @@ -3269,7 +3269,7 @@ int igt_main() igt_subtest_group() { igt_fixture() { igt_require(gem_scheduler_has_timeslicing(fd)); - igt_require(intel_gen(intel_get_drm_devid(fd)) >= 8); + igt_require(intel_gen_legacy(intel_get_drm_devid(fd)) >= 8); } test_each_engine("fairslice", fd, ctx, e) diff --git a/tests/intel/gem_exec_store.c b/tests/intel/gem_exec_store.c index 01569ddd6..83aada72c 100644 --- a/tests/intel/gem_exec_store.c +++ b/tests/intel/gem_exec_store.c @@ -71,7 +71,7 @@ IGT_TEST_DESCRIPTION("Exercise store dword functionality using execbuf-ioctl"); static void store_dword(int fd, const intel_ctx_t *ctx, const struct intel_execution_engine2 *e) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_exec_object2 obj[2]; struct drm_i915_gem_relocation_entry reloc; struct drm_i915_gem_execbuffer2 execbuf; @@ -152,7 +152,7 @@ static void store_cachelines(int fd, const intel_ctx_t *ctx, const struct intel_execution_engine2 *e, unsigned int flags) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_exec_object2 *obj; struct drm_i915_gem_relocation_entry *reloc; struct drm_i915_gem_execbuffer2 execbuf; @@ -248,7 +248,7 @@ static void store_cachelines(int fd, const intel_ctx_t *ctx, static void store_all(int fd, const intel_ctx_t *ctx) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_exec_object2 obj[2]; struct intel_execution_engine2 *engine; struct drm_i915_gem_relocation_entry *reloc; diff --git a/tests/intel/gem_exec_suspend.c b/tests/intel/gem_exec_suspend.c index de81e1ef1..be78dd476 100644 --- a/tests/intel/gem_exec_suspend.c +++ b/tests/intel/gem_exec_suspend.c @@ -143,7 +143,7 @@ static void test_all(int fd, const intel_ctx_t *ctx, unsigned flags, uint32_t re static void run_test(int fd, const intel_ctx_t *ctx, unsigned engine, unsigned flags, uint32_t region) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); const uint32_t bbe = MI_BATCH_BUFFER_END; struct drm_i915_gem_exec_object2 obj[2]; struct drm_i915_gem_relocation_entry reloc; diff --git a/tests/intel/gem_exec_whisper.c b/tests/intel/gem_exec_whisper.c index 1d01577b4..00f5ae1b0 100644 --- a/tests/intel/gem_exec_whisper.c +++ b/tests/intel/gem_exec_whisper.c @@ -164,7 +164,7 @@ static void verify_reloc(int fd, uint32_t handle, { if (VERIFY) { uint64_t target = 0; - if (intel_gen(intel_get_drm_devid(fd)) >= 8) + if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 8) gem_read(fd, handle, reloc->offset, &target, 8); else gem_read(fd, handle, reloc->offset, &target, 4); @@ -203,7 +203,7 @@ static void init_hang(struct hang *h, int fd, const intel_ctx_cfg_t *cfg) h->fd = drm_reopen_driver(fd); igt_allow_hang(h->fd, 0, 0); - gen = intel_gen(intel_get_drm_devid(h->fd)); + gen = intel_gen_legacy(intel_get_drm_devid(h->fd)); if (gem_has_contexts(fd)) { h->ctx = intel_ctx_create(h->fd, cfg); @@ -293,7 +293,7 @@ static void whisper(int fd, const intel_ctx_t *ctx, unsigned engine, unsigned flags) { const uint32_t bbe = MI_BATCH_BUFFER_END; - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); const unsigned int ncpus = sysconf(_SC_NPROCESSORS_ONLN); struct drm_i915_gem_exec_object2 batches[QLEN]; struct drm_i915_gem_relocation_entry inter[QLEN]; diff --git a/tests/intel/gem_fenced_exec_thrash.c b/tests/intel/gem_fenced_exec_thrash.c index 59aa32bbd..b84b3fb9d 100644 --- a/tests/intel/gem_fenced_exec_thrash.c +++ b/tests/intel/gem_fenced_exec_thrash.c @@ -217,7 +217,7 @@ int igt_main() run_test(fd, num_fences, 0, flags); } igt_subtest("too-many-fences") - run_test(fd, num_fences + 1, intel_gen(devid) >= 4 ? 0 : ENOBUFS, 0); + run_test(fd, num_fences + 1, intel_gen_legacy(devid) >= 4 ? 0 : ENOBUFS, 0); igt_fixture() drm_close_driver(fd); diff --git a/tests/intel/gem_gtt_hog.c b/tests/intel/gem_gtt_hog.c index c2853665f..6d3718c08 100644 --- a/tests/intel/gem_gtt_hog.c +++ b/tests/intel/gem_gtt_hog.c @@ -177,7 +177,7 @@ int igt_simple_main() data.fd = drm_open_driver(DRIVER_INTEL); data.devid = intel_get_drm_devid(data.fd); - data.intel_gen = intel_gen(data.devid); + data.intel_gen = intel_gen_legacy(data.devid); gettimeofday(&start, NULL); igt_fork(child, ARRAY_SIZE(children)) diff --git a/tests/intel/gem_linear_blits.c b/tests/intel/gem_linear_blits.c index c1733138b..5cbfd8ae6 100644 --- a/tests/intel/gem_linear_blits.c +++ b/tests/intel/gem_linear_blits.c @@ -110,7 +110,7 @@ static void copy(int fd, uint64_t ahnd, uint32_t dst, uint32_t src, batch[i++] = XY_SRC_COPY_BLT_CMD | XY_SRC_COPY_BLT_WRITE_ALPHA | XY_SRC_COPY_BLT_WRITE_RGB; - if (intel_gen(intel_get_drm_devid(fd)) >= 8) + if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 8) batch[i - 1] |= 8; else batch[i - 1] |= 6; @@ -121,12 +121,12 @@ static void copy(int fd, uint64_t ahnd, uint32_t dst, uint32_t src, batch[i++] = 0; /* dst x1,y1 */ batch[i++] = (HEIGHT << 16) | WIDTH; /* dst x2,y2 */ batch[i++] = obj[0].offset; - if (intel_gen(devid) >= 8) + if (intel_gen_legacy(devid) >= 8) batch[i++] = obj[0].offset >> 32; batch[i++] = 0; /* src x1,y1 */ batch[i++] = WIDTH * 4; batch[i++] = obj[1].offset; - if (intel_gen(devid) >= 8) + if (intel_gen_legacy(devid) >= 8) batch[i++] = obj[1].offset >> 32; batch[i++] = MI_BATCH_BUFFER_END; batch[i++] = MI_NOOP; @@ -160,7 +160,7 @@ static void copy(int fd, uint64_t ahnd, uint32_t dst, uint32_t src, reloc[1].target_handle = src; reloc[1].delta = 0; reloc[1].offset = 7 * sizeof(batch[0]); - if (intel_gen(intel_get_drm_devid(fd)) >= 8) + if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 8) reloc[1].offset += sizeof(batch[0]); reloc[1].presumed_offset = obj[1].offset; reloc[1].read_domains = I915_GEM_DOMAIN_RENDER; diff --git a/tests/intel/gem_media_vme.c b/tests/intel/gem_media_vme.c index e47f4df21..c5ea8a9b4 100644 --- a/tests/intel/gem_media_vme.c +++ b/tests/intel/gem_media_vme.c @@ -133,7 +133,7 @@ int igt_simple_main() igt_assert(ctx); /* ICL hangs if non-VME enabled slices are enabled with a VME kernel. */ - if (intel_gen(devid) == 11) + if (intel_gen_legacy(devid) == 11) shut_non_vme_subslices(drm_fd, ctx); igt_fork_hang_detector(drm_fd); diff --git a/tests/intel/gem_mmap_gtt.c b/tests/intel/gem_mmap_gtt.c index 51f2a5fee..963709025 100644 --- a/tests/intel/gem_mmap_gtt.c +++ b/tests/intel/gem_mmap_gtt.c @@ -1192,14 +1192,14 @@ test_hang_user(int i915) static int min_tile_width(uint32_t devid, int tiling) { if (tiling < 0) { - if (intel_gen(devid) >= 4) + if (intel_gen_legacy(devid) >= 4) return 4096 - min_tile_width(devid, -tiling); else return 1024; } - if (intel_gen(devid) == 2) + if (intel_gen_legacy(devid) == 2) return 128; else if (tiling == I915_TILING_X) return 512; @@ -1212,15 +1212,15 @@ static int min_tile_width(uint32_t devid, int tiling) static int max_tile_width(uint32_t devid, int tiling) { if (tiling < 0) { - if (intel_gen(devid) >= 4) + if (intel_gen_legacy(devid) >= 4) return 4096 + min_tile_width(devid, -tiling); else return 2048; } - if (intel_gen(devid) >= 7) + if (intel_gen_legacy(devid) >= 7) return 256 << 10; - else if (intel_gen(devid) >= 4) + else if (intel_gen_legacy(devid) >= 4) return 128 << 10; else return 8 << 10; @@ -1268,7 +1268,7 @@ test_huge_bo(int fd, int huge, int tiling) * a quarter size one instead. */ if (tiling && - intel_gen(intel_get_drm_devid(fd)) < 4 && + intel_gen_legacy(intel_get_drm_devid(fd)) < 4 && size >= gem_global_aperture_size(fd) / 2) size /= 2; break; diff --git a/tests/intel/gem_read_read_speed.c b/tests/intel/gem_read_read_speed.c index 965781ddb..65e4d8335 100644 --- a/tests/intel/gem_read_read_speed.c +++ b/tests/intel/gem_read_read_speed.c @@ -255,7 +255,7 @@ int igt_main() igt_require_gem(fd); devid = intel_get_drm_devid(fd); - igt_require(intel_gen(devid) >= 6); + igt_require(intel_gen_legacy(devid) >= 6); rendercopy = igt_get_render_copyfunc(fd); igt_require(rendercopy); diff --git a/tests/intel/gem_render_copy.c b/tests/intel/gem_render_copy.c index 5e7941d5a..b561c9039 100644 --- a/tests/intel/gem_render_copy.c +++ b/tests/intel/gem_render_copy.c @@ -223,7 +223,7 @@ copy_from_linear_buf(data_t *data, struct intel_buf *src, struct intel_buf *dst) static void *linear_copy_ccs(data_t *data, struct intel_buf *buf) { void *ccs_data, *linear; - unsigned int gen = intel_gen(data->devid); + unsigned int gen = intel_gen_legacy(data->devid); int ccs_size = intel_buf_ccs_width(gen, buf) * intel_buf_ccs_height(gen, buf); int buf_size = intel_buf_size(buf); @@ -362,7 +362,7 @@ scratch_buf_check_all(data_t *data, static void scratch_buf_ccs_check(data_t *data, struct intel_buf *buf) { - unsigned int gen = intel_gen(data->devid); + unsigned int gen = intel_gen_legacy(data->devid); int ccs_size = intel_buf_ccs_width(gen, buf) * intel_buf_ccs_height(gen, buf); uint8_t *linear; @@ -460,12 +460,12 @@ static void test(data_t *data, uint32_t src_tiling, uint32_t dst_tiling, dst_compression == I915_COMPRESSION_NONE); /* no Yf before gen9 */ - if (intel_gen(data->devid) < 9) + if (intel_gen_legacy(data->devid) < 9) num_src--; if (src_tiling == I915_TILING_Yf || dst_tiling == I915_TILING_Yf || src_compressed || dst_compressed) - igt_require(intel_gen(data->devid) >= 9); + igt_require(intel_gen_legacy(data->devid) >= 9); ibb = intel_bb_create(data->drm_fd, 4096); diff --git a/tests/intel/gem_ringfill.c b/tests/intel/gem_ringfill.c index 7d0be81de..3d065f81e 100644 --- a/tests/intel/gem_ringfill.c +++ b/tests/intel/gem_ringfill.c @@ -207,7 +207,7 @@ static void setup_execbuf(int fd, const intel_ctx_t *ctx, struct drm_i915_gem_relocation_entry *reloc, unsigned int ring) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); const uint32_t bbe = MI_BATCH_BUFFER_END; uint32_t *batch, *b; int i; @@ -428,7 +428,7 @@ int igt_main() igt_require_gem(fd); igt_require(has_lut_handle(fd)); - gen = intel_gen(intel_get_drm_devid(fd)); + gen = intel_gen_legacy(intel_get_drm_devid(fd)); if (gen > 3 && gen < 6) { /* ctg and ilk need secure batches */ igt_device_set_master(fd); master = true; diff --git a/tests/intel/gem_set_tiling_vs_blt.c b/tests/intel/gem_set_tiling_vs_blt.c index ec08e1c13..182b8d6b0 100644 --- a/tests/intel/gem_set_tiling_vs_blt.c +++ b/tests/intel/gem_set_tiling_vs_blt.c @@ -164,7 +164,7 @@ static void do_test(struct buf_ops *bops, uint32_t tiling, unsigned stride, blt_stride = stride; blt_bits = 0; - if (intel_gen(ibb->devid) >= 4 && tiling != I915_TILING_NONE) { + if (intel_gen_legacy(ibb->devid) >= 4 && tiling != I915_TILING_NONE) { blt_stride /= 4; blt_bits = XY_SRC_COPY_BLT_SRC_TILED; } diff --git a/tests/intel/gem_softpin.c b/tests/intel/gem_softpin.c index 7b3fc26de..c703d3687 100644 --- a/tests/intel/gem_softpin.c +++ b/tests/intel/gem_softpin.c @@ -504,7 +504,7 @@ static void test_reverse(int i915) static uint64_t busy_batch(int fd) { - unsigned const int gen = intel_gen(intel_get_drm_devid(fd)); + unsigned const int gen = intel_gen_legacy(intel_get_drm_devid(fd)); const int has_64bit_reloc = gen >= 8; struct drm_i915_gem_execbuffer2 execbuf; struct drm_i915_gem_exec_object2 object[2]; @@ -692,7 +692,7 @@ static void xchg_offset(void *array, unsigned i, unsigned j) enum sleep { NOSLEEP, SUSPEND, HIBERNATE }; static void test_noreloc(int fd, enum sleep sleep, unsigned flags) { - unsigned const int gen = intel_gen(intel_get_drm_devid(fd)); + unsigned const int gen = intel_gen_legacy(intel_get_drm_devid(fd)); const uint32_t size = 4096; const uint32_t bbe = MI_BATCH_BUFFER_END; struct drm_i915_gem_execbuffer2 execbuf; @@ -1021,7 +1021,7 @@ static void submit(int fd, unsigned int gen, static void test_allocator_evict(int fd, const intel_ctx_t *ctx, unsigned ring, int timeout) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_execbuffer2 execbuf; unsigned engines[I915_EXEC_RING_MASK + 1]; volatile uint64_t *shared; diff --git a/tests/intel/gem_streaming_writes.c b/tests/intel/gem_streaming_writes.c index b231bcfef..40f1d3fe4 100644 --- a/tests/intel/gem_streaming_writes.c +++ b/tests/intel/gem_streaming_writes.c @@ -94,7 +94,7 @@ IGT_TEST_DESCRIPTION("Test of streaming writes into active GPU sources"); static void test_streaming(int fd, int mode, int sync) { - const bool has_64bit_addr = intel_gen(intel_get_drm_devid(fd)) >= 8; + const bool has_64bit_addr = intel_gen_legacy(intel_get_drm_devid(fd)) >= 8; const bool do_relocs = gem_has_relocations(fd); struct drm_i915_gem_execbuffer2 execbuf; struct drm_i915_gem_exec_object2 exec[3]; @@ -274,7 +274,7 @@ static void test_streaming(int fd, int mode, int sync) static void test_batch(int fd, int mode, int reverse) { - const bool has_64bit_addr = intel_gen(intel_get_drm_devid(fd)) >= 8; + const bool has_64bit_addr = intel_gen_legacy(intel_get_drm_devid(fd)) >= 8; const bool do_relocs = gem_has_relocations(fd); struct drm_i915_gem_execbuffer2 execbuf; struct drm_i915_gem_exec_object2 exec[3]; diff --git a/tests/intel/gem_sync.c b/tests/intel/gem_sync.c index c6063e8f7..e50ddd303 100644 --- a/tests/intel/gem_sync.c +++ b/tests/intel/gem_sync.c @@ -697,7 +697,7 @@ static void store_ring(int fd, const intel_ctx_t *ctx, unsigned ring, int num_children, int timeout) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct intel_engine_data ied; bool has_relocs = gem_has_relocations(fd); @@ -797,7 +797,7 @@ static void switch_ring(int fd, const intel_ctx_t *ctx, unsigned ring, int num_children, int timeout) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct intel_engine_data ied; bool has_relocs = gem_has_relocations(fd); @@ -981,7 +981,7 @@ static void __store_many(int fd, const intel_ctx_t *ctx, unsigned ring, int timeout, unsigned long *cycles) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); const uint32_t bbe = MI_BATCH_BUFFER_END; struct drm_i915_gem_exec_object2 object[2]; struct drm_i915_gem_execbuffer2 execbuf; @@ -1191,7 +1191,7 @@ sync_all(int fd, const intel_ctx_t *ctx, int num_children, int timeout) static void store_all(int fd, const intel_ctx_t *ctx, int num_children, int timeout) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct intel_engine_data ied; bool has_relocs = gem_has_relocations(fd); diff --git a/tests/intel/gem_tiled_fence_blits.c b/tests/intel/gem_tiled_fence_blits.c index 4eb1194c1..36e995763 100644 --- a/tests/intel/gem_tiled_fence_blits.c +++ b/tests/intel/gem_tiled_fence_blits.c @@ -112,7 +112,7 @@ update_batch(int fd, uint32_t bb_handle, struct drm_i915_gem_relocation_entry *reloc, uint64_t dst_offset, uint64_t src_offset) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); const bool has_64b_reloc = gen >= 8; uint32_t *batch; uint32_t pitch; @@ -202,7 +202,7 @@ static void run_test(int fd, int count, uint64_t end) memset(&eb, 0, sizeof(eb)); eb.buffers_ptr = to_user_pointer(obj); eb.buffer_count = ARRAY_SIZE(obj); - if (intel_gen(intel_get_drm_devid(fd)) >= 6) + if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 6) eb.flags = I915_EXEC_BLT; bo = calloc(count, diff --git a/tests/intel/gem_tiling_max_stride.c b/tests/intel/gem_tiling_max_stride.c index d01c21bca..4690a1de7 100644 --- a/tests/intel/gem_tiling_max_stride.c +++ b/tests/intel/gem_tiling_max_stride.c @@ -86,13 +86,13 @@ int igt_simple_main() devid = intel_get_drm_devid(fd); gem_require_mappable_ggtt(fd); - if (intel_gen(devid) >= 7) { + if (intel_gen_legacy(devid) >= 7) { stride = 256 * 1024; - } else if (intel_gen(devid) >= 4) { + } else if (intel_gen_legacy(devid) >= 4) { stride = 128 * 1024; - } else if (intel_gen(devid) >= 3) { + } else if (intel_gen_legacy(devid) >= 3) { stride = 8 * 1024; - } else if (intel_gen(devid) >= 2) { + } else if (intel_gen_legacy(devid) >= 2) { tile_width = 128; tile_height = 16; stride = 8 * 1024; diff --git a/tests/intel/gem_userptr_blits.c b/tests/intel/gem_userptr_blits.c index d8d227100..344447b43 100644 --- a/tests/intel/gem_userptr_blits.c +++ b/tests/intel/gem_userptr_blits.c @@ -236,7 +236,7 @@ static int copy(int fd, uint32_t dst, uint32_t src) XY_SRC_COPY_BLT_WRITE_ALPHA | XY_SRC_COPY_BLT_WRITE_RGB; - if (intel_gen(devid) >= 8) + if (intel_gen_legacy(devid) >= 8) batch[i - 1] |= 8; else batch[i - 1] |= 6; @@ -247,12 +247,12 @@ static int copy(int fd, uint32_t dst, uint32_t src) batch[i++] = 0; /* dst x1,y1 */ batch[i++] = (HEIGHT << 16) | WIDTH; /* dst x2,y2 */ batch[i++] = lower_32_bits(dst_offset); /* dst reloc*/ - if (intel_gen(devid) >= 8) + if (intel_gen_legacy(devid) >= 8) batch[i++] = upper_32_bits(CANONICAL(dst_offset)); batch[i++] = 0; /* src x1,y1 */ batch[i++] = WIDTH * 4; batch[i++] = lower_32_bits(src_offset); /* src reloc */ - if (intel_gen(devid) >= 8) + if (intel_gen_legacy(devid) >= 8) batch[i++] = upper_32_bits(CANONICAL(src_offset)); batch[i++] = MI_BATCH_BUFFER_END; batch[i++] = MI_NOOP; @@ -286,7 +286,7 @@ static int copy(int fd, uint32_t dst, uint32_t src) reloc[1].target_handle = src; reloc[1].delta = 0; reloc[1].offset = 7 * sizeof(batch[0]); - if (intel_gen(intel_get_drm_devid(fd)) >= 8) + if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 8) reloc[1].offset += sizeof(batch[0]); reloc[1].presumed_offset = 0; reloc[1].read_domains = I915_GEM_DOMAIN_RENDER; @@ -389,7 +389,7 @@ blit(int fd, uint32_t dst, uint32_t src, uint32_t *all_bo, int n_bo) reloc[1].target_handle = src; reloc[1].delta = 0; reloc[1].offset = 7 * sizeof(batch[0]); - if (intel_gen(devid) >= 8) + if (intel_gen_legacy(devid) >= 8) reloc[1].offset += sizeof(batch[0]); reloc[1].presumed_offset = src_offset; reloc[1].read_domains = I915_GEM_DOMAIN_RENDER; @@ -399,7 +399,7 @@ blit(int fd, uint32_t dst, uint32_t src, uint32_t *all_bo, int n_bo) batch[i++] = XY_SRC_COPY_BLT_CMD | XY_SRC_COPY_BLT_WRITE_ALPHA | XY_SRC_COPY_BLT_WRITE_RGB; - if (intel_gen(devid) >= 8) + if (intel_gen_legacy(devid) >= 8) batch[i - 1] |= 8; else batch[i - 1] |= 6; @@ -409,12 +409,12 @@ blit(int fd, uint32_t dst, uint32_t src, uint32_t *all_bo, int n_bo) batch[i++] = 0; /* dst x1,y1 */ batch[i++] = (HEIGHT << 16) | WIDTH; /* dst x2,y2 */ batch[i++] = lower_32_bits(dst_offset); - if (intel_gen(devid) >= 8) + if (intel_gen_legacy(devid) >= 8) batch[i++] = upper_32_bits(CANONICAL(dst_offset)); batch[i++] = 0; /* src x1,y1 */ batch[i++] = WIDTH * 4; batch[i++] = lower_32_bits(src_offset); - if (intel_gen(devid) >= 8) + if (intel_gen_legacy(devid) >= 8) batch[i++] = upper_32_bits(CANONICAL(src_offset)); batch[i++] = MI_BATCH_BUFFER_END; batch[i++] = MI_NOOP; @@ -452,7 +452,7 @@ blit(int fd, uint32_t dst, uint32_t src, uint32_t *all_bo, int n_bo) static void store_dword(int fd, uint32_t target, uint32_t offset, uint32_t value) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_exec_object2 obj[2]; struct drm_i915_gem_relocation_entry reloc; struct drm_i915_gem_execbuffer2 execbuf; @@ -1420,7 +1420,7 @@ static void store_dword_rand(int i915, const intel_ctx_t *ctx, uint32_t target, uint64_t sz, int count) { - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); struct drm_i915_gem_relocation_entry *reloc; struct drm_i915_gem_exec_object2 obj[2]; struct drm_i915_gem_execbuffer2 exec; diff --git a/tests/intel/gem_vm_create.c b/tests/intel/gem_vm_create.c index c30be7fca..67b3f235b 100644 --- a/tests/intel/gem_vm_create.c +++ b/tests/intel/gem_vm_create.c @@ -279,7 +279,7 @@ static void execbuf(int i915) static void write_to_address(int fd, uint32_t ctx, uint64_t addr, uint32_t value) { - const unsigned int gen = intel_gen(intel_get_drm_devid(fd)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(fd)); struct drm_i915_gem_exec_object2 batch = { .handle = gem_create(fd, 4096) }; diff --git a/tests/intel/gem_watchdog.c b/tests/intel/gem_watchdog.c index efa9ebebe..354e9fd3f 100644 --- a/tests/intel/gem_watchdog.c +++ b/tests/intel/gem_watchdog.c @@ -333,7 +333,7 @@ static void delay(int i915, uint64_t addr, uint64_t ns) { - const int use_64b = intel_gen(intel_get_drm_devid(i915)) >= 8; + const int use_64b = intel_gen_legacy(intel_get_drm_devid(i915)) >= 8; const uint32_t base = gem_engine_mmio_base(i915, e->name); #define CS_GPR(x) (base + 0x600 + 8 * (x)) #define RUNTIME (base + 0x3a8) @@ -467,7 +467,7 @@ far_delay(int i915, unsigned long delay, unsigned int target, uint32_t handle = gem_create(i915, 4096); unsigned long count, submit; - igt_require(intel_gen(intel_get_drm_devid(i915)) >= 8); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) >= 8); igt_require(gem_class_can_store_dword(i915, e->class)); fcntl(i915, F_SETFL, fcntl(i915, F_GETFL) | O_NONBLOCK); diff --git a/tests/intel/gem_workarounds.c b/tests/intel/gem_workarounds.c index 07f0a7da6..467c91bdf 100644 --- a/tests/intel/gem_workarounds.c +++ b/tests/intel/gem_workarounds.c @@ -312,7 +312,7 @@ int igt_main() intel_mmio_use_pci_bar(&mmio_data, igt_device_get_pci_device(device)); - gen = intel_gen(intel_get_drm_devid(device)); + gen = intel_gen_legacy(intel_get_drm_devid(device)); fd = igt_debugfs_open(device, "i915_wa_registers", O_RDONLY); file = fdopen(fd, "r"); diff --git a/tests/intel/gen7_exec_parse.c b/tests/intel/gen7_exec_parse.c index b9f5de234..3a0dcada9 100644 --- a/tests/intel/gen7_exec_parse.c +++ b/tests/intel/gen7_exec_parse.c @@ -499,7 +499,7 @@ int igt_main() handle = gem_create(fd, 4096); /* ATM cmd parser only exists on gen7. */ - igt_require(intel_gen(intel_get_drm_devid(fd)) == 7); + igt_require(intel_gen_legacy(intel_get_drm_devid(fd)) == 7); igt_fork_hang_detector(fd); } diff --git a/tests/intel/gen9_exec_parse.c b/tests/intel/gen9_exec_parse.c index 961bf5e46..c5c1f6291 100644 --- a/tests/intel/gen9_exec_parse.c +++ b/tests/intel/gen9_exec_parse.c @@ -1217,7 +1217,7 @@ int igt_main() gem_require_blitter(i915); igt_require(gem_cmdparser_version(i915) >= 10); - igt_require(intel_gen(intel_get_drm_devid(i915)) == 9); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) == 9); handle = gem_create(i915, HANDLE_SIZE); diff --git a/tests/intel/i915_getparams_basic.c b/tests/intel/i915_getparams_basic.c index abd5dd57a..a1ed38ef3 100644 --- a/tests/intel/i915_getparams_basic.c +++ b/tests/intel/i915_getparams_basic.c @@ -89,14 +89,14 @@ subslice_total(void) int ret; ret = getparam(I915_PARAM_SUBSLICE_TOTAL, (int*)&subslice_total); - igt_skip_on_f(ret == -EINVAL && intel_gen(devid), "Interface not supported by kernel\n"); + igt_skip_on_f(ret == -EINVAL && intel_gen_legacy(devid), "Interface not supported by kernel\n"); if (ret) { /* * These devices are not required to implement the * interface. If they do not, -ENODEV must be returned. */ - if ((intel_gen(devid) < 8) || + if ((intel_gen_legacy(devid) < 8) || IS_BROADWELL(devid) || igt_run_in_simulation()) { igt_assert_eq(ret, -ENODEV); @@ -133,7 +133,7 @@ eu_total(void) * These devices are not required to implement the * interface. If they do not, -ENODEV must be returned. */ - if ((intel_gen(devid) < 8) || + if ((intel_gen_legacy(devid) < 8) || IS_BROADWELL(devid) || igt_run_in_simulation()) { igt_assert_eq(ret, -ENODEV); diff --git a/tests/intel/i915_module_load.c b/tests/intel/i915_module_load.c index 26e30a100..23fdfd968 100644 --- a/tests/intel/i915_module_load.c +++ b/tests/intel/i915_module_load.c @@ -77,7 +77,7 @@ IGT_TEST_DESCRIPTION("Tests the i915 module loading."); static void store_all(int i915) { - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); uint32_t engines[I915_EXEC_RING_MASK + 1]; uint32_t batch[16]; uint64_t ahnd, offset, bb_offset; diff --git a/tests/intel/i915_pm_rc6_residency.c b/tests/intel/i915_pm_rc6_residency.c index 346796fb2..7e80913a2 100644 --- a/tests/intel/i915_pm_rc6_residency.c +++ b/tests/intel/i915_pm_rc6_residency.c @@ -311,7 +311,7 @@ static void restore_freq(int sig) static void bg_load(int i915, const intel_ctx_t *ctx, uint64_t engine_flags, unsigned int flags, unsigned long *ctl, unsigned int gt) { - const bool has_execlists = intel_gen(intel_get_drm_devid(i915)) >= 8; + const bool has_execlists = intel_gen_legacy(intel_get_drm_devid(i915)) >= 8; struct sigaction act = { .sa_handler = sighandler }; @@ -392,7 +392,7 @@ static void rc6_idle(int i915, const intel_ctx_t *ctx, uint64_t flags, unsigned { const int64_t duration_ns = 2 * SLEEP_DURATION * (int64_t)NSEC_PER_SEC; const int tolerance = 20; /* Some RC6 is better than none! */ - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); struct { const char *name; unsigned int flags; @@ -500,7 +500,7 @@ static void rc6_fence(int i915, unsigned int gt) { const int64_t duration_ns = SLEEP_DURATION * (int64_t)NSEC_PER_SEC; const int tolerance = 20; /* Some RC6 is better than none! */ - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); const struct intel_execution_engine2 *e; const intel_ctx_t *ctx; struct power_sample sample[2]; diff --git a/tests/intel/i915_pm_rpm.c b/tests/intel/i915_pm_rpm.c index b4da27b21..9df563347 100644 --- a/tests/intel/i915_pm_rpm.c +++ b/tests/intel/i915_pm_rpm.c @@ -736,7 +736,7 @@ static void debugfs_forcewake_user_subtest(void) { int fd, rc; - igt_require(intel_gen(ms_data.devid) >= 6); + igt_require(intel_gen_legacy(ms_data.devid) >= 6); disable_all_screens_and_wait(&ms_data); diff --git a/tests/intel/i915_pm_sseu.c b/tests/intel/i915_pm_sseu.c index 5dd571a45..7838fc8a8 100644 --- a/tests/intel/i915_pm_sseu.c +++ b/tests/intel/i915_pm_sseu.c @@ -300,7 +300,7 @@ gem_init(void) gem.init = 1; gem.devid = intel_get_drm_devid(gem.drm_fd); - gem.gen = intel_gen(gem.devid); + gem.gen = intel_gen_legacy(gem.devid); igt_require_f(gem.gen >= 8, "SSEU power gating only relevant for Gen8+"); diff --git a/tests/intel/kms_ccs.c b/tests/intel/kms_ccs.c index ea23b6581..30f2c9465 100644 --- a/tests/intel/kms_ccs.c +++ b/tests/intel/kms_ccs.c @@ -565,7 +565,7 @@ static void access_flat_ccs_surface(struct igt_fb *fb, bool verify_compression) uint16_t cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; uint8_t uc_mocs = intel_get_uc_mocs_index(fb->fd); uint8_t comp_pat_index = intel_get_pat_idx_wt(fb->fd); - uint32_t region = (intel_gen(intel_get_drm_devid(fb->fd)) >= 20 && + uint32_t region = (intel_gen_legacy(intel_get_drm_devid(fb->fd)) >= 20 && xe_has_vram(fb->fd)) ? REGION_LMEM(0) : REGION_SMEM; struct drm_xe_engine_class_instance inst = { @@ -645,7 +645,7 @@ static void fill_fb_random(int drm_fd, igt_fb_t *fb) igt_assert_eq(0, gem_munmap(map, fb->size)); /* randomize also ccs surface on Xe2 */ - if (intel_gen(intel_get_drm_devid(drm_fd)) >= 20) + if (intel_gen_legacy(intel_get_drm_devid(drm_fd)) >= 20) access_flat_ccs_surface(fb, false); } @@ -1143,10 +1143,10 @@ static void test_output(data_t *data, const int testnum) igt_subtest_with_dynamic_f("%s-%s", tests[testnum].testname, ccs_modifiers[i].str) { if (ccs_modifiers[i].modifier == I915_FORMAT_MOD_4_TILED_BMG_CCS || ccs_modifiers[i].modifier == I915_FORMAT_MOD_4_TILED_LNL_CCS) { - igt_require_f(intel_gen(dev_id) >= 20, + igt_require_f(intel_gen_legacy(dev_id) >= 20, "Xe2 platform needed.\n"); } else { - igt_require_f(intel_gen(dev_id) < 20, + igt_require_f(intel_gen_legacy(dev_id) < 20, "Older than Xe2 platform needed.\n"); } diff --git a/tests/intel/kms_fbcon_fbt.c b/tests/intel/kms_fbcon_fbt.c index 1b9e535eb..edf5c0d1b 100644 --- a/tests/intel/kms_fbcon_fbt.c +++ b/tests/intel/kms_fbcon_fbt.c @@ -179,7 +179,7 @@ static bool fbc_wait_until_update(struct drm_info *drm) * For older GENs FBC is still expected to be disabled as it still * relies on a tiled and fenceable framebuffer to track modifications. */ - if (intel_gen(intel_get_drm_devid(drm->fd)) >= 9) { + if (intel_gen_legacy(intel_get_drm_devid(drm->fd)) >= 9) { if (!fbc_wait_until_enabled(drm->debugfs_fd)) return false; /* diff --git a/tests/intel/kms_frontbuffer_tracking.c b/tests/intel/kms_frontbuffer_tracking.c index 7dd52f982..c8c2ce240 100644 --- a/tests/intel/kms_frontbuffer_tracking.c +++ b/tests/intel/kms_frontbuffer_tracking.c @@ -3062,13 +3062,13 @@ static bool tiling_is_valid(int feature_flags, enum tiling_type tiling) switch (tiling) { case TILING_LINEAR: - return intel_gen(drm.devid) >= 9; + return intel_gen_legacy(drm.devid) >= 9; case TILING_X: return (intel_get_device_info(drm.devid)->display_ver > 29) ? false : true; case TILING_Y: return true; case TILING_4: - return intel_gen(drm.devid) >= 12; + return intel_gen_legacy(drm.devid) >= 12; default: igt_assert(false); return false; @@ -4475,7 +4475,7 @@ int igt_main_args("", long_options, help_str, opt_handler, NULL) igt_require(igt_draw_supports_method(drm.fd, t.method)); if (t.tiling == TILING_Y) { - igt_require(intel_gen(drm.devid) >= 9); + igt_require(intel_gen_legacy(drm.devid) >= 9); igt_require(!intel_get_device_info(drm.devid)->has_4tile); } diff --git a/tests/intel/kms_pipe_stress.c b/tests/intel/kms_pipe_stress.c index 85bff58d6..1ae32d5fd 100644 --- a/tests/intel/kms_pipe_stress.c +++ b/tests/intel/kms_pipe_stress.c @@ -822,7 +822,7 @@ static void prepare_test(struct data *data) create_framebuffers(data); - if (intel_gen(intel_get_drm_devid(data->drm_fd)) > 9) + if (intel_gen_legacy(intel_get_drm_devid(data->drm_fd)) > 9) start_gpu_threads(data); } @@ -830,7 +830,7 @@ static void finish_test(struct data *data) { int i; - if (intel_gen(intel_get_drm_devid(data->drm_fd)) > 9) + if (intel_gen_legacy(intel_get_drm_devid(data->drm_fd)) > 9) stop_gpu_threads(data); /* diff --git a/tests/intel/perf.c b/tests/intel/perf.c index b6b2cce50..76bc8a25b 100644 --- a/tests/intel/perf.c +++ b/tests/intel/perf.c @@ -720,7 +720,7 @@ oa_timestamp_delta(const uint32_t *report1, const uint32_t *report0, enum drm_i915_oa_format format) { - uint32_t width = intel_graphics_ver(devid) >= IP_VER(12, 55) ? 56 : 32; + uint32_t width = intel_graphics_ver_legacy(devid) >= IP_VER(12, 55) ? 56 : 32; return elapsed_delta(oa_timestamp(report1, format), oa_timestamp(report0, format), width); @@ -801,7 +801,7 @@ oa_report_ctx_is_valid(uint32_t *report) return false; /* TODO */ } else if (IS_GEN8(devid)) { return report[0] & (1ul << 25); - } else if (intel_gen(devid) >= 9) { + } else if (intel_gen_legacy(devid) >= 9) { return report[0] & (1ul << 16); } @@ -1045,7 +1045,7 @@ accumulate_reports(struct accumulator *accumulator, uint64_t *deltas = accumulator->deltas; int idx = 0; - if (intel_gen(devid) >= 8) { + if (intel_gen_legacy(devid) >= 8) { /* timestamp */ deltas[idx] += oa_timestamp_delta(end, start, accumulator->format); idx++; @@ -1092,7 +1092,7 @@ accumulator_print(struct accumulator *accumulator, const char *title) int idx = 0; igt_debug("%s:\n", title); - if (intel_gen(devid) >= 8) { + if (intel_gen_legacy(devid) >= 8) { igt_debug("\ttime delta = %"PRIu64"\n", deltas[idx++]); igt_debug("\tclock cycle delta = %"PRIu64"\n", deltas[idx++]); @@ -1731,7 +1731,7 @@ print_reports(uint32_t *oa_report0, uint32_t *oa_report1, int fmt) clock0, clock1, clock1 - clock0); } - if (intel_gen(devid) >= 8) { + if (intel_gen_legacy(devid) >= 8) { uint32_t slice_freq0, slice_freq1, unslice_freq0, unslice_freq1; const char *reason0 = gen8_read_report_reason(oa_report0); const char *reason1 = gen8_read_report_reason(oa_report1); @@ -1834,7 +1834,7 @@ print_report(uint32_t *report, int fmt) igt_debug("CLOCK: %"PRIu64"\n", clock); } - if (intel_gen(devid) >= 8) { + if (intel_gen_legacy(devid) >= 8) { uint32_t slice_freq, unslice_freq; const char *reason = gen8_read_report_reason(report); @@ -2019,7 +2019,7 @@ static void load_helper_init(void) /* MI_STORE_DATA can only use GTT address on gen4+/g33 and needs * snoopable mem on pre-gen6. Hence load-helper only works on gen6+, but * that's also all we care about for the rps testcase*/ - igt_assert(intel_gen(lh.devid) >= 6); + igt_assert(intel_gen_legacy(lh.devid) >= 6); lh.bops = buf_ops_create(drm_fd); @@ -2487,7 +2487,7 @@ test_blocking(uint64_t requested_oa_period, * periodic sampling and we don't want these extra reads to * cause the test to fail... */ - if (intel_gen(devid) >= 8) { + if (intel_gen_legacy(devid) >= 8) { for (int offset = 0; offset < ret; offset += header->size) { header = (void *)(buf + offset); @@ -2672,7 +2672,7 @@ test_polling(uint64_t requested_oa_period, * periodic sampling and we don't want these extra reads to * cause the test to fail... */ - if (intel_gen(devid) >= 8) { + if (intel_gen_legacy(devid) >= 8) { for (int offset = 0; offset < ret; offset += header->size) { header = (void *)(buf + offset); @@ -3659,7 +3659,7 @@ emit_stall_timestamp_and_rpc(struct intel_bb *ibb, intel_bb_add_intel_buf(ibb, dst, true); - if (intel_gen(devid) >= 8) + if (intel_gen_legacy(devid) >= 8) intel_bb_out(ibb, GFX_OP_PIPE_CONTROL(6)); else intel_bb_out(ibb, GFX_OP_PIPE_CONTROL(5)); @@ -4809,7 +4809,7 @@ make_valid_reduced_sseu_config(struct drm_i915_gem_context_param_sseu default_ss { struct drm_i915_gem_context_param_sseu sseu = default_sseu; - if (intel_gen(devid) == 11) { + if (intel_gen_legacy(devid) == 11) { /* * On Gen11 there are restrictions on what subslices * can be disabled, notably we're not able to enable @@ -5173,7 +5173,7 @@ test_create_destroy_userspace_config(void) config.mux_regs_ptr = to_user_pointer(mux_regs); /* Flex EU counters are only available on gen8+ */ - if (intel_gen(devid) >= 8) { + if (intel_gen_legacy(devid) >= 8) { for (i = 0; i < ARRAY_SIZE(flex_regs) / 2; i++) { flex_regs[i * 2] = 0xe458; /* EU_PERF_CNTL0 */ flex_regs[i * 2 + 1] = 0x0; @@ -5252,7 +5252,7 @@ test_whitelisted_registers_userspace_config(void) memset(&config, 0, sizeof(config)); memcpy(config.uuid, uuid, sizeof(config.uuid)); - if (intel_gen(devid) >= 12) { + if (intel_gen_legacy(devid) >= 12) { oa_start_trig1 = 0xd900; oa_start_trig8 = 0xd91c; oa_report_trig1 = 0xd920; @@ -5278,7 +5278,7 @@ test_whitelisted_registers_userspace_config(void) } config.boolean_regs_ptr = (uintptr_t) b_counters_regs; - if (intel_gen(devid) >= 8) { + if (intel_gen_legacy(devid) >= 8) { /* Flex EU registers, only from Gen8+. */ for (i = 0; i < ARRAY_SIZE(flex); i++) { flex_regs[config.n_flex_regs * 2] = flex[i]; @@ -5306,7 +5306,7 @@ test_whitelisted_registers_userspace_config(void) mux_regs[i++] = 0; } - if (intel_gen(devid) >= 8 && !IS_CHERRYVIEW(devid)) { + if (intel_gen_legacy(devid) >= 8 && !IS_CHERRYVIEW(devid)) { /* NOA_CONFIG */ mux_regs[i++] = 0xD04; mux_regs[i++] = 0; @@ -5327,7 +5327,7 @@ test_whitelisted_registers_userspace_config(void) mux_regs[i++] = 0; } - if (intel_gen(devid) <= 11) { + if (intel_gen_legacy(devid) <= 11) { /* HALF_SLICE_CHICKEN2 (shared with kernel workaround) */ mux_regs[i++] = 0xE180; mux_regs[i++] = 0; @@ -5951,7 +5951,7 @@ int igt_main() igt_describe("Test that reason field in OA reports is never 0 on Gen8+"); igt_subtest_with_dynamic("non-zero-reason") { /* Reason field is only available on Gen8+ */ - igt_require(intel_gen(devid) >= 8); + igt_require(intel_gen_legacy(devid) >= 8); __for_random_engine_in_each_group(perf_oa_groups, ctx, e) test_non_zero_reason(e); } @@ -6029,7 +6029,7 @@ int igt_main() test_short_reads(); igt_subtest("mi-rpc") { - igt_require(intel_gen(devid) < 12); + igt_require(intel_gen_legacy(devid) < 12); test_mi_rpc(); } @@ -6048,7 +6048,7 @@ int igt_main() * * For gen12 implement a separate test that uses only OAR */ - igt_require(intel_gen(devid) >= 8 && intel_gen(devid) < 12); + igt_require(intel_gen_legacy(devid) >= 8 && intel_gen_legacy(devid) < 12); igt_require_f(render_copy, "no render-copy function\n"); gen8_test_single_ctx_render_target_writes_a_counter(); } @@ -6056,7 +6056,7 @@ int igt_main() igt_subtest_group() { igt_describe("Test MI REPORT PERF COUNT for Gen 12"); igt_subtest_with_dynamic("gen12-mi-rpc") { - igt_require(intel_gen(devid) >= 12); + igt_require(intel_gen_legacy(devid) >= 12); igt_require(has_class_instance(drm_fd, I915_ENGINE_CLASS_RENDER, 0)); __for_each_render_engine(drm_fd, e) gen12_test_mi_rpc(e); @@ -6064,14 +6064,14 @@ int igt_main() igt_describe("Test OA TLB invalidate"); igt_subtest_with_dynamic("gen12-oa-tlb-invalidate") { - igt_require(intel_gen(devid) >= 12); + igt_require(intel_gen_legacy(devid) >= 12); __for_random_engine_in_each_group(perf_oa_groups, ctx, e) gen12_test_oa_tlb_invalidate(e); } igt_describe("Measure performance for a specific context using OAR in Gen 12"); igt_subtest_with_dynamic("gen12-unprivileged-single-ctx-counters") { - igt_require(intel_gen(devid) >= 12); + igt_require(intel_gen_legacy(devid) >= 12); igt_require(has_class_instance(drm_fd, I915_ENGINE_CLASS_RENDER, 0)); igt_require_f(render_copy, "no render-copy function\n"); __for_each_render_engine(drm_fd, e) @@ -6092,13 +6092,13 @@ int igt_main() */ igt_describe("Verify exclusivity of perf streams with sample oa option"); igt_subtest("gen12-group-exclusive-stream-sample-oa") { - igt_require(intel_gen(devid) >= 12); + igt_require(intel_gen_legacy(devid) >= 12); test_group_exclusive_stream(ctx, true); } igt_describe("Verify exclusivity of perf streams with ctx handle"); igt_subtest("gen12-group-exclusive-stream-ctx-handle") { - igt_require(intel_gen(devid) >= 12); + igt_require(intel_gen_legacy(devid) >= 12); test_group_exclusive_stream(ctx, false); } @@ -6121,7 +6121,7 @@ int igt_main() igt_describe("Verify invalid SSEU opening parameters"); igt_subtest_with_dynamic("global-sseu-config-invalid") { igt_require(i915_perf_revision(drm_fd) >= 4); - igt_require(intel_graphics_ver(devid) < IP_VER(12, 50)); + igt_require(intel_graphics_ver_legacy(devid) < IP_VER(12, 50)); __for_random_engine_in_each_group(perf_oa_groups, ctx, e) test_global_sseu_config_invalid(ctx, e); @@ -6130,7 +6130,7 @@ int igt_main() igt_describe("Verify specifying SSEU opening parameters"); igt_subtest_with_dynamic("global-sseu-config") { igt_require(i915_perf_revision(drm_fd) >= 4); - igt_require(intel_graphics_ver(devid) < IP_VER(12, 50)); + igt_require(intel_graphics_ver_legacy(devid) < IP_VER(12, 50)); __for_random_engine_in_each_group(perf_oa_groups, ctx, e) test_global_sseu_config(ctx, e); diff --git a/tests/intel/perf_pmu.c b/tests/intel/perf_pmu.c index 57113981d..605359bd8 100644 --- a/tests/intel/perf_pmu.c +++ b/tests/intel/perf_pmu.c @@ -237,7 +237,7 @@ init(int gem_fd, const intel_ctx_t *ctx, err = errno; exists = gem_context_has_engine(gem_fd, ctx->id, e->flags); - if (intel_gen(intel_get_drm_devid(gem_fd)) < 6 && + if (intel_gen_legacy(intel_get_drm_devid(gem_fd)) < 6 && sample == I915_SAMPLE_SEMA) exists = false; @@ -742,7 +742,7 @@ sema_wait(int gem_fd, const intel_ctx_t *ctx, uint64_t ahnd = get_reloc_ahnd(gem_fd, ctx->id); uint64_t obj_offset, bb_offset; - igt_require(intel_gen(intel_get_drm_devid(gem_fd)) >= 8); + igt_require(intel_gen_legacy(intel_get_drm_devid(gem_fd)) >= 8); /** * Setup up a batchbuffer with a polling semaphore wait command which @@ -977,7 +977,7 @@ sema_busy(int gem_fd, const intel_ctx_t *ctx, int fd[2]; uint64_t ahnd = get_reloc_ahnd(gem_fd, ctx->id); - igt_require(intel_gen(intel_get_drm_devid(gem_fd)) >= 8); + igt_require(intel_gen_legacy(intel_get_drm_devid(gem_fd)) >= 8); fd[0] = open_group(gem_fd, I915_PMU_ENGINE_SEMA(e->class, e->instance), -1); @@ -1124,7 +1124,7 @@ event_wait(int gem_fd, const intel_ctx_t *ctx, int fd; devid = intel_get_drm_devid(gem_fd); - igt_require(intel_gen(devid) >= 7); + igt_require(intel_gen_legacy(devid) >= 7); igt_require(has_secure_batches(gem_fd)); igt_skip_on(IS_VALLEYVIEW(devid) || IS_CHERRYVIEW(devid)); diff --git a/tests/intel/sysfs_preempt_timeout.c b/tests/intel/sysfs_preempt_timeout.c index 1971b85c2..04d2b8efc 100644 --- a/tests/intel/sysfs_preempt_timeout.c +++ b/tests/intel/sysfs_preempt_timeout.c @@ -286,7 +286,7 @@ static void test_off(int i915, int engine) * GuC submission, but we are not really losing coverage as this test * isn't not a UMD use case. */ - igt_require(intel_gen(intel_get_drm_devid(i915)) < 12); + igt_require(intel_gen_legacy(intel_get_drm_devid(i915)) < 12); igt_assert(igt_sysfs_scanf(engine, "class", "%u", &class) == 1); igt_assert(igt_sysfs_scanf(engine, "instance", "%u", &inst) == 1); diff --git a/tests/intel/sysfs_timeslice_duration.c b/tests/intel/sysfs_timeslice_duration.c index f10a86777..4c19b1d33 100644 --- a/tests/intel/sysfs_timeslice_duration.c +++ b/tests/intel/sysfs_timeslice_duration.c @@ -208,7 +208,7 @@ static uint64_t __test_duration(int i915, int engine, unsigned int timeout) .buffer_count = ARRAY_SIZE(obj), .buffers_ptr = to_user_pointer(obj), }; - const unsigned int gen = intel_gen(intel_get_drm_devid(i915)); + const unsigned int gen = intel_gen_legacy(intel_get_drm_devid(i915)); double duration = clockrate(i915); unsigned int class, inst, mmio; uint32_t *cs, *map; diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c index a21922ee5..914144270 100644 --- a/tests/intel/xe_ccs.c +++ b/tests/intel/xe_ccs.c @@ -128,7 +128,7 @@ static void surf_copy(int xe, int result; igt_assert(mid->compression); - if (intel_gen(devid) >= 20 && mid->compression) { + if (intel_gen_legacy(devid) >= 20 && mid->compression) { comp_pat_index = intel_get_pat_idx_uc_comp(xe); cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; } @@ -177,7 +177,7 @@ static void surf_copy(int xe, if (IS_GEN(devid, 12) && is_intel_dgfx(xe)) { igt_assert(!strcmp(orig, newsum)); igt_assert(!strcmp(orig2, newsum2)); - } else if (intel_gen(devid) >= 20) { + } else if (intel_gen_legacy(devid) >= 20) { if (is_intel_dgfx(xe)) { /* buffer object would become * uncompressed in xe2+ dgfx @@ -227,7 +227,7 @@ static void surf_copy(int xe, * uncompressed in xe2+ dgfx, and therefore retrieve the * ccs by copying 0 to ccsmap */ - if (suspend_resume && intel_gen(devid) >= 20 && is_intel_dgfx(xe)) + if (suspend_resume && intel_gen_legacy(devid) >= 20 && is_intel_dgfx(xe)) memset(ccsmap, 0, ccssize); else /* retrieve back ccs */ @@ -353,7 +353,7 @@ static void block_copy(int xe, uint64_t bb_size = xe_bb_size(xe, SZ_4K); uint64_t ahnd = intel_allocator_open(xe, ctx->vm, INTEL_ALLOCATOR_RELOC); uint32_t run_id = mid_tiling; - uint32_t mid_region = (intel_gen(intel_get_drm_devid(xe)) >= 20 && + uint32_t mid_region = (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && !xe_has_vram(xe)) ? region1 : region2; uint32_t bb; enum blt_compression mid_compression = config->compression; @@ -441,7 +441,7 @@ static void block_copy(int xe, if (config->inplace) { uint8_t pat_index = DEFAULT_PAT_INDEX; - if (intel_gen(intel_get_drm_devid(xe)) >= 20 && config->compression) + if (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && config->compression) pat_index = intel_get_pat_idx_uc_comp(xe); blt_set_object(&blt.dst, mid->handle, dst->size, mid->region, 0, @@ -488,7 +488,7 @@ static void block_multicopy(int xe, uint64_t bb_size = xe_bb_size(xe, SZ_4K); uint64_t ahnd = intel_allocator_open(xe, ctx->vm, INTEL_ALLOCATOR_RELOC); uint32_t run_id = mid_tiling; - uint32_t mid_region = (intel_gen(intel_get_drm_devid(xe)) >= 20 && + uint32_t mid_region = (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && !xe_has_vram(xe)) ? region1 : region2; uint32_t bb; enum blt_compression mid_compression = config->compression; @@ -530,7 +530,7 @@ static void block_multicopy(int xe, if (config->inplace) { uint8_t pat_index = DEFAULT_PAT_INDEX; - if (intel_gen(intel_get_drm_devid(xe)) >= 20 && config->compression) + if (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && config->compression) pat_index = intel_get_pat_idx_uc_comp(xe); blt_set_object(&blt3.dst, mid->handle, dst->size, mid->region, @@ -715,7 +715,7 @@ static void block_copy_test(int xe, int tiling, width, height; - if (intel_gen(dev_id) >= 20 && config->compression) + if (intel_gen_legacy(dev_id) >= 20 && config->compression) igt_require(HAS_FLATCCS(dev_id)); if (config->compression && !blt_block_copy_supports_compression(xe)) diff --git a/tests/intel/xe_compute.c b/tests/intel/xe_compute.c index 310093fc5..7b6c39c77 100644 --- a/tests/intel/xe_compute.c +++ b/tests/intel/xe_compute.c @@ -232,7 +232,7 @@ test_compute_kernel_loop(uint64_t loop_duration) double elapse_time, lower_bound, upper_bound; fd = drm_open_driver(DRIVER_XE); - ip_ver = intel_graphics_ver(intel_get_drm_devid(fd)); + ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); kernels = intel_compute_square_kernels; while (kernels->kernel) { @@ -335,7 +335,7 @@ igt_check_supported_pipeline(void) const struct intel_compute_kernels *kernels; fd = drm_open_driver(DRIVER_XE); - ip_ver = intel_graphics_ver(intel_get_drm_devid(fd)); + ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); kernels = intel_compute_square_kernels; drm_close_driver(fd); @@ -432,7 +432,7 @@ test_eu_busy(uint64_t duration_sec) fd = drm_open_driver(DRIVER_XE); - ip_ver = intel_graphics_ver(intel_get_drm_devid(fd)); + ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); kernels = intel_compute_square_kernels; while (kernels->kernel) { if (ip_ver == kernels->ip_ver) @@ -518,7 +518,7 @@ int igt_main() igt_fixture() { xe = drm_open_driver(DRIVER_XE); sriov_enabled = is_sriov_mode(xe); - ip_ver = intel_graphics_ver(intel_get_drm_devid(xe)); + ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(xe)); igt_store_ccs_mode(ccs_mode, ARRAY_SIZE(ccs_mode)); } diff --git a/tests/intel/xe_debugfs.c b/tests/intel/xe_debugfs.c index 7fc4a3cbe..facb55854 100644 --- a/tests/intel/xe_debugfs.c +++ b/tests/intel/xe_debugfs.c @@ -329,7 +329,7 @@ static void test_info_read(struct xe_device *xe_dev) failed = true; } - if (intel_gen(devid) < 20) { + if (intel_gen_legacy(devid) < 20) { val = -1; switch (config->info[DRM_XE_QUERY_CONFIG_VA_BITS]) { diff --git a/tests/intel/xe_eudebug_online.c b/tests/intel/xe_eudebug_online.c index ff6c5ff19..f64b12b3f 100644 --- a/tests/intel/xe_eudebug_online.c +++ b/tests/intel/xe_eudebug_online.c @@ -402,7 +402,7 @@ static bool intel_gen_needs_resume_wa(int fd) { const uint32_t id = intel_get_drm_devid(fd); - return intel_gen(id) == 12 && intel_graphics_ver(id) < IP_VER(12, 55); + return intel_gen_legacy(id) == 12 && intel_graphics_ver_legacy(id) < IP_VER(12, 55); } static uint64_t eu_ctl_resume(int fd, int debugfd, uint64_t client, @@ -1231,7 +1231,7 @@ static bool intel_gen_has_lockstep_eus(int fd) * excepted into SIP. In this level, the hardware has only one attention * thread bit for units. PVC is the first one without lockstepping. */ - return !(intel_graphics_ver(id) == IP_VER(12, 60) || intel_gen(id) >= 20); + return !(intel_graphics_ver_legacy(id) == IP_VER(12, 60) || intel_gen_legacy(id) >= 20); } static int query_attention_bitmask_size(int fd, int gt) diff --git a/tests/intel/xe_exec_multi_queue.c b/tests/intel/xe_exec_multi_queue.c index 9a0274496..1d416efc9 100644 --- a/tests/intel/xe_exec_multi_queue.c +++ b/tests/intel/xe_exec_multi_queue.c @@ -1047,7 +1047,7 @@ int igt_main() igt_fixture() { fd = drm_open_driver(DRIVER_XE); - igt_require(intel_graphics_ver(intel_get_drm_devid(fd)) >= IP_VER(35, 0)); + igt_require(intel_graphics_ver_legacy(intel_get_drm_devid(fd)) >= IP_VER(35, 0)); } igt_subtest_f("sanity") diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c index 6935fa8aa..498ab42b7 100644 --- a/tests/intel/xe_exec_store.c +++ b/tests/intel/xe_exec_store.c @@ -69,7 +69,7 @@ static void cond_batch(struct data *data, uint64_t addr, int value, data->batch[b++] = sdi_addr; data->batch[b++] = sdi_addr >> 32; - if (intel_graphics_ver(dev_id) >= IP_VER(20, 0)) + if (intel_graphics_ver_legacy(dev_id) >= IP_VER(20, 0)) data->batch[b++] = MI_MEM_FENCE | MI_WRITE_FENCE; data->batch[b++] = MI_CONDITIONAL_BATCH_BUFFER_END | MI_DO_COMPARE | 5 << 12 | 2; diff --git a/tests/intel/xe_fault_injection.c b/tests/intel/xe_fault_injection.c index f3bfc4b40..8adc5c15a 100644 --- a/tests/intel/xe_fault_injection.c +++ b/tests/intel/xe_fault_injection.c @@ -491,7 +491,7 @@ oa_add_config_fail(int fd, int sysfs, int devid, { char path[512]; uint64_t config_id; -#define SAMPLE_MUX_REG (intel_graphics_ver(devid) >= IP_VER(20, 0) ? \ +#define SAMPLE_MUX_REG (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0) ? \ 0x13000 /* PES* */ : 0x9888 /* NOA_WRITE */) uint32_t mux_regs[] = { SAMPLE_MUX_REG, 0x0 }; diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c index 02de63a3a..5c112351f 100644 --- a/tests/intel/xe_intel_bb.c +++ b/tests/intel/xe_intel_bb.c @@ -710,7 +710,7 @@ static void do_intel_bb_blit(struct buf_ops *bops, int loops, uint32_t tiling) int i, fails = 0, xe = buf_ops_get_fd(bops); /* We'll fix it for gen2/3 later. */ - igt_require(intel_gen(intel_get_drm_devid(xe)) > 3); + igt_require(intel_gen_legacy(intel_get_drm_devid(xe)) > 3); for (i = 0; i < loops; i++) fails += __do_intel_bb_blit(bops, tiling); @@ -881,7 +881,7 @@ static int render(struct buf_ops *bops, uint32_t tiling, uint32_t devid = intel_get_drm_devid(xe); igt_render_copyfunc_t render_copy = NULL; - igt_debug("%s() gen: %d\n", __func__, intel_gen(devid)); + igt_debug("%s() gen: %d\n", __func__, intel_gen_legacy(devid)); ibb = intel_bb_create(xe, PAGE_SIZE); @@ -1041,7 +1041,7 @@ int igt_main_args("dpib", NULL, help_str, opt_handler, NULL) do_intel_bb_blit(bops, 3, I915_TILING_X); igt_subtest("intel-bb-blit-y") { - igt_require(intel_gen(intel_get_drm_devid(xe)) >= 6); + igt_require(intel_gen_legacy(intel_get_drm_devid(xe)) >= 6); do_intel_bb_blit(bops, 3, I915_TILING_Y); } diff --git a/tests/intel/xe_multigpu_svm.c b/tests/intel/xe_multigpu_svm.c index 2ae0b950f..ab800476e 100644 --- a/tests/intel/xe_multigpu_svm.c +++ b/tests/intel/xe_multigpu_svm.c @@ -412,7 +412,7 @@ static void batch_init(int fd, uint32_t vm, uint64_t src_addr, cmd[i++] = upper_32_bits(src_addr); cmd[i++] = lower_32_bits(dst_addr); cmd[i++] = upper_32_bits(dst_addr); - if (intel_graphics_ver(dev_id) >= IP_VER(20, 0)) { + if (intel_graphics_ver_legacy(dev_id) >= IP_VER(20, 0)) { cmd[i++] = mocs_index << XE2_MEM_COPY_SRC_MOCS_SHIFT | mocs_index; } else { cmd[i++] = mocs_index << GEN12_MEM_COPY_MOCS_SHIFT | mocs_index; diff --git a/tests/intel/xe_oa.c b/tests/intel/xe_oa.c index 927f3f4f2..051f150c2 100644 --- a/tests/intel/xe_oa.c +++ b/tests/intel/xe_oa.c @@ -476,7 +476,7 @@ get_oa_format(enum intel_xe_oa_format_name format) return dg2_oa_formats[format]; else if (IS_METEORLAKE(devid)) return mtl_oa_formats[format]; - else if (intel_graphics_ver(devid) >= IP_VER(20, 0)) + else if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) return lnl_oa_formats[format]; else return gen12_oa_formats[format]; @@ -797,7 +797,7 @@ oa_timestamp_delta(const uint32_t *report1, const uint32_t *report0, enum intel_xe_oa_format_name format) { - uint32_t width = intel_graphics_ver(devid) >= IP_VER(12, 55) ? 56 : 32; + uint32_t width = intel_graphics_ver_legacy(devid) >= IP_VER(12, 55) ? 56 : 32; return elapsed_delta(oa_timestamp(report1, format), oa_timestamp(report0, format), width); @@ -1136,7 +1136,7 @@ static void pec_sanity_check(const u32 *report0, const u32 *report1, static void pec_sanity_check_reports(const u32 *report0, const u32 *report1, struct intel_xe_perf_metric_set *set) { - if (igt_run_in_simulation() || intel_graphics_ver(devid) < IP_VER(20, 0)) { + if (igt_run_in_simulation() || intel_graphics_ver_legacy(devid) < IP_VER(20, 0)) { igt_debug("%s: Skip checking PEC reports in simulation or Xe1\n", __func__); return; } @@ -3407,7 +3407,7 @@ static void single_ctx_helper(const struct drm_xe_oa_unit *oau) } /* FIXME: can we deduce the presence of A26 from get_oa_format(fmt)? */ - if (intel_graphics_ver(devid) >= IP_VER(20, 0)) + if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) goto skip_check; /* Check that this test passed. The test measures the number of 2x2 @@ -3586,7 +3586,7 @@ static bool has_xe_oa_userspace_config(int fd) return errno != EINVAL; } -#define SAMPLE_MUX_REG (intel_graphics_ver(devid) >= IP_VER(20, 0) ? \ +#define SAMPLE_MUX_REG (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0) ? \ 0x13000 /* PES* */ : 0x9888 /* NOA_WRITE */) /** @@ -3841,7 +3841,7 @@ test_whitelisted_registers_userspace_config(void) /* NOA_CONFIG */ /* Prior to Xe2 */ - if (intel_graphics_ver(devid) < IP_VER(20, 0)) { + if (intel_graphics_ver_legacy(devid) < IP_VER(20, 0)) { regs[config.n_regs * 2] = 0xD04; regs[config.n_regs * 2 + 1] = 0; config.n_regs++; @@ -3850,7 +3850,7 @@ test_whitelisted_registers_userspace_config(void) config.n_regs++; } /* Prior to MTLx */ - if (intel_graphics_ver(devid) < IP_VER(12, 70)) { + if (intel_graphics_ver_legacy(devid) < IP_VER(12, 70)) { /* WAIT_FOR_RC6_EXIT */ regs[config.n_regs * 2] = 0x20CC; regs[config.n_regs * 2 + 1] = 0; @@ -3890,7 +3890,7 @@ struct test_perf { #define HAS_OA_MMIO_TRIGGER(__d) \ (IS_DG2(__d) || IS_PONTEVECCHIO(__d) || IS_METEORLAKE(__d) || \ - intel_graphics_ver(devid) >= IP_VER(20, 0)) + intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) static void perf_init_whitelist(void) { @@ -5087,7 +5087,7 @@ int igt_main_args("b:t", long_options, help_str, opt_handler, NULL) sysfs = igt_sysfs_open(drm_fd); /* Currently only run on Xe2+ */ - igt_require(intel_graphics_ver(devid) >= IP_VER(20, 0)); + igt_require(intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)); igt_require(init_sys_info()); @@ -5193,8 +5193,8 @@ int igt_main_args("b:t", long_options, help_str, opt_handler, NULL) test_mi_rpc(oau); igt_subtest_with_dynamic("oa-tlb-invalidate") { - igt_require(intel_graphics_ver(devid) <= IP_VER(12, 70) && - intel_graphics_ver(devid) != IP_VER(12, 60)); + igt_require(intel_graphics_ver_legacy(devid) <= IP_VER(12, 70) && + intel_graphics_ver_legacy(devid) != IP_VER(12, 60)); __for_oa_unit_by_type(DRM_XE_OA_UNIT_TYPE_OAG) test_oa_tlb_invalidate(oau); } diff --git a/tests/intel/xe_pat.c b/tests/intel/xe_pat.c index 24339f688..96302ad3a 100644 --- a/tests/intel/xe_pat.c +++ b/tests/intel/xe_pat.c @@ -126,7 +126,7 @@ static void pat_sanity(int fd) parsed = xe_fetch_pat_sw_config(fd, &pat_sw_config); - if (intel_graphics_ver(dev_id) >= IP_VER(20, 0)) { + if (intel_graphics_ver_legacy(dev_id) >= IP_VER(20, 0)) { for (int i = 0; i < parsed; i++) { uint32_t pat = pat_sw_config.entries[i].pat; if (pat_sw_config.entries[i].rsvd) @@ -1328,7 +1328,7 @@ int igt_main_args("V", NULL, help_str, opt_handler, NULL) bo_comp_disable_bind(fd); igt_subtest_with_dynamic("pat-index-xelp") { - igt_require(intel_graphics_ver(dev_id) <= IP_VER(12, 55)); + igt_require(intel_graphics_ver_legacy(dev_id) <= IP_VER(12, 55)); subtest_pat_index_modes_with_regions(fd, xelp_pat_index_modes, ARRAY_SIZE(xelp_pat_index_modes)); } @@ -1349,7 +1349,7 @@ int igt_main_args("V", NULL, help_str, opt_handler, NULL) igt_require(intel_get_device_info(dev_id)->graphics_ver >= 20); igt_assert(HAS_FLATCCS(dev_id)); - if (intel_graphics_ver(dev_id) == IP_VER(20, 1)) + if (intel_graphics_ver_legacy(dev_id) == IP_VER(20, 1)) subtest_pat_index_modes_with_regions(fd, bmg_g21_pat_index_modes, ARRAY_SIZE(bmg_g21_pat_index_modes)); else diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c index b6db50b20..318a9994a 100644 --- a/tests/intel/xe_query.c +++ b/tests/intel/xe_query.c @@ -380,7 +380,7 @@ test_query_gt_topology(int fd) } /* sanity check EU type */ - if (IS_PONTEVECCHIO(dev_id) || intel_gen(dev_id) >= 20) { + if (IS_PONTEVECCHIO(dev_id) || intel_gen_legacy(dev_id) >= 20) { igt_assert(topo_types & (1 << DRM_XE_TOPO_SIMD16_EU_PER_DSS)); igt_assert_eq(topo_types & (1 << DRM_XE_TOPO_EU_PER_DSS), 0); } else { diff --git a/tests/intel/xe_render_copy.c b/tests/intel/xe_render_copy.c index 861daec97..0a6ae9ca2 100644 --- a/tests/intel/xe_render_copy.c +++ b/tests/intel/xe_render_copy.c @@ -136,7 +136,7 @@ static int compare_bufs(struct intel_buf *buf1, struct intel_buf *buf2, static bool buf_is_aux_compressed(struct buf_ops *bops, struct intel_buf *buf) { int xe = buf_ops_get_fd(bops); - unsigned int gen = intel_gen(buf_ops_get_devid(bops)); + unsigned int gen = intel_gen_legacy(buf_ops_get_devid(bops)); uint32_t ccs_size; uint8_t *ptr; bool is_compressed = false; diff --git a/tests/prime_vgem.c b/tests/prime_vgem.c index 79963648f..35363b059 100644 --- a/tests/prime_vgem.c +++ b/tests/prime_vgem.c @@ -609,7 +609,7 @@ static void work(int i915, uint64_t ahnd, uint64_t scratch_offset, int dmabuf, { const int SCRATCH = 0; const int BATCH = 1; - const int gen = intel_gen(intel_get_drm_devid(i915)); + const int gen = intel_gen_legacy(intel_get_drm_devid(i915)); struct drm_i915_gem_exec_object2 obj[2]; struct drm_i915_gem_relocation_entry store[1024+1]; struct drm_i915_gem_execbuffer2 execbuf; diff --git a/tools/intel_dp_compliance.c b/tools/intel_dp_compliance.c index 31572f6c3..fff2f0163 100644 --- a/tools/intel_dp_compliance.c +++ b/tools/intel_dp_compliance.c @@ -844,7 +844,7 @@ int main(int argc, char **argv) set_termio_mode(); drm_fd = drm_open_driver(DRIVER_ANY); - gen = intel_gen(intel_get_drm_devid(drm_fd)); + gen = intel_gen_legacy(intel_get_drm_devid(drm_fd)); kmstest_set_vt_graphics_mode(); setup_debugfs_files(); diff --git a/tools/intel_error_decode.c b/tools/intel_error_decode.c index 451608826..7ad63ad31 100644 --- a/tools/intel_error_decode.c +++ b/tools/intel_error_decode.c @@ -311,7 +311,7 @@ static void print_bdw_error(unsigned int reg, unsigned int devid) static void print_error(unsigned int reg, unsigned int devid) { - switch (intel_gen(devid)) { + switch (intel_gen_legacy(devid)) { case 8: return print_bdw_error(reg, devid); case 7: return print_ivb_error(reg, devid); case 6: return print_snb_error(reg); @@ -398,7 +398,7 @@ print_fault_reg(unsigned devid, uint32_t reg) const char *engine[] = { "GFX", "MFX0", "MFX1", "VEBX", "BLT", "Unknown", "Unknown", "Unknown" }; - if (intel_gen(devid) < 7) + if (intel_gen_legacy(devid) < 7) return; if (reg & (1 << 0)) @@ -406,13 +406,13 @@ print_fault_reg(unsigned devid, uint32_t reg) else return; - if (intel_gen(devid) < 8) + if (intel_gen_legacy(devid) < 8) printf(" %s Fault (%s)\n", gen7_types[reg >> 1 & 0x3], reg & (1 << 11) ? "GGTT" : "PPGTT"); else printf(" Invalid %s Fault\n", gen8_types[reg >> 1 & 0x3]); - if (intel_gen(devid) < 8) + if (intel_gen_legacy(devid) < 8) printf(" Address 0x%08x\n", reg & ~((1 << 12)-1)); else printf(" Engine %s\n", engine[reg >> 12 & 0x7]); @@ -425,7 +425,7 @@ print_fault_data(unsigned devid, uint32_t data1, uint32_t data0) { uint64_t address; - if (intel_gen(devid) < 8) + if (intel_gen_legacy(devid) < 8) return; address = ((uint64_t)(data0) << 12) | ((uint64_t)data1 & 0xf) << 44; @@ -691,7 +691,7 @@ read_data_file(FILE *file) if (matched == 1) { devid = reg; printf("Detected GEN%i chipset\n", - intel_gen(devid)); + intel_gen_legacy(devid)); decode_ctx = intel_decode_context_alloc(devid); } diff --git a/tools/intel_gtt.c b/tools/intel_gtt.c index 658336d99..0b9ad278e 100644 --- a/tools/intel_gtt.c +++ b/tools/intel_gtt.c @@ -57,7 +57,7 @@ static gen8_gtt_pte_t gen8_gtt_pte(const unsigned i) static uint64_t ingtt(const unsigned offset) { - if (intel_gen(devid) < 8) + if (intel_gen_legacy(devid) < 8) return gen6_gtt_pte(offset/KB(4)); return gen8_gtt_pte(offset/KB(4)); @@ -68,10 +68,10 @@ static uint64_t get_phys(uint32_t pt_offset) uint64_t pae = 0; uint64_t phys = ingtt(pt_offset); - if (intel_gen(devid) < 4 && !IS_G33(devid)) + if (intel_gen_legacy(devid) < 4 && !IS_G33(devid)) return phys & ~0xfff; - switch (intel_gen(devid)) { + switch (intel_gen_legacy(devid)) { case 3: case 4: case 5: @@ -90,7 +90,7 @@ static uint64_t get_phys(uint32_t pt_offset) case 11: case 12: case 20: - if (intel_graphics_ver(devid) >= IP_VER(12, 70)) + if (intel_graphics_ver_legacy(devid) >= IP_VER(12, 70)) phys = phys & 0x3ffffffff000; else phys = phys & 0x7ffffff000; @@ -105,7 +105,7 @@ static uint64_t get_phys(uint32_t pt_offset) static int get_pte_size(void) { - return intel_gen(devid) < 8 ? 4 : 8; + return intel_gen_legacy(devid) < 8 ? 4 : 8; } static void pte_dump(int size, uint32_t offset) { @@ -125,7 +125,7 @@ static void pte_dump(int size, uint32_t offset) { printf("----------------------------------------------------------\n"); for (i = 0; i < entries; i += 4) { - if (intel_gen(devid) < 8) { + if (intel_gen_legacy(devid) < 8) { printf(" 0x%08x | 0x%08x 0x%08x 0x%08x 0x%08x\n", KB(4 * i), gen6_gtt_pte(i + 0), diff --git a/tools/intel_l3_parity.c b/tools/intel_l3_parity.c index 947117d38..aea74c8ed 100644 --- a/tools/intel_l3_parity.c +++ b/tools/intel_l3_parity.c @@ -190,7 +190,7 @@ int main(int argc, char *argv[]) device = drm_open_driver(DRIVER_INTEL); devid = intel_get_drm_devid(device); - if (intel_gen(devid) < 7 || IS_VALLEYVIEW(devid)) + if (intel_gen_legacy(devid) < 7 || IS_VALLEYVIEW(devid)) exit(77); assert(intel_register_access_init(&mmio_data, diff --git a/tools/intel_reg.c b/tools/intel_reg.c index 49afe91c0..2cddabb55 100644 --- a/tools/intel_reg.c +++ b/tools/intel_reg.c @@ -293,7 +293,7 @@ static const struct intel_execution_engine2 *find_engine(const char *name) static int register_srm(struct config *config, struct reg *reg, uint32_t *val_in) { - const int gen = intel_gen(config->devid); + const int gen = intel_gen_legacy(config->devid); const bool r64b = gen >= 8; const uint32_t ctx = 0; struct drm_i915_gem_exec_object2 obj[2]; @@ -386,7 +386,7 @@ static int register_srm(struct config *config, struct reg *reg, static uint32_t mcbar_offset(uint32_t devid) { - return intel_gen(devid) >= 6 ? 0x140000 : 0x10000; + return intel_gen_legacy(devid) >= 6 ? 0x140000 : 0x10000; } static uint8_t vga_read(uint16_t reg, bool mmio) @@ -1114,7 +1114,7 @@ static int get_reg_spec_file(char *buf, size_t buflen, const char *dir, * Third, try file named after gen, e.g. "gen7" for Haswell (which is * technically 7.5 but this is how it works). */ - snprintf(buf, buflen, "%s/gen%d", dir, intel_gen(devid)); + snprintf(buf, buflen, "%s/gen%d", dir, intel_gen_legacy(devid)); if (!access(buf, F_OK)) return 0; diff --git a/tools/intel_reg_decode.c b/tools/intel_reg_decode.c index 5a632e09d..dd58a8dc3 100644 --- a/tools/intel_reg_decode.c +++ b/tools/intel_reg_decode.c @@ -2627,12 +2627,12 @@ static const struct reg_debug gen6_rp_debug_regs[] = { static bool is_hsw_plus(uint32_t devid, uint32_t pch) { - return IS_HASWELL(devid) || intel_gen(devid) >= 8; + return IS_HASWELL(devid) || intel_gen_legacy(devid) >= 8; } static bool is_gen6_plus(uint32_t devid, uint32_t pch) { - return intel_gen(devid) >= 6; + return intel_gen_legacy(devid) >= 6; } static bool is_gen56ivb(uint32_t devid, uint32_t pch) diff --git a/tools/intel_tiling_detect.c b/tools/intel_tiling_detect.c index 951e2eecd..a4882966b 100644 --- a/tools/intel_tiling_detect.c +++ b/tools/intel_tiling_detect.c @@ -222,7 +222,7 @@ static void render(int fd, uint32_t width, uint32_t height, uint32_t tiling) bops = buf_ops_create(fd); - igt_debug("%s() gen: %d\n", __func__, intel_gen(devid)); + igt_debug("%s() gen: %d\n", __func__, intel_gen_legacy(devid)); ibb = intel_bb_create(fd, SZ_4K); diff --git a/tools/intel_vbt_decode.c b/tools/intel_vbt_decode.c index d4aada743..4a0e12212 100644 --- a/tools/intel_vbt_decode.c +++ b/tools/intel_vbt_decode.c @@ -644,7 +644,7 @@ static const char *_to_str(const char * const strings[], static int decode_ssc_freq(struct context *context, bool alternate) { - switch (intel_gen(context->devid)) { + switch (intel_gen_legacy(context->devid)) { case 2: return alternate ? 66 : 48; case 3: -- 2.43.0 ^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH v2 1/3] lib/intel: suffix PCI ID based gen/graphics_ver with _legacy 2026-01-22 7:15 ` [PATCH v2 1/3] lib/intel: suffix PCI ID based gen/graphics_ver with _legacy Xin Wang @ 2026-02-04 18:30 ` Matt Roper 0 siblings, 0 replies; 12+ messages in thread From: Matt Roper @ 2026-02-04 18:30 UTC (permalink / raw) To: Xin Wang Cc: igt-dev, Kamil Konieczny, Zbigniew Kempczyński, Ravi Kumar V On Thu, Jan 22, 2026 at 07:15:28AM +0000, Xin Wang wrote: > Rename the PCI ID translation helpers to intel_gen_legacy() and > intel_graphics_ver_legacy() across callers. > This is a preparatory step for introducing fd-based APIs. Was this [mostly] generated in an automated manner via coccinelle or sed? If so, it would be helpful to put the semantic patch or sed command in the commit message so that the patch is easy for people to re-generate (especially since the maintainers might want to re-generate it right before applying if anything else landed in the meantime that still uses the old names). As a general note, I'd avoid using the suffix "legacy" since it's not really clear what that means to someone who isn't reading these specific mailing list threads. A more descriptive suffix like "_from_pciid" might be a better choice. > > Cc: Kamil Konieczny <kamil.konieczny@linux.intel.com> > Cc: Matt Roper <matthew.d.roper@intel.com> > Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com> > Cc: Ravi Kumar V <ravi.kumar.vodapalli@intel.com> > Signed-off-by: Xin Wang <x.wang@intel.com> > --- ...snip... > > /** > - * intel_gen: > + * intel_gen_legacy: > * @devid: pci device id > * > * Computes the Intel GFX generation for the given device id. > @@ -747,12 +747,12 @@ const struct intel_cmds_info *intel_get_cmds_info(uint16_t devid) > * Returns: > * The GFX generation on successful lookup, -1u on failure. I'd add some kind of disclaimer to the function documentation for these indicating that this form is deprecated and should only be used in places where there's no access to the device itself (e.g., DRM driver isn't loaded, or a tool is being run in a different environment). Matt -- Matt Roper Graphics Software Engineer Linux GPU Platform Enablement Intel Corporation ^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v2 2/3] lib/intel: add fd-based intel_gen/intel_graphics_ver via Xe query 2026-01-22 7:15 [PATCH v2 0/3] lib/intel: switch graphics/IP version queries to fd-based APIs Xin Wang 2026-01-22 7:15 ` [PATCH v2 1/3] lib/intel: suffix PCI ID based gen/graphics_ver with _legacy Xin Wang @ 2026-01-22 7:15 ` Xin Wang 2026-02-05 9:09 ` Jani Nikula 2026-01-22 7:15 ` [PATCH v2 3/3] intel/xe: use fd-based graphics/IP version helpers Xin Wang ` (3 subsequent siblings) 5 siblings, 1 reply; 12+ messages in thread From: Xin Wang @ 2026-01-22 7:15 UTC (permalink / raw) To: igt-dev Cc: Xin Wang, Kamil Konieczny, Matt Roper, Zbigniew Kempczyński, Ravi Kumar V Add fd‑based intel_gen() and intel_graphics_ver() that prefer Xe query ip_ver_{major,minor} from the main GT, falling back to PCI ID legacy mapping. Export xe_get_main_gt() to access main GT data from the cached query info. Cc: Kamil Konieczny <kamil.konieczny@linux.intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com> Cc: Ravi Kumar V <ravi.kumar.vodapalli@intel.com> Signed-off-by: Xin Wang <x.wang@intel.com> --- lib/intel_chipset.c | 51 +++++++++++++++++++++++++++++++++++++++++++++ lib/intel_chipset.h | 2 ++ lib/xe/xe_query.c | 25 ++++++++++++++++++++++ lib/xe/xe_query.h | 1 + 4 files changed, 79 insertions(+) diff --git a/lib/intel_chipset.c b/lib/intel_chipset.c index 760faede2..3e2d7a19c 100644 --- a/lib/intel_chipset.c +++ b/lib/intel_chipset.c @@ -127,6 +127,57 @@ static uint32_t __i915_get_drm_devid(int fd) return devid; } +/** + * intel_graphics_ver: + * @fd: Open DRM device file descriptor + * + * Returns the graphics/IP version encoded with IP_VER(major, minor). + * + * The function prefers the modern XE path: if a main GT is available via + * xe_get_main_gt() and it reports a non-zero ip_ver_major, the version is + * constructed from main_gt->ip_ver_major and main_gt->ip_ver_minor. + * + * If XE information is unavailable, it falls back to the legacy path by reading + * the DRM device ID with intel_get_drm_devid() and translating it via + * intel_graphics_ver_legacy(). + * + * Return: Encoded IP version (IP_VER(major, minor)). + */ +unsigned intel_graphics_ver(int fd) +{ + uint16_t devid; + const struct drm_xe_gt* main_gt = xe_get_main_gt(fd); + + if (main_gt && main_gt->ip_ver_major) + return IP_VER(main_gt->ip_ver_major, main_gt->ip_ver_minor); + + devid = intel_get_drm_devid(fd); + return intel_graphics_ver_legacy(devid); +} + +/** + * intel_gen: + * @fd: DRM device file descriptor. + * + * Attempts to determine the graphics "generation" by querying the Xe driver for + * the main GT IP version major. If unavailable (e.g., not an Xe device or no IP + * version reported), falls back to retrieving the DRM device ID and mapping it + * to a legacy graphics version. + * + * Return: The graphics generation/major IP version for the device. + */ +unsigned intel_gen(int fd) +{ + uint16_t devid; + const struct drm_xe_gt* main_gt = xe_get_main_gt(fd); + + if (main_gt && main_gt->ip_ver_major) + return main_gt->ip_ver_major; + + devid = intel_get_drm_devid(fd); + return intel_gen_legacy(devid); +} + /** * intel_get_drm_devid: * @fd: open i915/xe drm file descriptor diff --git a/lib/intel_chipset.h b/lib/intel_chipset.h index fb360268d..59edf70ea 100644 --- a/lib/intel_chipset.h +++ b/lib/intel_chipset.h @@ -106,6 +106,8 @@ const struct intel_cmds_info *intel_get_cmds_info(uint16_t devid) __attribute__( unsigned intel_gen_legacy(uint16_t devid) __attribute__((pure)); unsigned intel_graphics_ver_legacy(uint16_t devid) __attribute__((pure)); unsigned intel_display_ver(uint16_t devid) __attribute__((pure)); +unsigned intel_gen(int fd); +unsigned intel_graphics_ver(int fd); extern enum pch_type intel_pch; diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c index 981d76948..742e48c11 100644 --- a/lib/xe/xe_query.c +++ b/lib/xe/xe_query.c @@ -931,6 +931,31 @@ uint16_t xe_tile_get_main_gt_id(int fd, uint8_t tile) return gt_id; } +/** + * xe_get_main_gt: + * @fd: xe device fd + * + * Returns pointer to main GT data structure for given xe device @fd. + */ +const struct drm_xe_gt* xe_get_main_gt(int fd) +{ + struct xe_device *xe_dev; + + xe_dev = find_in_cache(fd); + + if (xe_dev) { + for (int i = 0; i < xe_dev->gt_list->num_gt; i++) { + const struct drm_xe_gt *gt_data = &xe_dev->gt_list->gt_list[i]; + + if (gt_data->type == DRM_XE_QUERY_GT_TYPE_MAIN) { + return gt_data; + } + } + } + + return NULL; +} + /** * xe_hwconfig_lookup_value: * @fd: xe device fd diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h index d7a9f95f9..bbf6bf9b6 100644 --- a/lib/xe/xe_query.h +++ b/lib/xe/xe_query.h @@ -160,6 +160,7 @@ uint32_t xe_hwconfig_lookup_value_u32(int fd, enum intel_hwconfig attribute); void *xe_query_device_may_fail(int fd, uint32_t type, uint32_t *size); int xe_query_pxp_status(int fd); int xe_wait_for_pxp_init(int fd); +const struct drm_xe_gt* xe_get_main_gt(int fd); /** * xe_query_device: -- 2.43.0 ^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH v2 2/3] lib/intel: add fd-based intel_gen/intel_graphics_ver via Xe query 2026-01-22 7:15 ` [PATCH v2 2/3] lib/intel: add fd-based intel_gen/intel_graphics_ver via Xe query Xin Wang @ 2026-02-05 9:09 ` Jani Nikula 0 siblings, 0 replies; 12+ messages in thread From: Jani Nikula @ 2026-02-05 9:09 UTC (permalink / raw) To: Xin Wang, igt-dev Cc: Xin Wang, Kamil Konieczny, Matt Roper, Zbigniew Kempczyński, Ravi Kumar V On Thu, 22 Jan 2026, Xin Wang <x.wang@intel.com> wrote: > +/** > + * intel_gen: > + * @fd: DRM device file descriptor. > + * > + * Attempts to determine the graphics "generation" by querying the Xe driver for > + * the main GT IP version major. If unavailable (e.g., not an Xe device or no IP > + * version reported), falls back to retrieving the DRM device ID and mapping it > + * to a legacy graphics version. > + * > + * Return: The graphics generation/major IP version for the device. > + */ > +unsigned intel_gen(int fd) > +{ > + uint16_t devid; > + const struct drm_xe_gt* main_gt = xe_get_main_gt(fd); > + > + if (main_gt && main_gt->ip_ver_major) > + return main_gt->ip_ver_major; > + > + devid = intel_get_drm_devid(fd); > + return intel_gen_legacy(devid); > +} Why stick with the "gen" naming when the kernel side has completely moved to "ver" naming? Also, you won't get build failures if any remaining (or new one, racing to get merged) call sites pass in devid to this. BR, Jani. -- Jani Nikula, Intel ^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v2 3/3] intel/xe: use fd-based graphics/IP version helpers 2026-01-22 7:15 [PATCH v2 0/3] lib/intel: switch graphics/IP version queries to fd-based APIs Xin Wang 2026-01-22 7:15 ` [PATCH v2 1/3] lib/intel: suffix PCI ID based gen/graphics_ver with _legacy Xin Wang 2026-01-22 7:15 ` [PATCH v2 2/3] lib/intel: add fd-based intel_gen/intel_graphics_ver via Xe query Xin Wang @ 2026-01-22 7:15 ` Xin Wang 2026-02-04 18:56 ` Matt Roper 2026-01-22 8:01 ` ✓ i915.CI.BAT: success for lib/intel: switch graphics/IP version queries to fd-based APIs (rev2) Patchwork ` (2 subsequent siblings) 5 siblings, 1 reply; 12+ messages in thread From: Xin Wang @ 2026-01-22 7:15 UTC (permalink / raw) To: igt-dev Cc: Xin Wang, Kamil Konieczny, Matt Roper, Zbigniew Kempczyński, Ravi Kumar V Switch Xe‑related libraries and tests to use fd‑based intel_gen() and intel_graphics_ver() instead of PCI ID lookups, keeping behavior aligned with Xe IP disaggregation. Cc: Kamil Konieczny <kamil.konieczny@linux.intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com> Cc: Ravi Kumar V <ravi.kumar.vodapalli@intel.com> Signed-off-by: Xin Wang <x.wang@intel.com> --- lib/gpgpu_shader.c | 5 ++- lib/gpu_cmds.c | 21 ++++++----- lib/intel_batchbuffer.c | 14 +++----- lib/intel_blt.c | 21 +++++------ lib/intel_blt.h | 2 +- lib/intel_bufops.c | 10 +++--- lib/intel_common.c | 2 +- lib/intel_compute.c | 6 ++-- lib/intel_mocs.c | 48 ++++++++++++++------------ lib/intel_pat.c | 19 +++++----- lib/rendercopy_gen9.c | 22 ++++++------ lib/xe/xe_legacy.c | 2 +- lib/xe/xe_spin.c | 4 +-- lib/xe/xe_sriov_provisioning.c | 4 +-- tests/intel/api_intel_allocator.c | 2 +- tests/intel/kms_ccs.c | 13 +++---- tests/intel/kms_fbcon_fbt.c | 2 +- tests/intel/kms_frontbuffer_tracking.c | 6 ++-- tests/intel/kms_pipe_stress.c | 4 +-- tests/intel/xe_ccs.c | 16 ++++----- tests/intel/xe_compute.c | 8 ++--- tests/intel/xe_copy_basic.c | 6 ++-- tests/intel/xe_debugfs.c | 3 +- tests/intel/xe_eudebug_online.c | 8 ++--- tests/intel/xe_exec_multi_queue.c | 2 +- tests/intel/xe_exec_store.c | 18 ++++------ tests/intel/xe_fault_injection.c | 8 ++--- tests/intel/xe_intel_bb.c | 7 ++-- tests/intel/xe_multigpu_svm.c | 3 +- tests/intel/xe_pat.c | 16 ++++----- tests/intel/xe_query.c | 4 +-- tests/intel/xe_render_copy.c | 2 +- 32 files changed, 135 insertions(+), 173 deletions(-) diff --git a/lib/gpgpu_shader.c b/lib/gpgpu_shader.c index 767bddb7b..09a7f5c5e 100644 --- a/lib/gpgpu_shader.c +++ b/lib/gpgpu_shader.c @@ -274,11 +274,10 @@ void gpgpu_shader_exec(struct intel_bb *ibb, struct gpgpu_shader *gpgpu_shader_create(int fd) { struct gpgpu_shader *shdr = calloc(1, sizeof(struct gpgpu_shader)); - const struct intel_device_info *info; + unsigned ip_ver = intel_graphics_ver(fd); igt_assert(shdr); - info = intel_get_device_info(intel_get_drm_devid(fd)); - shdr->gen_ver = 100 * info->graphics_ver + info->graphics_rel; + shdr->gen_ver = 100 * (ip_ver >> 8) + (ip_ver & 0xff); shdr->max_size = 16 * 4; shdr->code = malloc(4 * shdr->max_size); shdr->labels = igt_map_create(igt_map_hash_32, igt_map_equal_32); diff --git a/lib/gpu_cmds.c b/lib/gpu_cmds.c index ab46fe0de..6842af1ad 100644 --- a/lib/gpu_cmds.c +++ b/lib/gpu_cmds.c @@ -313,14 +313,13 @@ fill_binding_table(struct intel_bb *ibb, struct intel_buf *buf) { uint32_t binding_table_offset; uint32_t *binding_table; - uint32_t devid = intel_get_drm_devid(ibb->fd); intel_bb_ptr_align(ibb, 64); binding_table_offset = intel_bb_offset(ibb); binding_table = intel_bb_ptr(ibb); intel_bb_ptr_add(ibb, 64); - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { + if (intel_graphics_ver(ibb->fd) >= IP_VER(20, 0)) { /* * Up until now, SURFACEFORMAT_R8_UNROM was used regardless of the 'bpp' value. * For bpp 32 this results in a surface that is 4x narrower than expected. However @@ -342,13 +341,13 @@ fill_binding_table(struct intel_bb *ibb, struct intel_buf *buf) igt_assert_f(false, "Surface state for bpp = %u not implemented", buf->bpp); - } else if (intel_graphics_ver_legacy(devid) >= IP_VER(12, 50)) { + } else if (intel_graphics_ver(ibb->fd) >= IP_VER(12, 50)) { binding_table[0] = xehp_fill_surface_state(ibb, buf, SURFACEFORMAT_R8_UNORM, 1); - } else if (intel_graphics_ver_legacy(devid) >= IP_VER(9, 0)) { + } else if (intel_graphics_ver(ibb->fd) >= IP_VER(9, 0)) { binding_table[0] = gen9_fill_surface_state(ibb, buf, SURFACEFORMAT_R8_UNORM, 1); - } else if (intel_graphics_ver_legacy(devid) >= IP_VER(8, 0)) { + } else if (intel_graphics_ver(ibb->fd) >= IP_VER(8, 0)) { binding_table[0] = gen8_fill_surface_state(ibb, buf, SURFACEFORMAT_R8_UNORM, 1); } else { @@ -867,7 +866,7 @@ gen_emit_media_object(struct intel_bb *ibb, /* inline data (xoffset, yoffset) */ intel_bb_out(ibb, xoffset); intel_bb_out(ibb, yoffset); - if (intel_gen_legacy(ibb->devid) >= 8 && !IS_CHERRYVIEW(ibb->devid)) + if (intel_gen(ibb->fd) >= 8 && !IS_CHERRYVIEW(ibb->devid)) gen8_emit_media_state_flush(ibb); } @@ -1011,7 +1010,7 @@ void xehp_emit_state_compute_mode(struct intel_bb *ibb, bool vrt) { - uint32_t dword_length = intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0); + uint32_t dword_length = intel_graphics_ver(ibb->fd) >= IP_VER(20, 0); intel_bb_out(ibb, XEHP_STATE_COMPUTE_MODE | dword_length); intel_bb_out(ibb, vrt ? (0x10001) << 10 : 0); /* Enable variable number of threads */ @@ -1042,7 +1041,7 @@ xehp_emit_state_base_address(struct intel_bb *ibb) intel_bb_out(ibb, 0); /* stateless data port */ - tmp = intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0) ? 0 : BASE_ADDRESS_MODIFY; + tmp = intel_graphics_ver(ibb->fd) >= IP_VER(20, 0) ? 0 : BASE_ADDRESS_MODIFY; intel_bb_out(ibb, 0 | tmp); //dw3 /* surface */ @@ -1068,7 +1067,7 @@ xehp_emit_state_base_address(struct intel_bb *ibb) /* dynamic state buffer size */ intel_bb_out(ibb, ALIGN(ibb->size, 1 << 12) | 1); //dw13 /* indirect object buffer size */ - if (intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0)) //dw14 + if (intel_graphics_ver(ibb->fd) >= IP_VER(20, 0)) //dw14 intel_bb_out(ibb, 0); else intel_bb_out(ibb, 0xfffff000 | 1); @@ -1115,7 +1114,7 @@ xehp_emit_compute_walk(struct intel_bb *ibb, else mask = (1 << mask) - 1; - dword_length = intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0) ? 0x26 : 0x25; + dword_length = intel_graphics_ver(ibb->fd) >= IP_VER(20, 0) ? 0x26 : 0x25; intel_bb_out(ibb, XEHP_COMPUTE_WALKER | dword_length); intel_bb_out(ibb, 0); /* debug object */ //dw1 @@ -1155,7 +1154,7 @@ xehp_emit_compute_walk(struct intel_bb *ibb, intel_bb_out(ibb, 0); //dw16 intel_bb_out(ibb, 0); //dw17 - if (intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0)) //Xe2:dw18 + if (intel_graphics_ver(ibb->fd) >= IP_VER(20, 0)) //Xe2:dw18 intel_bb_out(ibb, 0); /* Interface descriptor data */ for (int i = 0; i < 8; i++) { //dw18-25 (Xe2:dw19-26) diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c index f418e7981..4f52e7b6a 100644 --- a/lib/intel_batchbuffer.c +++ b/lib/intel_batchbuffer.c @@ -329,11 +329,7 @@ void igt_blitter_copy(int fd, uint32_t dst_x, uint32_t dst_y, uint64_t dst_size) { - uint32_t devid; - - devid = intel_get_drm_devid(fd); - - if (intel_graphics_ver_legacy(devid) >= IP_VER(12, 60)) + if (intel_graphics_ver(fd) >= IP_VER(12, 60)) igt_blitter_fast_copy__raw(fd, ahnd, ctx, NULL, src_handle, src_delta, src_stride, src_tiling, @@ -410,7 +406,7 @@ void igt_blitter_src_copy(int fd, uint32_t batch_handle; uint32_t src_pitch, dst_pitch; uint32_t dst_reloc_offset, src_reloc_offset; - uint32_t gen = intel_gen_legacy(intel_get_drm_devid(fd)); + uint32_t gen = intel_gen(fd); uint64_t batch_offset, src_offset, dst_offset; const bool has_64b_reloc = gen >= 8; int i = 0; @@ -669,7 +665,7 @@ igt_render_copyfunc_t igt_get_render_copyfunc(int fd) copy = mtl_render_copyfunc; else if (IS_DG2(devid)) copy = gen12p71_render_copyfunc; - else if (intel_gen_legacy(devid) >= 20) + else if (intel_gen(fd) >= 20) copy = xe2_render_copyfunc; else if (IS_GEN12(devid)) copy = gen12_render_copyfunc; @@ -911,7 +907,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg, igt_assert(ibb); ibb->devid = intel_get_drm_devid(fd); - ibb->gen = intel_gen_legacy(ibb->devid); + ibb->gen = intel_gen(fd); ibb->ctx = ctx; ibb->fd = fd; @@ -1089,7 +1085,7 @@ struct intel_bb *intel_bb_create_with_allocator(int fd, uint32_t ctx, uint32_t v static bool aux_needs_softpin(int fd) { - return intel_gen_legacy(intel_get_drm_devid(fd)) >= 12; + return intel_gen(fd) >= 12; } static bool has_ctx_cfg(struct intel_bb *ibb) diff --git a/lib/intel_blt.c b/lib/intel_blt.c index 673f204b0..7ae04fccd 100644 --- a/lib/intel_blt.c +++ b/lib/intel_blt.c @@ -997,7 +997,7 @@ uint64_t emit_blt_block_copy(int fd, uint64_t bb_pos, bool emit_bbe) { - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); + unsigned int ip_ver = intel_graphics_ver(fd); struct gen12_block_copy_data data = {}; struct gen12_block_copy_data_ext dext = {}; uint64_t dst_offset, src_offset, bb_offset; @@ -1285,7 +1285,7 @@ uint64_t emit_blt_ctrl_surf_copy(int fd, uint64_t bb_pos, bool emit_bbe) { - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); + unsigned int ip_ver = intel_graphics_ver(fd); union ctrl_surf_copy_data data = { }; size_t data_sz; uint64_t dst_offset, src_offset, bb_offset, alignment; @@ -1705,7 +1705,7 @@ uint64_t emit_blt_fast_copy(int fd, uint64_t bb_pos, bool emit_bbe) { - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); + unsigned int ip_ver = intel_graphics_ver(fd); struct gen12_fast_copy_data data = {}; uint64_t dst_offset, src_offset, bb_offset; uint32_t bbe = MI_BATCH_BUFFER_END; @@ -1972,11 +1972,10 @@ void blt_mem_copy_init(int fd, struct blt_mem_copy_data *mem, static void dump_bb_mem_copy_cmd(int fd, struct xe_mem_copy_data *data) { uint32_t *cmd = (uint32_t *) data; - uint32_t devid = intel_get_drm_devid(fd); igt_info("BB details:\n"); - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { igt_info(" dw00: [%08x] <client: 0x%x, opcode: 0x%x, length: %d> " "[copy type: %d, mode: %d]\n", cmd[0], data->dw00.xe2.client, data->dw00.xe2.opcode, @@ -2006,7 +2005,7 @@ static void dump_bb_mem_copy_cmd(int fd, struct xe_mem_copy_data *data) cmd[7], data->dw07.dst_address_lo); igt_info(" dw08: [%08x] dst offset hi (0x%x)\n", cmd[8], data->dw08.dst_address_hi); - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { igt_info(" dw09: [%08x] mocs <dst: 0x%x, src: 0x%x>\n", cmd[9], data->dw09.xe2.dst_mocs, data->dw09.xe2.src_mocs); @@ -2025,7 +2024,6 @@ static uint64_t emit_blt_mem_copy(int fd, uint64_t ahnd, uint64_t dst_offset, src_offset, shift; uint32_t width, height, width_max, height_max, remain; uint32_t bbe = MI_BATCH_BUFFER_END; - uint32_t devid = intel_get_drm_devid(fd); uint8_t *bb; if (mem->mode == MODE_BYTE) { @@ -2049,7 +2047,7 @@ static uint64_t emit_blt_mem_copy(int fd, uint64_t ahnd, width = mem->src.width; height = mem->dst.height; - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { data.dw00.xe2.client = 0x2; data.dw00.xe2.opcode = 0x5a; data.dw00.xe2.length = 8; @@ -2231,7 +2229,6 @@ static void emit_blt_mem_set(int fd, uint64_t ahnd, int b; uint32_t *batch; uint32_t value; - uint32_t devid = intel_get_drm_devid(fd); dst_offset = get_offset_pat_index(ahnd, mem->dst.handle, mem->dst.size, 0, mem->dst.pat_index); @@ -2246,7 +2243,7 @@ static void emit_blt_mem_set(int fd, uint64_t ahnd, batch[b++] = mem->dst.pitch - 1; batch[b++] = dst_offset; batch[b++] = dst_offset << 32; - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) batch[b++] = value | (mem->dst.mocs_index << 3); else batch[b++] = value | mem->dst.mocs_index; @@ -2364,7 +2361,7 @@ blt_create_object(const struct blt_copy_data *blt, uint32_t region, if (create_mapping && region != system_memory(blt->fd)) flags |= DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM; - if (intel_gen_legacy(intel_get_drm_devid(blt->fd)) >= 20 && compression) { + if (intel_gen(blt->fd) >= 20 && compression) { pat_index = intel_get_pat_idx_uc_comp(blt->fd); cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; } @@ -2590,7 +2587,7 @@ void blt_surface_get_flatccs_data(int fd, cpu_caching = __xe_default_cpu_caching(fd, sysmem, 0); ccs_bo_size = ALIGN(ccssize, xe_get_default_alignment(fd)); - if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 20 && obj->compression) { + if (intel_gen(fd) >= 20 && obj->compression) { comp_pat_index = intel_get_pat_idx_uc_comp(fd); cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; } diff --git a/lib/intel_blt.h b/lib/intel_blt.h index a98a34e95..feba94ebb 100644 --- a/lib/intel_blt.h +++ b/lib/intel_blt.h @@ -52,7 +52,7 @@ #include "igt.h" #include "intel_cmds_info.h" -#define CCS_RATIO(fd) (intel_gen_legacy(intel_get_drm_devid(fd)) >= 20 ? 512 : 256) +#define CCS_RATIO(fd) (intel_gen(fd) >= 20 ? 512 : 256) #define GEN12_MEM_COPY_MOCS_SHIFT 25 #define XE2_MEM_COPY_SRC_MOCS_SHIFT 28 #define XE2_MEM_COPY_DST_MOCS_SHIFT 3 diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c index ea3742f1e..a2adbf9ef 100644 --- a/lib/intel_bufops.c +++ b/lib/intel_bufops.c @@ -1063,7 +1063,7 @@ static void __intel_buf_init(struct buf_ops *bops, } else { uint16_t cpu_caching = __xe_default_cpu_caching(bops->fd, region, 0); - if (intel_gen_legacy(bops->devid) >= 20 && compression) + if (intel_gen(bops->fd) >= 20 && compression) cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; bo_size = ALIGN(bo_size, xe_get_default_alignment(bops->fd)); @@ -1106,7 +1106,7 @@ void intel_buf_init(struct buf_ops *bops, uint64_t region; uint8_t pat_index = DEFAULT_PAT_INDEX; - if (compression && intel_gen_legacy(bops->devid) >= 20) + if (compression && intel_gen(bops->fd) >= 20) pat_index = intel_get_pat_idx_uc_comp(bops->fd); region = bops->driver == INTEL_DRIVER_I915 ? I915_SYSTEM_MEMORY : @@ -1132,7 +1132,7 @@ void intel_buf_init_in_region(struct buf_ops *bops, { uint8_t pat_index = DEFAULT_PAT_INDEX; - if (compression && intel_gen_legacy(bops->devid) >= 20) + if (compression && intel_gen(bops->fd) >= 20) pat_index = intel_get_pat_idx_uc_comp(bops->fd); __intel_buf_init(bops, 0, buf, width, height, bpp, alignment, @@ -1203,7 +1203,7 @@ void intel_buf_init_using_handle_and_size(struct buf_ops *bops, igt_assert(handle); igt_assert(size); - if (compression && intel_gen_legacy(bops->devid) >= 20) + if (compression && intel_gen(bops->fd) >= 20) pat_index = intel_get_pat_idx_uc_comp(bops->fd); __intel_buf_init(bops, handle, buf, width, height, bpp, alignment, @@ -1758,7 +1758,7 @@ static struct buf_ops *__buf_ops_create(int fd, bool check_idempotency) igt_assert(bops); devid = intel_get_drm_devid(fd); - generation = intel_gen_legacy(devid); + generation = intel_gen(fd); /* Predefined settings: see intel_device_info? */ for (int i = 0; i < ARRAY_SIZE(buf_ops_arr); i++) { diff --git a/lib/intel_common.c b/lib/intel_common.c index cd1019bfe..407d53f77 100644 --- a/lib/intel_common.c +++ b/lib/intel_common.c @@ -91,7 +91,7 @@ bool is_intel_region_compressible(int fd, uint64_t region) return true; /* Integrated Xe2+ supports compression on system memory */ - if (intel_gen_legacy(devid) >= 20 && !is_dgfx && is_intel_system_region(fd, region)) + if (intel_gen(fd) >= 20 && !is_dgfx && is_intel_system_region(fd, region)) return true; /* Discrete supports compression on vram */ diff --git a/lib/intel_compute.c b/lib/intel_compute.c index 1734c1649..66156d194 100644 --- a/lib/intel_compute.c +++ b/lib/intel_compute.c @@ -2284,7 +2284,7 @@ static bool __run_intel_compute_kernel(int fd, struct user_execenv *user, enum execenv_alloc_prefs alloc_prefs) { - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); + unsigned int ip_ver = intel_graphics_ver(fd); int batch; const struct intel_compute_kernels *kernel_entries = intel_compute_square_kernels, *kernels; enum intel_driver driver = get_intel_driver(fd); @@ -2749,7 +2749,7 @@ static bool __run_intel_compute_kernel_preempt(int fd, bool threadgroup_preemption, enum execenv_alloc_prefs alloc_prefs) { - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); + unsigned int ip_ver = ip_ver = intel_graphics_ver(fd); int batch; const struct intel_compute_kernels *kernel_entries = intel_compute_square_kernels, *kernels; enum intel_driver driver = get_intel_driver(fd); @@ -2803,7 +2803,7 @@ static bool __run_intel_compute_kernel_preempt(int fd, */ bool xe_kernel_preempt_check(int fd, enum xe_compute_preempt_type required_preempt) { - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); + unsigned int ip_ver = ip_ver = intel_graphics_ver(fd); int batch = find_preempt_batch(ip_ver); if (batch < 0) { diff --git a/lib/intel_mocs.c b/lib/intel_mocs.c index f21c2bf09..b9ea43c7c 100644 --- a/lib/intel_mocs.c +++ b/lib/intel_mocs.c @@ -27,8 +27,8 @@ struct drm_intel_mocs_index { static void get_mocs_index(int fd, struct drm_intel_mocs_index *mocs) { - uint16_t devid = intel_get_drm_devid(fd); - unsigned int ip_ver = intel_graphics_ver_legacy(devid); + uint16_t devid; + unsigned int ip_ver = intel_graphics_ver(fd); /* * Gen >= 12 onwards don't have a setting for PTE, @@ -42,26 +42,29 @@ static void get_mocs_index(int fd, struct drm_intel_mocs_index *mocs) mocs->wb_index = 4; mocs->displayable_index = 1; mocs->defer_to_pat_index = 0; - } else if (IS_METEORLAKE(devid)) { - mocs->uc_index = 5; - mocs->wb_index = 1; - mocs->displayable_index = 14; - } else if (IS_DG2(devid)) { - mocs->uc_index = 1; - mocs->wb_index = 3; - mocs->displayable_index = 3; - } else if (IS_DG1(devid)) { - mocs->uc_index = 1; - mocs->wb_index = 5; - mocs->displayable_index = 5; - } else if (ip_ver >= IP_VER(12, 0)) { - mocs->uc_index = 3; - mocs->wb_index = 2; - mocs->displayable_index = 61; } else { - mocs->uc_index = I915_MOCS_PTE; - mocs->wb_index = I915_MOCS_CACHED; - mocs->displayable_index = I915_MOCS_PTE; + devid = intel_get_drm_devid(fd); + if (IS_METEORLAKE(devid)) { + mocs->uc_index = 5; + mocs->wb_index = 1; + mocs->displayable_index = 14; + } else if (IS_DG2(devid)) { + mocs->uc_index = 1; + mocs->wb_index = 3; + mocs->displayable_index = 3; + } else if (IS_DG1(devid)) { + mocs->uc_index = 1; + mocs->wb_index = 5; + mocs->displayable_index = 5; + } else if (ip_ver >= IP_VER(12, 0)) { + mocs->uc_index = 3; + mocs->wb_index = 2; + mocs->displayable_index = 61; + } else { + mocs->uc_index = I915_MOCS_PTE; + mocs->wb_index = I915_MOCS_CACHED; + mocs->displayable_index = I915_MOCS_PTE; + } } } @@ -124,9 +127,8 @@ uint8_t intel_get_displayable_mocs_index(int fd) uint8_t intel_get_defer_to_pat_mocs_index(int fd) { struct drm_intel_mocs_index mocs; - uint16_t dev_id = intel_get_drm_devid(fd); - igt_assert(intel_gen_legacy(dev_id) >= 20); + igt_assert(intel_gen(fd) >= 20); get_mocs_index(fd, &mocs); diff --git a/lib/intel_pat.c b/lib/intel_pat.c index 9a61c2a45..9bb4800b6 100644 --- a/lib/intel_pat.c +++ b/lib/intel_pat.c @@ -96,14 +96,12 @@ int32_t xe_get_pat_sw_config(int drm_fd, struct intel_pat_cache *xe_pat_cache) static void intel_get_pat_idx(int fd, struct intel_pat_cache *pat) { - uint16_t dev_id = intel_get_drm_devid(fd); - - if (intel_graphics_ver_legacy(dev_id) == IP_VER(35, 11)) { + if (intel_graphics_ver(fd) == IP_VER(35, 11)) { pat->uc = 3; pat->wb = 2; pat->max_index = 31; - } else if (intel_get_device_info(dev_id)->graphics_ver == 30 || - intel_get_device_info(dev_id)->graphics_ver == 20) { + } else if (intel_gen(fd) == 30 || + intel_gen(fd) == 20) { pat->uc = 3; pat->wt = 15; /* Compressed + WB-transient */ pat->wb = 2; @@ -111,19 +109,19 @@ static void intel_get_pat_idx(int fd, struct intel_pat_cache *pat) pat->max_index = 31; /* Wa_16023588340: CLOS3 entries at end of table are unusable */ - if (intel_graphics_ver_legacy(dev_id) == IP_VER(20, 1)) + if (intel_graphics_ver(fd) == IP_VER(20, 1)) pat->max_index -= 4; - } else if (IS_METEORLAKE(dev_id)) { + } else if (IS_METEORLAKE(intel_get_drm_devid(fd))) { pat->uc = 2; pat->wt = 1; pat->wb = 3; pat->max_index = 3; - } else if (IS_PONTEVECCHIO(dev_id)) { + } else if (IS_PONTEVECCHIO(intel_get_drm_devid(fd))) { pat->uc = 0; pat->wt = 2; pat->wb = 3; pat->max_index = 7; - } else if (intel_graphics_ver_legacy(dev_id) <= IP_VER(12, 60)) { + } else if (intel_graphics_ver(fd) <= IP_VER(12, 60)) { pat->uc = 3; pat->wt = 2; pat->wb = 0; @@ -152,9 +150,8 @@ uint8_t intel_get_pat_idx_uc(int fd) uint8_t intel_get_pat_idx_uc_comp(int fd) { struct intel_pat_cache pat = {}; - uint16_t dev_id = intel_get_drm_devid(fd); - igt_assert(intel_gen_legacy(dev_id) >= 20); + igt_assert(intel_gen(fd) >= 20); intel_get_pat_idx(fd, &pat); return pat.uc_comp; diff --git a/lib/rendercopy_gen9.c b/lib/rendercopy_gen9.c index 66415212c..0be557a47 100644 --- a/lib/rendercopy_gen9.c +++ b/lib/rendercopy_gen9.c @@ -256,12 +256,12 @@ gen9_bind_buf(struct intel_bb *ibb, const struct intel_buf *buf, int is_dst, if (buf->compression == I915_COMPRESSION_MEDIA) ss->ss7.tgl.media_compression = 1; else if (buf->compression == I915_COMPRESSION_RENDER) { - if (intel_gen_legacy(ibb->devid) >= 20) + if (intel_gen(ibb->fd) >= 20) ss->ss6.aux_mode = 0x0; /* AUX_NONE, unified compression */ else ss->ss6.aux_mode = 0x5; /* AUX_CCS_E */ - if (intel_gen_legacy(ibb->devid) < 12 && buf->ccs[0].stride) { + if (intel_gen(ibb->fd) < 12 && buf->ccs[0].stride) { ss->ss6.aux_pitch = (buf->ccs[0].stride / 128) - 1; address = intel_bb_offset_reloc_with_delta(ibb, buf->handle, @@ -303,7 +303,7 @@ gen9_bind_buf(struct intel_bb *ibb, const struct intel_buf *buf, int is_dst, ss->ss7.dg2.disable_support_for_multi_gpu_partial_writes = 1; ss->ss7.dg2.disable_support_for_multi_gpu_atomics = 1; - if (intel_gen_legacy(ibb->devid) >= 20) + if (intel_gen(ibb->fd) >= 20) ss->ss12.lnl.compression_format = lnl_compression_format(buf); else ss->ss12.dg2.compression_format = dg2_compression_format(buf); @@ -681,7 +681,7 @@ gen9_emit_state_base_address(struct intel_bb *ibb) { /* WaBindlessSurfaceStateModifyEnable:skl,bxt */ /* The length has to be one less if we dont modify bindless state */ - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) + if (intel_gen(ibb->fd) >= 20) intel_bb_out(ibb, GEN4_STATE_BASE_ADDRESS | 20); else intel_bb_out(ibb, GEN4_STATE_BASE_ADDRESS | (19 - 1 - 2)); @@ -726,7 +726,7 @@ gen9_emit_state_base_address(struct intel_bb *ibb) { intel_bb_out(ibb, 0); intel_bb_out(ibb, 0); - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) { + if (intel_gen(ibb->fd) >= 20) { /* Bindless sampler */ intel_bb_out(ibb, 0); intel_bb_out(ibb, 0); @@ -899,7 +899,7 @@ gen9_emit_ds(struct intel_bb *ibb) { static void gen8_emit_wm_hz_op(struct intel_bb *ibb) { - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) { + if (intel_gen(ibb->fd) >= 20) { intel_bb_out(ibb, GEN8_3DSTATE_WM_HZ_OP | (6-2)); intel_bb_out(ibb, 0); } else { @@ -989,7 +989,7 @@ gen8_emit_ps(struct intel_bb *ibb, uint32_t kernel, bool fast_clear) { intel_bb_out(ibb, 0); intel_bb_out(ibb, GEN7_3DSTATE_PS | (12-2)); - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) + if (intel_gen(ibb->fd) >= 20) intel_bb_out(ibb, kernel | 1); else intel_bb_out(ibb, kernel); @@ -1006,7 +1006,7 @@ gen8_emit_ps(struct intel_bb *ibb, uint32_t kernel, bool fast_clear) { intel_bb_out(ibb, (max_threads - 1) << GEN8_3DSTATE_PS_MAX_THREADS_SHIFT | GEN6_3DSTATE_WM_16_DISPATCH_ENABLE | (fast_clear ? GEN8_3DSTATE_FAST_CLEAR_ENABLE : 0)); - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) + if (intel_gen(ibb->fd) >= 20) intel_bb_out(ibb, 6 << GEN6_3DSTATE_WM_DISPATCH_START_GRF_0_SHIFT | GENXE_KERNEL0_POLY_PACK16_FIXED << GENXE_KERNEL0_PACKING_POLICY); else @@ -1061,7 +1061,7 @@ gen9_emit_depth(struct intel_bb *ibb) static void gen7_emit_clear(struct intel_bb *ibb) { - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) + if (intel_gen(ibb->fd) >= 20) return; intel_bb_out(ibb, GEN7_3DSTATE_CLEAR_PARAMS | (3-2)); @@ -1072,7 +1072,7 @@ gen7_emit_clear(struct intel_bb *ibb) { static void gen6_emit_drawing_rectangle(struct intel_bb *ibb, const struct intel_buf *dst) { - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) + if (intel_gen(ibb->fd) >= 20) intel_bb_out(ibb, GENXE2_3DSTATE_DRAWING_RECTANGLE_FAST | (4 - 2)); else intel_bb_out(ibb, GEN4_3DSTATE_DRAWING_RECTANGLE | (4 - 2)); @@ -1266,7 +1266,7 @@ void _gen9_render_op(struct intel_bb *ibb, gen9_emit_state_base_address(ibb); - if (HAS_4TILE(ibb->devid) || intel_gen_legacy(ibb->devid) > 12) { + if (HAS_4TILE(ibb->devid) || intel_gen(ibb->fd) > 12) { intel_bb_out(ibb, GEN4_3DSTATE_BINDING_TABLE_POOL_ALLOC | 2); intel_bb_emit_reloc(ibb, ibb->handle, I915_GEM_DOMAIN_RENDER | I915_GEM_DOMAIN_INSTRUCTION, 0, diff --git a/lib/xe/xe_legacy.c b/lib/xe/xe_legacy.c index 1529ed1cc..c1ce9fa00 100644 --- a/lib/xe/xe_legacy.c +++ b/lib/xe/xe_legacy.c @@ -75,7 +75,7 @@ xe_legacy_test_mode(int fd, struct drm_xe_engine_class_instance *eci, igt_assert_lte(n_exec_queues, MAX_N_EXECQUEUES); if (flags & COMPRESSION) - igt_require(intel_gen_legacy(intel_get_drm_devid(fd)) >= 20); + igt_require(intel_gen(fd) >= 20); if (flags & CLOSE_FD) fd = drm_open_driver(DRIVER_XE); diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c index 36260e3e5..8ca137381 100644 --- a/lib/xe/xe_spin.c +++ b/lib/xe/xe_spin.c @@ -54,7 +54,6 @@ void xe_spin_init(struct xe_spin *spin, struct xe_spin_opts *opts) uint64_t pad_addr = opts->addr + offsetof(struct xe_spin, pad); uint64_t timestamp_addr = opts->addr + offsetof(struct xe_spin, timestamp); int b = 0; - uint32_t devid; spin->start = 0; spin->end = 0xffffffff; @@ -166,8 +165,7 @@ void xe_spin_init(struct xe_spin *spin, struct xe_spin_opts *opts) spin->batch[b++] = opts->mem_copy->dst_offset; spin->batch[b++] = opts->mem_copy->dst_offset << 32; - devid = intel_get_drm_devid(opts->mem_copy->fd); - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) + if (intel_graphics_ver(opts->mem_copy->fd) >= IP_VER(20, 0)) spin->batch[b++] = opts->mem_copy->src->mocs_index << XE2_MEM_COPY_SRC_MOCS_SHIFT | opts->mem_copy->dst->mocs_index << XE2_MEM_COPY_DST_MOCS_SHIFT; else diff --git a/lib/xe/xe_sriov_provisioning.c b/lib/xe/xe_sriov_provisioning.c index 7b60ccd6c..3d981766c 100644 --- a/lib/xe/xe_sriov_provisioning.c +++ b/lib/xe/xe_sriov_provisioning.c @@ -50,9 +50,7 @@ const char *xe_sriov_shared_res_to_string(enum xe_sriov_shared_res res) static uint64_t get_vfid_mask(int fd) { - uint16_t dev_id = intel_get_drm_devid(fd); - - return (intel_graphics_ver_legacy(dev_id) >= IP_VER(12, 50)) ? + return (intel_graphics_ver(fd) >= IP_VER(12, 50)) ? GGTT_PTE_VFID_MASK : PRE_1250_IP_VER_GGTT_PTE_VFID_MASK; } diff --git a/tests/intel/api_intel_allocator.c b/tests/intel/api_intel_allocator.c index 869e5e9a0..6b1d17da7 100644 --- a/tests/intel/api_intel_allocator.c +++ b/tests/intel/api_intel_allocator.c @@ -625,7 +625,7 @@ static void execbuf_with_allocator(int fd) uint64_t ahnd, sz = 4096, gtt_size; unsigned int flags = EXEC_OBJECT_PINNED; uint32_t *ptr, batch[32], copied; - int gen = intel_gen_legacy(intel_get_drm_devid(fd)); + int gen = intel_gen(fd); int i; const uint32_t magic = 0x900df00d; diff --git a/tests/intel/kms_ccs.c b/tests/intel/kms_ccs.c index 30f2c9465..a0373316a 100644 --- a/tests/intel/kms_ccs.c +++ b/tests/intel/kms_ccs.c @@ -565,7 +565,7 @@ static void access_flat_ccs_surface(struct igt_fb *fb, bool verify_compression) uint16_t cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; uint8_t uc_mocs = intel_get_uc_mocs_index(fb->fd); uint8_t comp_pat_index = intel_get_pat_idx_wt(fb->fd); - uint32_t region = (intel_gen_legacy(intel_get_drm_devid(fb->fd)) >= 20 && + uint32_t region = (intel_gen(fb->fd) >= 20 && xe_has_vram(fb->fd)) ? REGION_LMEM(0) : REGION_SMEM; struct drm_xe_engine_class_instance inst = { @@ -645,7 +645,7 @@ static void fill_fb_random(int drm_fd, igt_fb_t *fb) igt_assert_eq(0, gem_munmap(map, fb->size)); /* randomize also ccs surface on Xe2 */ - if (intel_gen_legacy(intel_get_drm_devid(drm_fd)) >= 20) + if (intel_gen(drm_fd) >= 20) access_flat_ccs_surface(fb, false); } @@ -1125,11 +1125,6 @@ static bool valid_modifier_test(u64 modifier, const enum test_flags flags) static void test_output(data_t *data, const int testnum) { - uint16_t dev_id; - - igt_fixture() - dev_id = intel_get_drm_devid(data->drm_fd); - data->flags = tests[testnum].flags; for (int i = 0; i < ARRAY_SIZE(ccs_modifiers); i++) { @@ -1143,10 +1138,10 @@ static void test_output(data_t *data, const int testnum) igt_subtest_with_dynamic_f("%s-%s", tests[testnum].testname, ccs_modifiers[i].str) { if (ccs_modifiers[i].modifier == I915_FORMAT_MOD_4_TILED_BMG_CCS || ccs_modifiers[i].modifier == I915_FORMAT_MOD_4_TILED_LNL_CCS) { - igt_require_f(intel_gen_legacy(dev_id) >= 20, + igt_require_f(intel_gen(data->drm_fd) >= 20, "Xe2 platform needed.\n"); } else { - igt_require_f(intel_gen_legacy(dev_id) < 20, + igt_require_f(intel_gen(data->drm_fd) < 20, "Older than Xe2 platform needed.\n"); } diff --git a/tests/intel/kms_fbcon_fbt.c b/tests/intel/kms_fbcon_fbt.c index edf5c0d1b..b28961417 100644 --- a/tests/intel/kms_fbcon_fbt.c +++ b/tests/intel/kms_fbcon_fbt.c @@ -179,7 +179,7 @@ static bool fbc_wait_until_update(struct drm_info *drm) * For older GENs FBC is still expected to be disabled as it still * relies on a tiled and fenceable framebuffer to track modifications. */ - if (intel_gen_legacy(intel_get_drm_devid(drm->fd)) >= 9) { + if (intel_gen(drm->fd) >= 9) { if (!fbc_wait_until_enabled(drm->debugfs_fd)) return false; /* diff --git a/tests/intel/kms_frontbuffer_tracking.c b/tests/intel/kms_frontbuffer_tracking.c index c8c2ce240..5b60587db 100644 --- a/tests/intel/kms_frontbuffer_tracking.c +++ b/tests/intel/kms_frontbuffer_tracking.c @@ -3062,13 +3062,13 @@ static bool tiling_is_valid(int feature_flags, enum tiling_type tiling) switch (tiling) { case TILING_LINEAR: - return intel_gen_legacy(drm.devid) >= 9; + return intel_gen(drm.fd) >= 9; case TILING_X: return (intel_get_device_info(drm.devid)->display_ver > 29) ? false : true; case TILING_Y: return true; case TILING_4: - return intel_gen_legacy(drm.devid) >= 12; + return intel_gen(drm.fd) >= 12; default: igt_assert(false); return false; @@ -4475,7 +4475,7 @@ int igt_main_args("", long_options, help_str, opt_handler, NULL) igt_require(igt_draw_supports_method(drm.fd, t.method)); if (t.tiling == TILING_Y) { - igt_require(intel_gen_legacy(drm.devid) >= 9); + igt_require(intel_gen(drm.fd) >= 9); igt_require(!intel_get_device_info(drm.devid)->has_4tile); } diff --git a/tests/intel/kms_pipe_stress.c b/tests/intel/kms_pipe_stress.c index 1ae32d5fd..f8c994d07 100644 --- a/tests/intel/kms_pipe_stress.c +++ b/tests/intel/kms_pipe_stress.c @@ -822,7 +822,7 @@ static void prepare_test(struct data *data) create_framebuffers(data); - if (intel_gen_legacy(intel_get_drm_devid(data->drm_fd)) > 9) + if (intel_gen(data->drm_fd) > 9) start_gpu_threads(data); } @@ -830,7 +830,7 @@ static void finish_test(struct data *data) { int i; - if (intel_gen_legacy(intel_get_drm_devid(data->drm_fd)) > 9) + if (intel_gen(data->drm_fd) > 9) stop_gpu_threads(data); /* diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c index 914144270..0ba8ae48c 100644 --- a/tests/intel/xe_ccs.c +++ b/tests/intel/xe_ccs.c @@ -128,7 +128,7 @@ static void surf_copy(int xe, int result; igt_assert(mid->compression); - if (intel_gen_legacy(devid) >= 20 && mid->compression) { + if (intel_gen(xe) >= 20 && mid->compression) { comp_pat_index = intel_get_pat_idx_uc_comp(xe); cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; } @@ -177,7 +177,7 @@ static void surf_copy(int xe, if (IS_GEN(devid, 12) && is_intel_dgfx(xe)) { igt_assert(!strcmp(orig, newsum)); igt_assert(!strcmp(orig2, newsum2)); - } else if (intel_gen_legacy(devid) >= 20) { + } else if (intel_gen(xe) >= 20) { if (is_intel_dgfx(xe)) { /* buffer object would become * uncompressed in xe2+ dgfx @@ -227,7 +227,7 @@ static void surf_copy(int xe, * uncompressed in xe2+ dgfx, and therefore retrieve the * ccs by copying 0 to ccsmap */ - if (suspend_resume && intel_gen_legacy(devid) >= 20 && is_intel_dgfx(xe)) + if (suspend_resume && intel_gen(xe) >= 20 && is_intel_dgfx(xe)) memset(ccsmap, 0, ccssize); else /* retrieve back ccs */ @@ -353,7 +353,7 @@ static void block_copy(int xe, uint64_t bb_size = xe_bb_size(xe, SZ_4K); uint64_t ahnd = intel_allocator_open(xe, ctx->vm, INTEL_ALLOCATOR_RELOC); uint32_t run_id = mid_tiling; - uint32_t mid_region = (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && + uint32_t mid_region = (intel_gen(xe) >= 20 && !xe_has_vram(xe)) ? region1 : region2; uint32_t bb; enum blt_compression mid_compression = config->compression; @@ -441,7 +441,7 @@ static void block_copy(int xe, if (config->inplace) { uint8_t pat_index = DEFAULT_PAT_INDEX; - if (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && config->compression) + if (intel_gen(xe) >= 20 && config->compression) pat_index = intel_get_pat_idx_uc_comp(xe); blt_set_object(&blt.dst, mid->handle, dst->size, mid->region, 0, @@ -488,7 +488,7 @@ static void block_multicopy(int xe, uint64_t bb_size = xe_bb_size(xe, SZ_4K); uint64_t ahnd = intel_allocator_open(xe, ctx->vm, INTEL_ALLOCATOR_RELOC); uint32_t run_id = mid_tiling; - uint32_t mid_region = (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && + uint32_t mid_region = (intel_gen(xe) >= 20 && !xe_has_vram(xe)) ? region1 : region2; uint32_t bb; enum blt_compression mid_compression = config->compression; @@ -530,7 +530,7 @@ static void block_multicopy(int xe, if (config->inplace) { uint8_t pat_index = DEFAULT_PAT_INDEX; - if (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && config->compression) + if (intel_gen(xe) >= 20 && config->compression) pat_index = intel_get_pat_idx_uc_comp(xe); blt_set_object(&blt3.dst, mid->handle, dst->size, mid->region, @@ -715,7 +715,7 @@ static void block_copy_test(int xe, int tiling, width, height; - if (intel_gen_legacy(dev_id) >= 20 && config->compression) + if (intel_gen(xe) >= 20 && config->compression) igt_require(HAS_FLATCCS(dev_id)); if (config->compression && !blt_block_copy_supports_compression(xe)) diff --git a/tests/intel/xe_compute.c b/tests/intel/xe_compute.c index 7b6c39c77..1cb86920f 100644 --- a/tests/intel/xe_compute.c +++ b/tests/intel/xe_compute.c @@ -232,7 +232,7 @@ test_compute_kernel_loop(uint64_t loop_duration) double elapse_time, lower_bound, upper_bound; fd = drm_open_driver(DRIVER_XE); - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); + ip_ver = intel_graphics_ver(fd); kernels = intel_compute_square_kernels; while (kernels->kernel) { @@ -335,7 +335,7 @@ igt_check_supported_pipeline(void) const struct intel_compute_kernels *kernels; fd = drm_open_driver(DRIVER_XE); - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); + ip_ver = intel_graphics_ver(fd); kernels = intel_compute_square_kernels; drm_close_driver(fd); @@ -432,7 +432,7 @@ test_eu_busy(uint64_t duration_sec) fd = drm_open_driver(DRIVER_XE); - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); + ip_ver = intel_graphics_ver(fd); kernels = intel_compute_square_kernels; while (kernels->kernel) { if (ip_ver == kernels->ip_ver) @@ -518,7 +518,7 @@ int igt_main() igt_fixture() { xe = drm_open_driver(DRIVER_XE); sriov_enabled = is_sriov_mode(xe); - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(xe)); + ip_ver = intel_graphics_ver(xe); igt_store_ccs_mode(ccs_mode, ARRAY_SIZE(ccs_mode)); } diff --git a/tests/intel/xe_copy_basic.c b/tests/intel/xe_copy_basic.c index 55081f938..e37bad746 100644 --- a/tests/intel/xe_copy_basic.c +++ b/tests/intel/xe_copy_basic.c @@ -261,7 +261,6 @@ const char *help_str = int igt_main_args("b", NULL, help_str, opt_handler, NULL) { int fd; - uint16_t dev_id; struct igt_collection *set, *regions; uint32_t region; struct rect linear[] = { { 0, 0xfd, 1, MODE_BYTE }, @@ -275,7 +274,6 @@ int igt_main_args("b", NULL, help_str, opt_handler, NULL) igt_fixture() { fd = drm_open_driver(DRIVER_XE); - dev_id = intel_get_drm_devid(fd); xe_device_get(fd); set = xe_get_memory_region_set(fd, DRM_XE_MEM_REGION_CLASS_SYSMEM, @@ -295,7 +293,7 @@ int igt_main_args("b", NULL, help_str, opt_handler, NULL) for (int i = 0; i < ARRAY_SIZE(page); i++) { igt_subtest_f("mem-page-copy-%u", page[i].width) { igt_require(blt_has_mem_copy(fd)); - igt_require(intel_get_device_info(dev_id)->graphics_ver >= 20); + igt_require(intel_gen(fd) >= 20); for_each_variation_r(regions, 1, set) { region = igt_collection_get_value(regions, 0); copy_test(fd, &page[i], MEM_COPY, region); @@ -320,7 +318,7 @@ int igt_main_args("b", NULL, help_str, opt_handler, NULL) * till 0x3FFFF. */ if (linear[i].width > 0x3ffff && - (intel_get_device_info(dev_id)->graphics_ver < 20)) + (intel_gen(fd) < 20)) igt_skip("Skipping: width exceeds 18-bit limit on gfx_ver < 20\n"); igt_require(blt_has_mem_set(fd)); for_each_variation_r(regions, 1, set) { diff --git a/tests/intel/xe_debugfs.c b/tests/intel/xe_debugfs.c index facb55854..4075b173a 100644 --- a/tests/intel/xe_debugfs.c +++ b/tests/intel/xe_debugfs.c @@ -296,7 +296,6 @@ static void test_tile_dir(struct xe_device *xe_dev, uint8_t tile) */ static void test_info_read(struct xe_device *xe_dev) { - uint16_t devid = intel_get_drm_devid(xe_dev->fd); struct drm_xe_query_config *config; const char *name = "info"; bool failed = false; @@ -329,7 +328,7 @@ static void test_info_read(struct xe_device *xe_dev) failed = true; } - if (intel_gen_legacy(devid) < 20) { + if (intel_gen(xe_dev->fd) < 20) { val = -1; switch (config->info[DRM_XE_QUERY_CONFIG_VA_BITS]) { diff --git a/tests/intel/xe_eudebug_online.c b/tests/intel/xe_eudebug_online.c index f64b12b3f..961cf5afc 100644 --- a/tests/intel/xe_eudebug_online.c +++ b/tests/intel/xe_eudebug_online.c @@ -400,9 +400,7 @@ static uint64_t eu_ctl(int debugfd, uint64_t client, static bool intel_gen_needs_resume_wa(int fd) { - const uint32_t id = intel_get_drm_devid(fd); - - return intel_gen_legacy(id) == 12 && intel_graphics_ver_legacy(id) < IP_VER(12, 55); + return intel_gen(fd) == 12 && intel_graphics_ver(fd) < IP_VER(12, 55); } static uint64_t eu_ctl_resume(int fd, int debugfd, uint64_t client, @@ -1222,8 +1220,6 @@ static void run_online_client(struct xe_eudebug_client *c) static bool intel_gen_has_lockstep_eus(int fd) { - const uint32_t id = intel_get_drm_devid(fd); - /* * Lockstep (or in some parlance, fused) EUs are pair of EUs * that work in sync, supposedly same clock and same control flow. @@ -1231,7 +1227,7 @@ static bool intel_gen_has_lockstep_eus(int fd) * excepted into SIP. In this level, the hardware has only one attention * thread bit for units. PVC is the first one without lockstepping. */ - return !(intel_graphics_ver_legacy(id) == IP_VER(12, 60) || intel_gen_legacy(id) >= 20); + return !(intel_graphics_ver(fd) == IP_VER(12, 60) || intel_gen(fd) >= 20); } static int query_attention_bitmask_size(int fd, int gt) diff --git a/tests/intel/xe_exec_multi_queue.c b/tests/intel/xe_exec_multi_queue.c index 1d416efc9..bf09efcc3 100644 --- a/tests/intel/xe_exec_multi_queue.c +++ b/tests/intel/xe_exec_multi_queue.c @@ -1047,7 +1047,7 @@ int igt_main() igt_fixture() { fd = drm_open_driver(DRIVER_XE); - igt_require(intel_graphics_ver_legacy(intel_get_drm_devid(fd)) >= IP_VER(35, 0)); + igt_require(intel_graphics_ver(fd) >= IP_VER(35, 0)); } igt_subtest_f("sanity") diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c index 498ab42b7..9e6a96aa8 100644 --- a/tests/intel/xe_exec_store.c +++ b/tests/intel/xe_exec_store.c @@ -55,8 +55,7 @@ static void store_dword_batch(struct data *data, uint64_t addr, int value) data->addr = batch_addr; } -static void cond_batch(struct data *data, uint64_t addr, int value, - uint16_t dev_id) +static void cond_batch(int fd, struct data *data, uint64_t addr, int value) { int b; uint64_t batch_offset = (char *)&(data->batch) - (char *)data; @@ -69,7 +68,7 @@ static void cond_batch(struct data *data, uint64_t addr, int value, data->batch[b++] = sdi_addr; data->batch[b++] = sdi_addr >> 32; - if (intel_graphics_ver_legacy(dev_id) >= IP_VER(20, 0)) + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) data->batch[b++] = MI_MEM_FENCE | MI_WRITE_FENCE; data->batch[b++] = MI_CONDITIONAL_BATCH_BUFFER_END | MI_DO_COMPARE | 5 << 12 | 2; @@ -112,8 +111,7 @@ static void persistance_batch(struct data *data, uint64_t addr) * SUBTEST: basic-all * Description: Test to verify store dword on all available engines. */ -static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instance *eci, - uint16_t dev_id) +static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instance *eci) { struct drm_xe_sync sync[2] = { { .type = DRM_XE_SYNC_TYPE_SYNCOBJ, .flags = DRM_XE_SYNC_FLAG_SIGNAL, }, @@ -156,7 +154,7 @@ static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instanc else if (inst_type == COND_BATCH) { /* A random value where it stops at the below value. */ value = 20 + random() % 10; - cond_batch(data, addr, value, dev_id); + cond_batch(fd, data, addr, value); } else igt_assert_f(inst_type < 2, "Entered wrong inst_type.\n"); @@ -416,23 +414,21 @@ int igt_main() { struct drm_xe_engine_class_instance *hwe; int fd; - uint16_t dev_id; struct drm_xe_engine *engine; igt_fixture() { fd = drm_open_driver(DRIVER_XE); xe_device_get(fd); - dev_id = intel_get_drm_devid(fd); } igt_subtest("basic-store") { engine = xe_engine(fd, 1); - basic_inst(fd, STORE, &engine->instance, dev_id); + basic_inst(fd, STORE, &engine->instance); } igt_subtest("basic-cond-batch") { engine = xe_engine(fd, 1); - basic_inst(fd, COND_BATCH, &engine->instance, dev_id); + basic_inst(fd, COND_BATCH, &engine->instance); } igt_subtest_with_dynamic("basic-all") { @@ -441,7 +437,7 @@ int igt_main() xe_engine_class_string(hwe->engine_class), hwe->engine_instance, hwe->gt_id); - basic_inst(fd, STORE, hwe, dev_id); + basic_inst(fd, STORE, hwe); } } diff --git a/tests/intel/xe_fault_injection.c b/tests/intel/xe_fault_injection.c index 8adc5c15a..57c5a5579 100644 --- a/tests/intel/xe_fault_injection.c +++ b/tests/intel/xe_fault_injection.c @@ -486,12 +486,12 @@ vm_bind_fail(int fd, const char pci_slot[], const char function_name[]) * @xe_oa_alloc_regs: xe_oa_alloc_regs */ static void -oa_add_config_fail(int fd, int sysfs, int devid, +oa_add_config_fail(int fd, int sysfs, const char pci_slot[], const char function_name[]) { char path[512]; uint64_t config_id; -#define SAMPLE_MUX_REG (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0) ? \ +#define SAMPLE_MUX_REG (intel_graphics_ver(fd) >= IP_VER(20, 0) ? \ 0x13000 /* PES* */ : 0x9888 /* NOA_WRITE */) uint32_t mux_regs[] = { SAMPLE_MUX_REG, 0x0 }; @@ -557,7 +557,6 @@ int igt_main_args("I:", NULL, help_str, opt_handler, NULL) int fd, sysfs; struct drm_xe_engine_class_instance *hwe; struct fault_injection_params fault_params; - static uint32_t devid; char pci_slot[NAME_MAX]; bool is_vf_device; const struct section { @@ -627,7 +626,6 @@ int igt_main_args("I:", NULL, help_str, opt_handler, NULL) igt_fixture() { igt_require(fail_function_injection_enabled()); fd = drm_open_driver(DRIVER_XE); - devid = intel_get_drm_devid(fd); sysfs = igt_sysfs_open(fd); igt_device_get_pci_slot_name(fd, pci_slot); setup_injection_fault(&default_fault_params); @@ -659,7 +657,7 @@ int igt_main_args("I:", NULL, help_str, opt_handler, NULL) for (const struct section *s = oa_add_config_fail_functions; s->name; s++) igt_subtest_f("oa-add-config-fail-%s", s->name) - oa_add_config_fail(fd, sysfs, devid, pci_slot, s->name); + oa_add_config_fail(fd, sysfs, pci_slot, s->name); igt_fixture() { igt_kmod_unbind("xe", pci_slot); diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c index 5c112351f..e37d00d2c 100644 --- a/tests/intel/xe_intel_bb.c +++ b/tests/intel/xe_intel_bb.c @@ -710,7 +710,7 @@ static void do_intel_bb_blit(struct buf_ops *bops, int loops, uint32_t tiling) int i, fails = 0, xe = buf_ops_get_fd(bops); /* We'll fix it for gen2/3 later. */ - igt_require(intel_gen_legacy(intel_get_drm_devid(xe)) > 3); + igt_require(intel_gen(xe) > 3); for (i = 0; i < loops; i++) fails += __do_intel_bb_blit(bops, tiling); @@ -878,10 +878,9 @@ static int render(struct buf_ops *bops, uint32_t tiling, int xe = buf_ops_get_fd(bops); uint32_t fails = 0; char name[128]; - uint32_t devid = intel_get_drm_devid(xe); igt_render_copyfunc_t render_copy = NULL; - igt_debug("%s() gen: %d\n", __func__, intel_gen_legacy(devid)); + igt_debug("%s() gen: %d\n", __func__, intel_gen(xe)); ibb = intel_bb_create(xe, PAGE_SIZE); @@ -1041,7 +1040,7 @@ int igt_main_args("dpib", NULL, help_str, opt_handler, NULL) do_intel_bb_blit(bops, 3, I915_TILING_X); igt_subtest("intel-bb-blit-y") { - igt_require(intel_gen_legacy(intel_get_drm_devid(xe)) >= 6); + igt_require(intel_gen(xe) >= 6); do_intel_bb_blit(bops, 3, I915_TILING_Y); } diff --git a/tests/intel/xe_multigpu_svm.c b/tests/intel/xe_multigpu_svm.c index ab800476e..2c6f81a10 100644 --- a/tests/intel/xe_multigpu_svm.c +++ b/tests/intel/xe_multigpu_svm.c @@ -396,7 +396,6 @@ static void batch_init(int fd, uint32_t vm, uint64_t src_addr, uint64_t batch_addr; void *batch; uint32_t *cmd; - uint16_t dev_id = intel_get_drm_devid(fd); uint32_t mocs_index = intel_get_uc_mocs_index(fd); int i = 0; @@ -412,7 +411,7 @@ static void batch_init(int fd, uint32_t vm, uint64_t src_addr, cmd[i++] = upper_32_bits(src_addr); cmd[i++] = lower_32_bits(dst_addr); cmd[i++] = upper_32_bits(dst_addr); - if (intel_graphics_ver_legacy(dev_id) >= IP_VER(20, 0)) { + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { cmd[i++] = mocs_index << XE2_MEM_COPY_SRC_MOCS_SHIFT | mocs_index; } else { cmd[i++] = mocs_index << GEN12_MEM_COPY_MOCS_SHIFT | mocs_index; diff --git a/tests/intel/xe_pat.c b/tests/intel/xe_pat.c index 96302ad3a..96d544160 100644 --- a/tests/intel/xe_pat.c +++ b/tests/intel/xe_pat.c @@ -119,14 +119,13 @@ static int xe_fetch_pat_sw_config(int fd, struct intel_pat_cache *pat_sw_config) */ static void pat_sanity(int fd) { - uint16_t dev_id = intel_get_drm_devid(fd); struct intel_pat_cache pat_sw_config = {}; int32_t parsed; bool has_uc_comp = false, has_wt = false; parsed = xe_fetch_pat_sw_config(fd, &pat_sw_config); - if (intel_graphics_ver_legacy(dev_id) >= IP_VER(20, 0)) { + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { for (int i = 0; i < parsed; i++) { uint32_t pat = pat_sw_config.entries[i].pat; if (pat_sw_config.entries[i].rsvd) @@ -898,7 +897,6 @@ static void display_vs_wb_transient(int fd) 3, /* UC (baseline) */ 6, /* L3:XD (uncompressed) */ }; - uint32_t devid = intel_get_drm_devid(fd); igt_render_copyfunc_t render_copy = NULL; igt_crc_t ref_crc = {}, crc = {}; igt_plane_t *primary; @@ -914,7 +912,7 @@ static void display_vs_wb_transient(int fd) int bpp = 32; int i; - igt_require(intel_get_device_info(devid)->graphics_ver >= 20); + igt_require(intel_gen(fd) >= 20); render_copy = igt_get_render_copyfunc(fd); igt_require(render_copy); @@ -1015,10 +1013,8 @@ static uint8_t get_pat_idx_uc(int fd, bool *compressed) static uint8_t get_pat_idx_wt(int fd, bool *compressed) { - uint16_t dev_id = intel_get_drm_devid(fd); - if (compressed) - *compressed = intel_get_device_info(dev_id)->graphics_ver >= 20; + *compressed = intel_gen(fd) >= 20; return intel_get_pat_idx_wt(fd); } @@ -1328,7 +1324,7 @@ int igt_main_args("V", NULL, help_str, opt_handler, NULL) bo_comp_disable_bind(fd); igt_subtest_with_dynamic("pat-index-xelp") { - igt_require(intel_graphics_ver_legacy(dev_id) <= IP_VER(12, 55)); + igt_require(intel_graphics_ver(fd) <= IP_VER(12, 55)); subtest_pat_index_modes_with_regions(fd, xelp_pat_index_modes, ARRAY_SIZE(xelp_pat_index_modes)); } @@ -1346,10 +1342,10 @@ int igt_main_args("V", NULL, help_str, opt_handler, NULL) } igt_subtest_with_dynamic("pat-index-xe2") { - igt_require(intel_get_device_info(dev_id)->graphics_ver >= 20); + igt_require(intel_gen(fd) >= 20); igt_assert(HAS_FLATCCS(dev_id)); - if (intel_graphics_ver_legacy(dev_id) == IP_VER(20, 1)) + if (intel_graphics_ver(fd) == IP_VER(20, 1)) subtest_pat_index_modes_with_regions(fd, bmg_g21_pat_index_modes, ARRAY_SIZE(bmg_g21_pat_index_modes)); else diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c index 318a9994a..ae505a5d7 100644 --- a/tests/intel/xe_query.c +++ b/tests/intel/xe_query.c @@ -380,7 +380,7 @@ test_query_gt_topology(int fd) } /* sanity check EU type */ - if (IS_PONTEVECCHIO(dev_id) || intel_gen_legacy(dev_id) >= 20) { + if (IS_PONTEVECCHIO(dev_id) || intel_gen(fd) >= 20) { igt_assert(topo_types & (1 << DRM_XE_TOPO_SIMD16_EU_PER_DSS)); igt_assert_eq(topo_types & (1 << DRM_XE_TOPO_EU_PER_DSS), 0); } else { @@ -428,7 +428,7 @@ test_query_gt_topology_l3_bank_mask(int fd) } igt_info(" count: %d\n", count); - if (intel_get_device_info(dev_id)->graphics_ver < 20) { + if (intel_gen(fd) < 20) { igt_assert_lt(0, count); } diff --git a/tests/intel/xe_render_copy.c b/tests/intel/xe_render_copy.c index 0a6ae9ca2..a3976b5f1 100644 --- a/tests/intel/xe_render_copy.c +++ b/tests/intel/xe_render_copy.c @@ -136,7 +136,7 @@ static int compare_bufs(struct intel_buf *buf1, struct intel_buf *buf2, static bool buf_is_aux_compressed(struct buf_ops *bops, struct intel_buf *buf) { int xe = buf_ops_get_fd(bops); - unsigned int gen = intel_gen_legacy(buf_ops_get_devid(bops)); + unsigned int gen = intel_gen(buf_ops_get_fd(bops)); uint32_t ccs_size; uint8_t *ptr; bool is_compressed = false; -- 2.43.0 ^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH v2 3/3] intel/xe: use fd-based graphics/IP version helpers 2026-01-22 7:15 ` [PATCH v2 3/3] intel/xe: use fd-based graphics/IP version helpers Xin Wang @ 2026-02-04 18:56 ` Matt Roper 2026-02-25 8:51 ` Wang, X 0 siblings, 1 reply; 12+ messages in thread From: Matt Roper @ 2026-02-04 18:56 UTC (permalink / raw) To: Xin Wang Cc: igt-dev, Kamil Konieczny, Zbigniew Kempczyński, Ravi Kumar V On Thu, Jan 22, 2026 at 07:15:30AM +0000, Xin Wang wrote: > Switch Xe‑related libraries and tests to use fd‑based intel_gen() and > intel_graphics_ver() instead of PCI ID lookups, keeping behavior aligned > with Xe IP disaggregation. You might want to mention the specific special cases that aren't transitioned over and will remain on pciid-based lookup so that reviewers can grep the resulting tree and make sure nothing was missed. I just did a grep and it seems like there are still quite a few tests using the pciid-based lookup which probably don't need to be; those might be oversights: $ grep -Irl intel_gen_legacy tests/ tests/prime_vgem.c tests/intel/gem_exec_fair.c tests/intel/gen7_exec_parse.c tests/intel/gem_linear_blits.c tests/intel/gem_evict_alignment.c tests/intel/gem_exec_store.c tests/intel/gem_exec_flush.c tests/intel/i915_getparams_basic.c tests/intel/i915_pm_rpm.c tests/intel/gem_mmap_gtt.c tests/intel/gem_softpin.c tests/intel/gem_sync.c tests/intel/gem_tiled_fence_blits.c tests/intel/gem_close_race.c tests/intel/gem_tiling_max_stride.c tests/intel/gem_ctx_isolation.c tests/intel/gem_exec_nop.c tests/intel/gem_evict_everything.c tests/intel/perf_pmu.c tests/intel/sysfs_timeslice_duration.c tests/intel/gem_ctx_shared.c tests/intel/gem_ctx_engines.c tests/intel/gem_exec_fence.c tests/intel/gem_exec_balancer.c tests/intel/gem_exec_latency.c tests/intel/gem_exec_schedule.c tests/intel/gem_gtt_hog.c tests/intel/gem_blits.c tests/intel/gem_exec_await.c tests/intel/gem_exec_capture.c tests/intel/gem_ringfill.c tests/intel/perf.c tests/intel/gem_exec_params.c tests/intel/sysfs_preempt_timeout.c tests/intel/gem_exec_suspend.c tests/intel/gem_exec_reloc.c tests/intel/gem_exec_whisper.c tests/intel/gem_exec_gttfill.c tests/intel/gem_exec_parallel.c tests/intel/gem_watchdog.c tests/intel/gem_exec_big.c tests/intel/gem_set_tiling_vs_blt.c tests/intel/gem_render_copy.c tests/intel/gen9_exec_parse.c tests/intel/gem_vm_create.c tests/intel/i915_pm_rc6_residency.c tests/intel/i915_module_load.c tests/intel/gem_streaming_writes.c tests/intel/gem_fenced_exec_thrash.c tests/intel/gem_workarounds.c tests/intel/gem_ctx_create.c tests/intel/i915_pm_sseu.c tests/intel/gem_concurrent_all.c tests/intel/gem_ctx_sseu.c tests/intel/gem_read_read_speed.c tests/intel/api_intel_bb.c tests/intel/gem_bad_reloc.c tests/intel/gem_media_vme.c tests/intel/gem_exec_async.c tests/intel/gem_userptr_blits.c tests/intel/gem_eio.c An alternate approach would be to structure this series as: - Create the "legacy" functions as a duplicate of the existing pciid-based functions and explicitly convert the special cases that we expect to remain on PCI ID. - Change the signature of intel_graphics_ver / intel_gen and all remaining callsites. That will ensure everything gets converted over (otherwise there will be a build failure because anything not converted will be trying to use the wrong function signature). It also makes it a little bit easier to directly review the special cases and make sure they all truly need to be special cases. As mentioned on the first patch, if you're using something like Coccinelle to do these conversions, providing the semantic patch(es) used in the commit message would be helpful. Matt > > Cc: Kamil Konieczny <kamil.konieczny@linux.intel.com> > Cc: Matt Roper <matthew.d.roper@intel.com> > Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com> > Cc: Ravi Kumar V <ravi.kumar.vodapalli@intel.com> > Signed-off-by: Xin Wang <x.wang@intel.com> > --- > lib/gpgpu_shader.c | 5 ++- > lib/gpu_cmds.c | 21 ++++++----- > lib/intel_batchbuffer.c | 14 +++----- > lib/intel_blt.c | 21 +++++------ > lib/intel_blt.h | 2 +- > lib/intel_bufops.c | 10 +++--- > lib/intel_common.c | 2 +- > lib/intel_compute.c | 6 ++-- > lib/intel_mocs.c | 48 ++++++++++++++------------ > lib/intel_pat.c | 19 +++++----- > lib/rendercopy_gen9.c | 22 ++++++------ > lib/xe/xe_legacy.c | 2 +- > lib/xe/xe_spin.c | 4 +-- > lib/xe/xe_sriov_provisioning.c | 4 +-- > tests/intel/api_intel_allocator.c | 2 +- > tests/intel/kms_ccs.c | 13 +++---- > tests/intel/kms_fbcon_fbt.c | 2 +- > tests/intel/kms_frontbuffer_tracking.c | 6 ++-- > tests/intel/kms_pipe_stress.c | 4 +-- > tests/intel/xe_ccs.c | 16 ++++----- > tests/intel/xe_compute.c | 8 ++--- > tests/intel/xe_copy_basic.c | 6 ++-- > tests/intel/xe_debugfs.c | 3 +- > tests/intel/xe_eudebug_online.c | 8 ++--- > tests/intel/xe_exec_multi_queue.c | 2 +- > tests/intel/xe_exec_store.c | 18 ++++------ > tests/intel/xe_fault_injection.c | 8 ++--- > tests/intel/xe_intel_bb.c | 7 ++-- > tests/intel/xe_multigpu_svm.c | 3 +- > tests/intel/xe_pat.c | 16 ++++----- > tests/intel/xe_query.c | 4 +-- > tests/intel/xe_render_copy.c | 2 +- > 32 files changed, 135 insertions(+), 173 deletions(-) > > diff --git a/lib/gpgpu_shader.c b/lib/gpgpu_shader.c > index 767bddb7b..09a7f5c5e 100644 > --- a/lib/gpgpu_shader.c > +++ b/lib/gpgpu_shader.c > @@ -274,11 +274,10 @@ void gpgpu_shader_exec(struct intel_bb *ibb, > struct gpgpu_shader *gpgpu_shader_create(int fd) > { > struct gpgpu_shader *shdr = calloc(1, sizeof(struct gpgpu_shader)); > - const struct intel_device_info *info; > + unsigned ip_ver = intel_graphics_ver(fd); > > igt_assert(shdr); > - info = intel_get_device_info(intel_get_drm_devid(fd)); > - shdr->gen_ver = 100 * info->graphics_ver + info->graphics_rel; > + shdr->gen_ver = 100 * (ip_ver >> 8) + (ip_ver & 0xff); > shdr->max_size = 16 * 4; > shdr->code = malloc(4 * shdr->max_size); > shdr->labels = igt_map_create(igt_map_hash_32, igt_map_equal_32); > diff --git a/lib/gpu_cmds.c b/lib/gpu_cmds.c > index ab46fe0de..6842af1ad 100644 > --- a/lib/gpu_cmds.c > +++ b/lib/gpu_cmds.c > @@ -313,14 +313,13 @@ fill_binding_table(struct intel_bb *ibb, struct intel_buf *buf) > { > uint32_t binding_table_offset; > uint32_t *binding_table; > - uint32_t devid = intel_get_drm_devid(ibb->fd); > > intel_bb_ptr_align(ibb, 64); > binding_table_offset = intel_bb_offset(ibb); > binding_table = intel_bb_ptr(ibb); > intel_bb_ptr_add(ibb, 64); > > - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { > + if (intel_graphics_ver(ibb->fd) >= IP_VER(20, 0)) { > /* > * Up until now, SURFACEFORMAT_R8_UNROM was used regardless of the 'bpp' value. > * For bpp 32 this results in a surface that is 4x narrower than expected. However > @@ -342,13 +341,13 @@ fill_binding_table(struct intel_bb *ibb, struct intel_buf *buf) > igt_assert_f(false, > "Surface state for bpp = %u not implemented", > buf->bpp); > - } else if (intel_graphics_ver_legacy(devid) >= IP_VER(12, 50)) { > + } else if (intel_graphics_ver(ibb->fd) >= IP_VER(12, 50)) { > binding_table[0] = xehp_fill_surface_state(ibb, buf, > SURFACEFORMAT_R8_UNORM, 1); > - } else if (intel_graphics_ver_legacy(devid) >= IP_VER(9, 0)) { > + } else if (intel_graphics_ver(ibb->fd) >= IP_VER(9, 0)) { > binding_table[0] = gen9_fill_surface_state(ibb, buf, > SURFACEFORMAT_R8_UNORM, 1); > - } else if (intel_graphics_ver_legacy(devid) >= IP_VER(8, 0)) { > + } else if (intel_graphics_ver(ibb->fd) >= IP_VER(8, 0)) { > binding_table[0] = gen8_fill_surface_state(ibb, buf, > SURFACEFORMAT_R8_UNORM, 1); > } else { > @@ -867,7 +866,7 @@ gen_emit_media_object(struct intel_bb *ibb, > /* inline data (xoffset, yoffset) */ > intel_bb_out(ibb, xoffset); > intel_bb_out(ibb, yoffset); > - if (intel_gen_legacy(ibb->devid) >= 8 && !IS_CHERRYVIEW(ibb->devid)) > + if (intel_gen(ibb->fd) >= 8 && !IS_CHERRYVIEW(ibb->devid)) > gen8_emit_media_state_flush(ibb); > } > > @@ -1011,7 +1010,7 @@ void > xehp_emit_state_compute_mode(struct intel_bb *ibb, bool vrt) > { > > - uint32_t dword_length = intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0); > + uint32_t dword_length = intel_graphics_ver(ibb->fd) >= IP_VER(20, 0); > > intel_bb_out(ibb, XEHP_STATE_COMPUTE_MODE | dword_length); > intel_bb_out(ibb, vrt ? (0x10001) << 10 : 0); /* Enable variable number of threads */ > @@ -1042,7 +1041,7 @@ xehp_emit_state_base_address(struct intel_bb *ibb) > intel_bb_out(ibb, 0); > > /* stateless data port */ > - tmp = intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0) ? 0 : BASE_ADDRESS_MODIFY; > + tmp = intel_graphics_ver(ibb->fd) >= IP_VER(20, 0) ? 0 : BASE_ADDRESS_MODIFY; > intel_bb_out(ibb, 0 | tmp); //dw3 > > /* surface */ > @@ -1068,7 +1067,7 @@ xehp_emit_state_base_address(struct intel_bb *ibb) > /* dynamic state buffer size */ > intel_bb_out(ibb, ALIGN(ibb->size, 1 << 12) | 1); //dw13 > /* indirect object buffer size */ > - if (intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0)) //dw14 > + if (intel_graphics_ver(ibb->fd) >= IP_VER(20, 0)) //dw14 > intel_bb_out(ibb, 0); > else > intel_bb_out(ibb, 0xfffff000 | 1); > @@ -1115,7 +1114,7 @@ xehp_emit_compute_walk(struct intel_bb *ibb, > else > mask = (1 << mask) - 1; > > - dword_length = intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0) ? 0x26 : 0x25; > + dword_length = intel_graphics_ver(ibb->fd) >= IP_VER(20, 0) ? 0x26 : 0x25; > intel_bb_out(ibb, XEHP_COMPUTE_WALKER | dword_length); > > intel_bb_out(ibb, 0); /* debug object */ //dw1 > @@ -1155,7 +1154,7 @@ xehp_emit_compute_walk(struct intel_bb *ibb, > intel_bb_out(ibb, 0); //dw16 > intel_bb_out(ibb, 0); //dw17 > > - if (intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0)) //Xe2:dw18 > + if (intel_graphics_ver(ibb->fd) >= IP_VER(20, 0)) //Xe2:dw18 > intel_bb_out(ibb, 0); > /* Interface descriptor data */ > for (int i = 0; i < 8; i++) { //dw18-25 (Xe2:dw19-26) > diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c > index f418e7981..4f52e7b6a 100644 > --- a/lib/intel_batchbuffer.c > +++ b/lib/intel_batchbuffer.c > @@ -329,11 +329,7 @@ void igt_blitter_copy(int fd, > uint32_t dst_x, uint32_t dst_y, > uint64_t dst_size) > { > - uint32_t devid; > - > - devid = intel_get_drm_devid(fd); > - > - if (intel_graphics_ver_legacy(devid) >= IP_VER(12, 60)) > + if (intel_graphics_ver(fd) >= IP_VER(12, 60)) > igt_blitter_fast_copy__raw(fd, ahnd, ctx, NULL, > src_handle, src_delta, > src_stride, src_tiling, > @@ -410,7 +406,7 @@ void igt_blitter_src_copy(int fd, > uint32_t batch_handle; > uint32_t src_pitch, dst_pitch; > uint32_t dst_reloc_offset, src_reloc_offset; > - uint32_t gen = intel_gen_legacy(intel_get_drm_devid(fd)); > + uint32_t gen = intel_gen(fd); > uint64_t batch_offset, src_offset, dst_offset; > const bool has_64b_reloc = gen >= 8; > int i = 0; > @@ -669,7 +665,7 @@ igt_render_copyfunc_t igt_get_render_copyfunc(int fd) > copy = mtl_render_copyfunc; > else if (IS_DG2(devid)) > copy = gen12p71_render_copyfunc; > - else if (intel_gen_legacy(devid) >= 20) > + else if (intel_gen(fd) >= 20) > copy = xe2_render_copyfunc; > else if (IS_GEN12(devid)) > copy = gen12_render_copyfunc; > @@ -911,7 +907,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg, > igt_assert(ibb); > > ibb->devid = intel_get_drm_devid(fd); > - ibb->gen = intel_gen_legacy(ibb->devid); > + ibb->gen = intel_gen(fd); > ibb->ctx = ctx; > > ibb->fd = fd; > @@ -1089,7 +1085,7 @@ struct intel_bb *intel_bb_create_with_allocator(int fd, uint32_t ctx, uint32_t v > > static bool aux_needs_softpin(int fd) > { > - return intel_gen_legacy(intel_get_drm_devid(fd)) >= 12; > + return intel_gen(fd) >= 12; > } > > static bool has_ctx_cfg(struct intel_bb *ibb) > diff --git a/lib/intel_blt.c b/lib/intel_blt.c > index 673f204b0..7ae04fccd 100644 > --- a/lib/intel_blt.c > +++ b/lib/intel_blt.c > @@ -997,7 +997,7 @@ uint64_t emit_blt_block_copy(int fd, > uint64_t bb_pos, > bool emit_bbe) > { > - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > + unsigned int ip_ver = intel_graphics_ver(fd); > struct gen12_block_copy_data data = {}; > struct gen12_block_copy_data_ext dext = {}; > uint64_t dst_offset, src_offset, bb_offset; > @@ -1285,7 +1285,7 @@ uint64_t emit_blt_ctrl_surf_copy(int fd, > uint64_t bb_pos, > bool emit_bbe) > { > - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > + unsigned int ip_ver = intel_graphics_ver(fd); > union ctrl_surf_copy_data data = { }; > size_t data_sz; > uint64_t dst_offset, src_offset, bb_offset, alignment; > @@ -1705,7 +1705,7 @@ uint64_t emit_blt_fast_copy(int fd, > uint64_t bb_pos, > bool emit_bbe) > { > - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > + unsigned int ip_ver = intel_graphics_ver(fd); > struct gen12_fast_copy_data data = {}; > uint64_t dst_offset, src_offset, bb_offset; > uint32_t bbe = MI_BATCH_BUFFER_END; > @@ -1972,11 +1972,10 @@ void blt_mem_copy_init(int fd, struct blt_mem_copy_data *mem, > static void dump_bb_mem_copy_cmd(int fd, struct xe_mem_copy_data *data) > { > uint32_t *cmd = (uint32_t *) data; > - uint32_t devid = intel_get_drm_devid(fd); > > igt_info("BB details:\n"); > > - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { > + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { > igt_info(" dw00: [%08x] <client: 0x%x, opcode: 0x%x, length: %d> " > "[copy type: %d, mode: %d]\n", > cmd[0], data->dw00.xe2.client, data->dw00.xe2.opcode, > @@ -2006,7 +2005,7 @@ static void dump_bb_mem_copy_cmd(int fd, struct xe_mem_copy_data *data) > cmd[7], data->dw07.dst_address_lo); > igt_info(" dw08: [%08x] dst offset hi (0x%x)\n", > cmd[8], data->dw08.dst_address_hi); > - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { > + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { > igt_info(" dw09: [%08x] mocs <dst: 0x%x, src: 0x%x>\n", > cmd[9], data->dw09.xe2.dst_mocs, > data->dw09.xe2.src_mocs); > @@ -2025,7 +2024,6 @@ static uint64_t emit_blt_mem_copy(int fd, uint64_t ahnd, > uint64_t dst_offset, src_offset, shift; > uint32_t width, height, width_max, height_max, remain; > uint32_t bbe = MI_BATCH_BUFFER_END; > - uint32_t devid = intel_get_drm_devid(fd); > uint8_t *bb; > > if (mem->mode == MODE_BYTE) { > @@ -2049,7 +2047,7 @@ static uint64_t emit_blt_mem_copy(int fd, uint64_t ahnd, > width = mem->src.width; > height = mem->dst.height; > > - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { > + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { > data.dw00.xe2.client = 0x2; > data.dw00.xe2.opcode = 0x5a; > data.dw00.xe2.length = 8; > @@ -2231,7 +2229,6 @@ static void emit_blt_mem_set(int fd, uint64_t ahnd, > int b; > uint32_t *batch; > uint32_t value; > - uint32_t devid = intel_get_drm_devid(fd); > > dst_offset = get_offset_pat_index(ahnd, mem->dst.handle, mem->dst.size, > 0, mem->dst.pat_index); > @@ -2246,7 +2243,7 @@ static void emit_blt_mem_set(int fd, uint64_t ahnd, > batch[b++] = mem->dst.pitch - 1; > batch[b++] = dst_offset; > batch[b++] = dst_offset << 32; > - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) > + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) > batch[b++] = value | (mem->dst.mocs_index << 3); > else > batch[b++] = value | mem->dst.mocs_index; > @@ -2364,7 +2361,7 @@ blt_create_object(const struct blt_copy_data *blt, uint32_t region, > if (create_mapping && region != system_memory(blt->fd)) > flags |= DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM; > > - if (intel_gen_legacy(intel_get_drm_devid(blt->fd)) >= 20 && compression) { > + if (intel_gen(blt->fd) >= 20 && compression) { > pat_index = intel_get_pat_idx_uc_comp(blt->fd); > cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; > } > @@ -2590,7 +2587,7 @@ void blt_surface_get_flatccs_data(int fd, > cpu_caching = __xe_default_cpu_caching(fd, sysmem, 0); > ccs_bo_size = ALIGN(ccssize, xe_get_default_alignment(fd)); > > - if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 20 && obj->compression) { > + if (intel_gen(fd) >= 20 && obj->compression) { > comp_pat_index = intel_get_pat_idx_uc_comp(fd); > cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; > } > diff --git a/lib/intel_blt.h b/lib/intel_blt.h > index a98a34e95..feba94ebb 100644 > --- a/lib/intel_blt.h > +++ b/lib/intel_blt.h > @@ -52,7 +52,7 @@ > #include "igt.h" > #include "intel_cmds_info.h" > > -#define CCS_RATIO(fd) (intel_gen_legacy(intel_get_drm_devid(fd)) >= 20 ? 512 : 256) > +#define CCS_RATIO(fd) (intel_gen(fd) >= 20 ? 512 : 256) > #define GEN12_MEM_COPY_MOCS_SHIFT 25 > #define XE2_MEM_COPY_SRC_MOCS_SHIFT 28 > #define XE2_MEM_COPY_DST_MOCS_SHIFT 3 > diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c > index ea3742f1e..a2adbf9ef 100644 > --- a/lib/intel_bufops.c > +++ b/lib/intel_bufops.c > @@ -1063,7 +1063,7 @@ static void __intel_buf_init(struct buf_ops *bops, > } else { > uint16_t cpu_caching = __xe_default_cpu_caching(bops->fd, region, 0); > > - if (intel_gen_legacy(bops->devid) >= 20 && compression) > + if (intel_gen(bops->fd) >= 20 && compression) > cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; > > bo_size = ALIGN(bo_size, xe_get_default_alignment(bops->fd)); > @@ -1106,7 +1106,7 @@ void intel_buf_init(struct buf_ops *bops, > uint64_t region; > uint8_t pat_index = DEFAULT_PAT_INDEX; > > - if (compression && intel_gen_legacy(bops->devid) >= 20) > + if (compression && intel_gen(bops->fd) >= 20) > pat_index = intel_get_pat_idx_uc_comp(bops->fd); > > region = bops->driver == INTEL_DRIVER_I915 ? I915_SYSTEM_MEMORY : > @@ -1132,7 +1132,7 @@ void intel_buf_init_in_region(struct buf_ops *bops, > { > uint8_t pat_index = DEFAULT_PAT_INDEX; > > - if (compression && intel_gen_legacy(bops->devid) >= 20) > + if (compression && intel_gen(bops->fd) >= 20) > pat_index = intel_get_pat_idx_uc_comp(bops->fd); > > __intel_buf_init(bops, 0, buf, width, height, bpp, alignment, > @@ -1203,7 +1203,7 @@ void intel_buf_init_using_handle_and_size(struct buf_ops *bops, > igt_assert(handle); > igt_assert(size); > > - if (compression && intel_gen_legacy(bops->devid) >= 20) > + if (compression && intel_gen(bops->fd) >= 20) > pat_index = intel_get_pat_idx_uc_comp(bops->fd); > > __intel_buf_init(bops, handle, buf, width, height, bpp, alignment, > @@ -1758,7 +1758,7 @@ static struct buf_ops *__buf_ops_create(int fd, bool check_idempotency) > igt_assert(bops); > > devid = intel_get_drm_devid(fd); > - generation = intel_gen_legacy(devid); > + generation = intel_gen(fd); > > /* Predefined settings: see intel_device_info? */ > for (int i = 0; i < ARRAY_SIZE(buf_ops_arr); i++) { > diff --git a/lib/intel_common.c b/lib/intel_common.c > index cd1019bfe..407d53f77 100644 > --- a/lib/intel_common.c > +++ b/lib/intel_common.c > @@ -91,7 +91,7 @@ bool is_intel_region_compressible(int fd, uint64_t region) > return true; > > /* Integrated Xe2+ supports compression on system memory */ > - if (intel_gen_legacy(devid) >= 20 && !is_dgfx && is_intel_system_region(fd, region)) > + if (intel_gen(fd) >= 20 && !is_dgfx && is_intel_system_region(fd, region)) > return true; > > /* Discrete supports compression on vram */ > diff --git a/lib/intel_compute.c b/lib/intel_compute.c > index 1734c1649..66156d194 100644 > --- a/lib/intel_compute.c > +++ b/lib/intel_compute.c > @@ -2284,7 +2284,7 @@ static bool __run_intel_compute_kernel(int fd, > struct user_execenv *user, > enum execenv_alloc_prefs alloc_prefs) > { > - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > + unsigned int ip_ver = intel_graphics_ver(fd); > int batch; > const struct intel_compute_kernels *kernel_entries = intel_compute_square_kernels, *kernels; > enum intel_driver driver = get_intel_driver(fd); > @@ -2749,7 +2749,7 @@ static bool __run_intel_compute_kernel_preempt(int fd, > bool threadgroup_preemption, > enum execenv_alloc_prefs alloc_prefs) > { > - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > + unsigned int ip_ver = ip_ver = intel_graphics_ver(fd); > int batch; > const struct intel_compute_kernels *kernel_entries = intel_compute_square_kernels, *kernels; > enum intel_driver driver = get_intel_driver(fd); > @@ -2803,7 +2803,7 @@ static bool __run_intel_compute_kernel_preempt(int fd, > */ > bool xe_kernel_preempt_check(int fd, enum xe_compute_preempt_type required_preempt) > { > - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > + unsigned int ip_ver = ip_ver = intel_graphics_ver(fd); > int batch = find_preempt_batch(ip_ver); > > if (batch < 0) { > diff --git a/lib/intel_mocs.c b/lib/intel_mocs.c > index f21c2bf09..b9ea43c7c 100644 > --- a/lib/intel_mocs.c > +++ b/lib/intel_mocs.c > @@ -27,8 +27,8 @@ struct drm_intel_mocs_index { > > static void get_mocs_index(int fd, struct drm_intel_mocs_index *mocs) > { > - uint16_t devid = intel_get_drm_devid(fd); > - unsigned int ip_ver = intel_graphics_ver_legacy(devid); > + uint16_t devid; > + unsigned int ip_ver = intel_graphics_ver(fd); > > /* > * Gen >= 12 onwards don't have a setting for PTE, > @@ -42,26 +42,29 @@ static void get_mocs_index(int fd, struct drm_intel_mocs_index *mocs) > mocs->wb_index = 4; > mocs->displayable_index = 1; > mocs->defer_to_pat_index = 0; > - } else if (IS_METEORLAKE(devid)) { > - mocs->uc_index = 5; > - mocs->wb_index = 1; > - mocs->displayable_index = 14; > - } else if (IS_DG2(devid)) { > - mocs->uc_index = 1; > - mocs->wb_index = 3; > - mocs->displayable_index = 3; > - } else if (IS_DG1(devid)) { > - mocs->uc_index = 1; > - mocs->wb_index = 5; > - mocs->displayable_index = 5; > - } else if (ip_ver >= IP_VER(12, 0)) { > - mocs->uc_index = 3; > - mocs->wb_index = 2; > - mocs->displayable_index = 61; > } else { > - mocs->uc_index = I915_MOCS_PTE; > - mocs->wb_index = I915_MOCS_CACHED; > - mocs->displayable_index = I915_MOCS_PTE; > + devid = intel_get_drm_devid(fd); > + if (IS_METEORLAKE(devid)) { > + mocs->uc_index = 5; > + mocs->wb_index = 1; > + mocs->displayable_index = 14; > + } else if (IS_DG2(devid)) { > + mocs->uc_index = 1; > + mocs->wb_index = 3; > + mocs->displayable_index = 3; > + } else if (IS_DG1(devid)) { > + mocs->uc_index = 1; > + mocs->wb_index = 5; > + mocs->displayable_index = 5; > + } else if (ip_ver >= IP_VER(12, 0)) { > + mocs->uc_index = 3; > + mocs->wb_index = 2; > + mocs->displayable_index = 61; > + } else { > + mocs->uc_index = I915_MOCS_PTE; > + mocs->wb_index = I915_MOCS_CACHED; > + mocs->displayable_index = I915_MOCS_PTE; > + } > } > } > > @@ -124,9 +127,8 @@ uint8_t intel_get_displayable_mocs_index(int fd) > uint8_t intel_get_defer_to_pat_mocs_index(int fd) > { > struct drm_intel_mocs_index mocs; > - uint16_t dev_id = intel_get_drm_devid(fd); > > - igt_assert(intel_gen_legacy(dev_id) >= 20); > + igt_assert(intel_gen(fd) >= 20); > > get_mocs_index(fd, &mocs); > > diff --git a/lib/intel_pat.c b/lib/intel_pat.c > index 9a61c2a45..9bb4800b6 100644 > --- a/lib/intel_pat.c > +++ b/lib/intel_pat.c > @@ -96,14 +96,12 @@ int32_t xe_get_pat_sw_config(int drm_fd, struct intel_pat_cache *xe_pat_cache) > > static void intel_get_pat_idx(int fd, struct intel_pat_cache *pat) > { > - uint16_t dev_id = intel_get_drm_devid(fd); > - > - if (intel_graphics_ver_legacy(dev_id) == IP_VER(35, 11)) { > + if (intel_graphics_ver(fd) == IP_VER(35, 11)) { > pat->uc = 3; > pat->wb = 2; > pat->max_index = 31; > - } else if (intel_get_device_info(dev_id)->graphics_ver == 30 || > - intel_get_device_info(dev_id)->graphics_ver == 20) { > + } else if (intel_gen(fd) == 30 || > + intel_gen(fd) == 20) { > pat->uc = 3; > pat->wt = 15; /* Compressed + WB-transient */ > pat->wb = 2; > @@ -111,19 +109,19 @@ static void intel_get_pat_idx(int fd, struct intel_pat_cache *pat) > pat->max_index = 31; > > /* Wa_16023588340: CLOS3 entries at end of table are unusable */ > - if (intel_graphics_ver_legacy(dev_id) == IP_VER(20, 1)) > + if (intel_graphics_ver(fd) == IP_VER(20, 1)) > pat->max_index -= 4; > - } else if (IS_METEORLAKE(dev_id)) { > + } else if (IS_METEORLAKE(intel_get_drm_devid(fd))) { > pat->uc = 2; > pat->wt = 1; > pat->wb = 3; > pat->max_index = 3; > - } else if (IS_PONTEVECCHIO(dev_id)) { > + } else if (IS_PONTEVECCHIO(intel_get_drm_devid(fd))) { > pat->uc = 0; > pat->wt = 2; > pat->wb = 3; > pat->max_index = 7; > - } else if (intel_graphics_ver_legacy(dev_id) <= IP_VER(12, 60)) { > + } else if (intel_graphics_ver(fd) <= IP_VER(12, 60)) { > pat->uc = 3; > pat->wt = 2; > pat->wb = 0; > @@ -152,9 +150,8 @@ uint8_t intel_get_pat_idx_uc(int fd) > uint8_t intel_get_pat_idx_uc_comp(int fd) > { > struct intel_pat_cache pat = {}; > - uint16_t dev_id = intel_get_drm_devid(fd); > > - igt_assert(intel_gen_legacy(dev_id) >= 20); > + igt_assert(intel_gen(fd) >= 20); > > intel_get_pat_idx(fd, &pat); > return pat.uc_comp; > diff --git a/lib/rendercopy_gen9.c b/lib/rendercopy_gen9.c > index 66415212c..0be557a47 100644 > --- a/lib/rendercopy_gen9.c > +++ b/lib/rendercopy_gen9.c > @@ -256,12 +256,12 @@ gen9_bind_buf(struct intel_bb *ibb, const struct intel_buf *buf, int is_dst, > if (buf->compression == I915_COMPRESSION_MEDIA) > ss->ss7.tgl.media_compression = 1; > else if (buf->compression == I915_COMPRESSION_RENDER) { > - if (intel_gen_legacy(ibb->devid) >= 20) > + if (intel_gen(ibb->fd) >= 20) > ss->ss6.aux_mode = 0x0; /* AUX_NONE, unified compression */ > else > ss->ss6.aux_mode = 0x5; /* AUX_CCS_E */ > > - if (intel_gen_legacy(ibb->devid) < 12 && buf->ccs[0].stride) { > + if (intel_gen(ibb->fd) < 12 && buf->ccs[0].stride) { > ss->ss6.aux_pitch = (buf->ccs[0].stride / 128) - 1; > > address = intel_bb_offset_reloc_with_delta(ibb, buf->handle, > @@ -303,7 +303,7 @@ gen9_bind_buf(struct intel_bb *ibb, const struct intel_buf *buf, int is_dst, > ss->ss7.dg2.disable_support_for_multi_gpu_partial_writes = 1; > ss->ss7.dg2.disable_support_for_multi_gpu_atomics = 1; > > - if (intel_gen_legacy(ibb->devid) >= 20) > + if (intel_gen(ibb->fd) >= 20) > ss->ss12.lnl.compression_format = lnl_compression_format(buf); > else > ss->ss12.dg2.compression_format = dg2_compression_format(buf); > @@ -681,7 +681,7 @@ gen9_emit_state_base_address(struct intel_bb *ibb) { > /* WaBindlessSurfaceStateModifyEnable:skl,bxt */ > /* The length has to be one less if we dont modify > bindless state */ > - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) > + if (intel_gen(ibb->fd) >= 20) > intel_bb_out(ibb, GEN4_STATE_BASE_ADDRESS | 20); > else > intel_bb_out(ibb, GEN4_STATE_BASE_ADDRESS | (19 - 1 - 2)); > @@ -726,7 +726,7 @@ gen9_emit_state_base_address(struct intel_bb *ibb) { > intel_bb_out(ibb, 0); > intel_bb_out(ibb, 0); > > - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) { > + if (intel_gen(ibb->fd) >= 20) { > /* Bindless sampler */ > intel_bb_out(ibb, 0); > intel_bb_out(ibb, 0); > @@ -899,7 +899,7 @@ gen9_emit_ds(struct intel_bb *ibb) { > > static void > gen8_emit_wm_hz_op(struct intel_bb *ibb) { > - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) { > + if (intel_gen(ibb->fd) >= 20) { > intel_bb_out(ibb, GEN8_3DSTATE_WM_HZ_OP | (6-2)); > intel_bb_out(ibb, 0); > } else { > @@ -989,7 +989,7 @@ gen8_emit_ps(struct intel_bb *ibb, uint32_t kernel, bool fast_clear) { > intel_bb_out(ibb, 0); > > intel_bb_out(ibb, GEN7_3DSTATE_PS | (12-2)); > - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) > + if (intel_gen(ibb->fd) >= 20) > intel_bb_out(ibb, kernel | 1); > else > intel_bb_out(ibb, kernel); > @@ -1006,7 +1006,7 @@ gen8_emit_ps(struct intel_bb *ibb, uint32_t kernel, bool fast_clear) { > intel_bb_out(ibb, (max_threads - 1) << GEN8_3DSTATE_PS_MAX_THREADS_SHIFT | > GEN6_3DSTATE_WM_16_DISPATCH_ENABLE | > (fast_clear ? GEN8_3DSTATE_FAST_CLEAR_ENABLE : 0)); > - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) > + if (intel_gen(ibb->fd) >= 20) > intel_bb_out(ibb, 6 << GEN6_3DSTATE_WM_DISPATCH_START_GRF_0_SHIFT | > GENXE_KERNEL0_POLY_PACK16_FIXED << GENXE_KERNEL0_PACKING_POLICY); > else > @@ -1061,7 +1061,7 @@ gen9_emit_depth(struct intel_bb *ibb) > > static void > gen7_emit_clear(struct intel_bb *ibb) { > - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) > + if (intel_gen(ibb->fd) >= 20) > return; > > intel_bb_out(ibb, GEN7_3DSTATE_CLEAR_PARAMS | (3-2)); > @@ -1072,7 +1072,7 @@ gen7_emit_clear(struct intel_bb *ibb) { > static void > gen6_emit_drawing_rectangle(struct intel_bb *ibb, const struct intel_buf *dst) > { > - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) > + if (intel_gen(ibb->fd) >= 20) > intel_bb_out(ibb, GENXE2_3DSTATE_DRAWING_RECTANGLE_FAST | (4 - 2)); > else > intel_bb_out(ibb, GEN4_3DSTATE_DRAWING_RECTANGLE | (4 - 2)); > @@ -1266,7 +1266,7 @@ void _gen9_render_op(struct intel_bb *ibb, > > gen9_emit_state_base_address(ibb); > > - if (HAS_4TILE(ibb->devid) || intel_gen_legacy(ibb->devid) > 12) { > + if (HAS_4TILE(ibb->devid) || intel_gen(ibb->fd) > 12) { > intel_bb_out(ibb, GEN4_3DSTATE_BINDING_TABLE_POOL_ALLOC | 2); > intel_bb_emit_reloc(ibb, ibb->handle, > I915_GEM_DOMAIN_RENDER | I915_GEM_DOMAIN_INSTRUCTION, 0, > diff --git a/lib/xe/xe_legacy.c b/lib/xe/xe_legacy.c > index 1529ed1cc..c1ce9fa00 100644 > --- a/lib/xe/xe_legacy.c > +++ b/lib/xe/xe_legacy.c > @@ -75,7 +75,7 @@ xe_legacy_test_mode(int fd, struct drm_xe_engine_class_instance *eci, > igt_assert_lte(n_exec_queues, MAX_N_EXECQUEUES); > > if (flags & COMPRESSION) > - igt_require(intel_gen_legacy(intel_get_drm_devid(fd)) >= 20); > + igt_require(intel_gen(fd) >= 20); > > if (flags & CLOSE_FD) > fd = drm_open_driver(DRIVER_XE); > diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c > index 36260e3e5..8ca137381 100644 > --- a/lib/xe/xe_spin.c > +++ b/lib/xe/xe_spin.c > @@ -54,7 +54,6 @@ void xe_spin_init(struct xe_spin *spin, struct xe_spin_opts *opts) > uint64_t pad_addr = opts->addr + offsetof(struct xe_spin, pad); > uint64_t timestamp_addr = opts->addr + offsetof(struct xe_spin, timestamp); > int b = 0; > - uint32_t devid; > > spin->start = 0; > spin->end = 0xffffffff; > @@ -166,8 +165,7 @@ void xe_spin_init(struct xe_spin *spin, struct xe_spin_opts *opts) > spin->batch[b++] = opts->mem_copy->dst_offset; > spin->batch[b++] = opts->mem_copy->dst_offset << 32; > > - devid = intel_get_drm_devid(opts->mem_copy->fd); > - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) > + if (intel_graphics_ver(opts->mem_copy->fd) >= IP_VER(20, 0)) > spin->batch[b++] = opts->mem_copy->src->mocs_index << XE2_MEM_COPY_SRC_MOCS_SHIFT | > opts->mem_copy->dst->mocs_index << XE2_MEM_COPY_DST_MOCS_SHIFT; > else > diff --git a/lib/xe/xe_sriov_provisioning.c b/lib/xe/xe_sriov_provisioning.c > index 7b60ccd6c..3d981766c 100644 > --- a/lib/xe/xe_sriov_provisioning.c > +++ b/lib/xe/xe_sriov_provisioning.c > @@ -50,9 +50,7 @@ const char *xe_sriov_shared_res_to_string(enum xe_sriov_shared_res res) > > static uint64_t get_vfid_mask(int fd) > { > - uint16_t dev_id = intel_get_drm_devid(fd); > - > - return (intel_graphics_ver_legacy(dev_id) >= IP_VER(12, 50)) ? > + return (intel_graphics_ver(fd) >= IP_VER(12, 50)) ? > GGTT_PTE_VFID_MASK : PRE_1250_IP_VER_GGTT_PTE_VFID_MASK; > } > > diff --git a/tests/intel/api_intel_allocator.c b/tests/intel/api_intel_allocator.c > index 869e5e9a0..6b1d17da7 100644 > --- a/tests/intel/api_intel_allocator.c > +++ b/tests/intel/api_intel_allocator.c > @@ -625,7 +625,7 @@ static void execbuf_with_allocator(int fd) > uint64_t ahnd, sz = 4096, gtt_size; > unsigned int flags = EXEC_OBJECT_PINNED; > uint32_t *ptr, batch[32], copied; > - int gen = intel_gen_legacy(intel_get_drm_devid(fd)); > + int gen = intel_gen(fd); > int i; > const uint32_t magic = 0x900df00d; > > diff --git a/tests/intel/kms_ccs.c b/tests/intel/kms_ccs.c > index 30f2c9465..a0373316a 100644 > --- a/tests/intel/kms_ccs.c > +++ b/tests/intel/kms_ccs.c > @@ -565,7 +565,7 @@ static void access_flat_ccs_surface(struct igt_fb *fb, bool verify_compression) > uint16_t cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; > uint8_t uc_mocs = intel_get_uc_mocs_index(fb->fd); > uint8_t comp_pat_index = intel_get_pat_idx_wt(fb->fd); > - uint32_t region = (intel_gen_legacy(intel_get_drm_devid(fb->fd)) >= 20 && > + uint32_t region = (intel_gen(fb->fd) >= 20 && > xe_has_vram(fb->fd)) ? REGION_LMEM(0) : REGION_SMEM; > > struct drm_xe_engine_class_instance inst = { > @@ -645,7 +645,7 @@ static void fill_fb_random(int drm_fd, igt_fb_t *fb) > igt_assert_eq(0, gem_munmap(map, fb->size)); > > /* randomize also ccs surface on Xe2 */ > - if (intel_gen_legacy(intel_get_drm_devid(drm_fd)) >= 20) > + if (intel_gen(drm_fd) >= 20) > access_flat_ccs_surface(fb, false); > } > > @@ -1125,11 +1125,6 @@ static bool valid_modifier_test(u64 modifier, const enum test_flags flags) > > static void test_output(data_t *data, const int testnum) > { > - uint16_t dev_id; > - > - igt_fixture() > - dev_id = intel_get_drm_devid(data->drm_fd); > - > data->flags = tests[testnum].flags; > > for (int i = 0; i < ARRAY_SIZE(ccs_modifiers); i++) { > @@ -1143,10 +1138,10 @@ static void test_output(data_t *data, const int testnum) > igt_subtest_with_dynamic_f("%s-%s", tests[testnum].testname, ccs_modifiers[i].str) { > if (ccs_modifiers[i].modifier == I915_FORMAT_MOD_4_TILED_BMG_CCS || > ccs_modifiers[i].modifier == I915_FORMAT_MOD_4_TILED_LNL_CCS) { > - igt_require_f(intel_gen_legacy(dev_id) >= 20, > + igt_require_f(intel_gen(data->drm_fd) >= 20, > "Xe2 platform needed.\n"); > } else { > - igt_require_f(intel_gen_legacy(dev_id) < 20, > + igt_require_f(intel_gen(data->drm_fd) < 20, > "Older than Xe2 platform needed.\n"); > } > > diff --git a/tests/intel/kms_fbcon_fbt.c b/tests/intel/kms_fbcon_fbt.c > index edf5c0d1b..b28961417 100644 > --- a/tests/intel/kms_fbcon_fbt.c > +++ b/tests/intel/kms_fbcon_fbt.c > @@ -179,7 +179,7 @@ static bool fbc_wait_until_update(struct drm_info *drm) > * For older GENs FBC is still expected to be disabled as it still > * relies on a tiled and fenceable framebuffer to track modifications. > */ > - if (intel_gen_legacy(intel_get_drm_devid(drm->fd)) >= 9) { > + if (intel_gen(drm->fd) >= 9) { > if (!fbc_wait_until_enabled(drm->debugfs_fd)) > return false; > /* > diff --git a/tests/intel/kms_frontbuffer_tracking.c b/tests/intel/kms_frontbuffer_tracking.c > index c8c2ce240..5b60587db 100644 > --- a/tests/intel/kms_frontbuffer_tracking.c > +++ b/tests/intel/kms_frontbuffer_tracking.c > @@ -3062,13 +3062,13 @@ static bool tiling_is_valid(int feature_flags, enum tiling_type tiling) > > switch (tiling) { > case TILING_LINEAR: > - return intel_gen_legacy(drm.devid) >= 9; > + return intel_gen(drm.fd) >= 9; > case TILING_X: > return (intel_get_device_info(drm.devid)->display_ver > 29) ? false : true; > case TILING_Y: > return true; > case TILING_4: > - return intel_gen_legacy(drm.devid) >= 12; > + return intel_gen(drm.fd) >= 12; > default: > igt_assert(false); > return false; > @@ -4475,7 +4475,7 @@ int igt_main_args("", long_options, help_str, opt_handler, NULL) > igt_require(igt_draw_supports_method(drm.fd, t.method)); > > if (t.tiling == TILING_Y) { > - igt_require(intel_gen_legacy(drm.devid) >= 9); > + igt_require(intel_gen(drm.fd) >= 9); > igt_require(!intel_get_device_info(drm.devid)->has_4tile); > } > > diff --git a/tests/intel/kms_pipe_stress.c b/tests/intel/kms_pipe_stress.c > index 1ae32d5fd..f8c994d07 100644 > --- a/tests/intel/kms_pipe_stress.c > +++ b/tests/intel/kms_pipe_stress.c > @@ -822,7 +822,7 @@ static void prepare_test(struct data *data) > > create_framebuffers(data); > > - if (intel_gen_legacy(intel_get_drm_devid(data->drm_fd)) > 9) > + if (intel_gen(data->drm_fd) > 9) > start_gpu_threads(data); > } > > @@ -830,7 +830,7 @@ static void finish_test(struct data *data) > { > int i; > > - if (intel_gen_legacy(intel_get_drm_devid(data->drm_fd)) > 9) > + if (intel_gen(data->drm_fd) > 9) > stop_gpu_threads(data); > > /* > diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c > index 914144270..0ba8ae48c 100644 > --- a/tests/intel/xe_ccs.c > +++ b/tests/intel/xe_ccs.c > @@ -128,7 +128,7 @@ static void surf_copy(int xe, > int result; > > igt_assert(mid->compression); > - if (intel_gen_legacy(devid) >= 20 && mid->compression) { > + if (intel_gen(xe) >= 20 && mid->compression) { > comp_pat_index = intel_get_pat_idx_uc_comp(xe); > cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; > } > @@ -177,7 +177,7 @@ static void surf_copy(int xe, > if (IS_GEN(devid, 12) && is_intel_dgfx(xe)) { > igt_assert(!strcmp(orig, newsum)); > igt_assert(!strcmp(orig2, newsum2)); > - } else if (intel_gen_legacy(devid) >= 20) { > + } else if (intel_gen(xe) >= 20) { > if (is_intel_dgfx(xe)) { > /* buffer object would become > * uncompressed in xe2+ dgfx > @@ -227,7 +227,7 @@ static void surf_copy(int xe, > * uncompressed in xe2+ dgfx, and therefore retrieve the > * ccs by copying 0 to ccsmap > */ > - if (suspend_resume && intel_gen_legacy(devid) >= 20 && is_intel_dgfx(xe)) > + if (suspend_resume && intel_gen(xe) >= 20 && is_intel_dgfx(xe)) > memset(ccsmap, 0, ccssize); > else > /* retrieve back ccs */ > @@ -353,7 +353,7 @@ static void block_copy(int xe, > uint64_t bb_size = xe_bb_size(xe, SZ_4K); > uint64_t ahnd = intel_allocator_open(xe, ctx->vm, INTEL_ALLOCATOR_RELOC); > uint32_t run_id = mid_tiling; > - uint32_t mid_region = (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && > + uint32_t mid_region = (intel_gen(xe) >= 20 && > !xe_has_vram(xe)) ? region1 : region2; > uint32_t bb; > enum blt_compression mid_compression = config->compression; > @@ -441,7 +441,7 @@ static void block_copy(int xe, > if (config->inplace) { > uint8_t pat_index = DEFAULT_PAT_INDEX; > > - if (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && config->compression) > + if (intel_gen(xe) >= 20 && config->compression) > pat_index = intel_get_pat_idx_uc_comp(xe); > > blt_set_object(&blt.dst, mid->handle, dst->size, mid->region, 0, > @@ -488,7 +488,7 @@ static void block_multicopy(int xe, > uint64_t bb_size = xe_bb_size(xe, SZ_4K); > uint64_t ahnd = intel_allocator_open(xe, ctx->vm, INTEL_ALLOCATOR_RELOC); > uint32_t run_id = mid_tiling; > - uint32_t mid_region = (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && > + uint32_t mid_region = (intel_gen(xe) >= 20 && > !xe_has_vram(xe)) ? region1 : region2; > uint32_t bb; > enum blt_compression mid_compression = config->compression; > @@ -530,7 +530,7 @@ static void block_multicopy(int xe, > if (config->inplace) { > uint8_t pat_index = DEFAULT_PAT_INDEX; > > - if (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && config->compression) > + if (intel_gen(xe) >= 20 && config->compression) > pat_index = intel_get_pat_idx_uc_comp(xe); > > blt_set_object(&blt3.dst, mid->handle, dst->size, mid->region, > @@ -715,7 +715,7 @@ static void block_copy_test(int xe, > int tiling, width, height; > > > - if (intel_gen_legacy(dev_id) >= 20 && config->compression) > + if (intel_gen(xe) >= 20 && config->compression) > igt_require(HAS_FLATCCS(dev_id)); > > if (config->compression && !blt_block_copy_supports_compression(xe)) > diff --git a/tests/intel/xe_compute.c b/tests/intel/xe_compute.c > index 7b6c39c77..1cb86920f 100644 > --- a/tests/intel/xe_compute.c > +++ b/tests/intel/xe_compute.c > @@ -232,7 +232,7 @@ test_compute_kernel_loop(uint64_t loop_duration) > double elapse_time, lower_bound, upper_bound; > > fd = drm_open_driver(DRIVER_XE); > - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > + ip_ver = intel_graphics_ver(fd); > kernels = intel_compute_square_kernels; > > while (kernels->kernel) { > @@ -335,7 +335,7 @@ igt_check_supported_pipeline(void) > const struct intel_compute_kernels *kernels; > > fd = drm_open_driver(DRIVER_XE); > - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > + ip_ver = intel_graphics_ver(fd); > kernels = intel_compute_square_kernels; > drm_close_driver(fd); > > @@ -432,7 +432,7 @@ test_eu_busy(uint64_t duration_sec) > > fd = drm_open_driver(DRIVER_XE); > > - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > + ip_ver = intel_graphics_ver(fd); > kernels = intel_compute_square_kernels; > while (kernels->kernel) { > if (ip_ver == kernels->ip_ver) > @@ -518,7 +518,7 @@ int igt_main() > igt_fixture() { > xe = drm_open_driver(DRIVER_XE); > sriov_enabled = is_sriov_mode(xe); > - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(xe)); > + ip_ver = intel_graphics_ver(xe); > igt_store_ccs_mode(ccs_mode, ARRAY_SIZE(ccs_mode)); > } > > diff --git a/tests/intel/xe_copy_basic.c b/tests/intel/xe_copy_basic.c > index 55081f938..e37bad746 100644 > --- a/tests/intel/xe_copy_basic.c > +++ b/tests/intel/xe_copy_basic.c > @@ -261,7 +261,6 @@ const char *help_str = > int igt_main_args("b", NULL, help_str, opt_handler, NULL) > { > int fd; > - uint16_t dev_id; > struct igt_collection *set, *regions; > uint32_t region; > struct rect linear[] = { { 0, 0xfd, 1, MODE_BYTE }, > @@ -275,7 +274,6 @@ int igt_main_args("b", NULL, help_str, opt_handler, NULL) > > igt_fixture() { > fd = drm_open_driver(DRIVER_XE); > - dev_id = intel_get_drm_devid(fd); > xe_device_get(fd); > set = xe_get_memory_region_set(fd, > DRM_XE_MEM_REGION_CLASS_SYSMEM, > @@ -295,7 +293,7 @@ int igt_main_args("b", NULL, help_str, opt_handler, NULL) > for (int i = 0; i < ARRAY_SIZE(page); i++) { > igt_subtest_f("mem-page-copy-%u", page[i].width) { > igt_require(blt_has_mem_copy(fd)); > - igt_require(intel_get_device_info(dev_id)->graphics_ver >= 20); > + igt_require(intel_gen(fd) >= 20); > for_each_variation_r(regions, 1, set) { > region = igt_collection_get_value(regions, 0); > copy_test(fd, &page[i], MEM_COPY, region); > @@ -320,7 +318,7 @@ int igt_main_args("b", NULL, help_str, opt_handler, NULL) > * till 0x3FFFF. > */ > if (linear[i].width > 0x3ffff && > - (intel_get_device_info(dev_id)->graphics_ver < 20)) > + (intel_gen(fd) < 20)) > igt_skip("Skipping: width exceeds 18-bit limit on gfx_ver < 20\n"); > igt_require(blt_has_mem_set(fd)); > for_each_variation_r(regions, 1, set) { > diff --git a/tests/intel/xe_debugfs.c b/tests/intel/xe_debugfs.c > index facb55854..4075b173a 100644 > --- a/tests/intel/xe_debugfs.c > +++ b/tests/intel/xe_debugfs.c > @@ -296,7 +296,6 @@ static void test_tile_dir(struct xe_device *xe_dev, uint8_t tile) > */ > static void test_info_read(struct xe_device *xe_dev) > { > - uint16_t devid = intel_get_drm_devid(xe_dev->fd); > struct drm_xe_query_config *config; > const char *name = "info"; > bool failed = false; > @@ -329,7 +328,7 @@ static void test_info_read(struct xe_device *xe_dev) > failed = true; > } > > - if (intel_gen_legacy(devid) < 20) { > + if (intel_gen(xe_dev->fd) < 20) { > val = -1; > > switch (config->info[DRM_XE_QUERY_CONFIG_VA_BITS]) { > diff --git a/tests/intel/xe_eudebug_online.c b/tests/intel/xe_eudebug_online.c > index f64b12b3f..961cf5afc 100644 > --- a/tests/intel/xe_eudebug_online.c > +++ b/tests/intel/xe_eudebug_online.c > @@ -400,9 +400,7 @@ static uint64_t eu_ctl(int debugfd, uint64_t client, > > static bool intel_gen_needs_resume_wa(int fd) > { > - const uint32_t id = intel_get_drm_devid(fd); > - > - return intel_gen_legacy(id) == 12 && intel_graphics_ver_legacy(id) < IP_VER(12, 55); > + return intel_gen(fd) == 12 && intel_graphics_ver(fd) < IP_VER(12, 55); > } > > static uint64_t eu_ctl_resume(int fd, int debugfd, uint64_t client, > @@ -1222,8 +1220,6 @@ static void run_online_client(struct xe_eudebug_client *c) > > static bool intel_gen_has_lockstep_eus(int fd) > { > - const uint32_t id = intel_get_drm_devid(fd); > - > /* > * Lockstep (or in some parlance, fused) EUs are pair of EUs > * that work in sync, supposedly same clock and same control flow. > @@ -1231,7 +1227,7 @@ static bool intel_gen_has_lockstep_eus(int fd) > * excepted into SIP. In this level, the hardware has only one attention > * thread bit for units. PVC is the first one without lockstepping. > */ > - return !(intel_graphics_ver_legacy(id) == IP_VER(12, 60) || intel_gen_legacy(id) >= 20); > + return !(intel_graphics_ver(fd) == IP_VER(12, 60) || intel_gen(fd) >= 20); > } > > static int query_attention_bitmask_size(int fd, int gt) > diff --git a/tests/intel/xe_exec_multi_queue.c b/tests/intel/xe_exec_multi_queue.c > index 1d416efc9..bf09efcc3 100644 > --- a/tests/intel/xe_exec_multi_queue.c > +++ b/tests/intel/xe_exec_multi_queue.c > @@ -1047,7 +1047,7 @@ int igt_main() > > igt_fixture() { > fd = drm_open_driver(DRIVER_XE); > - igt_require(intel_graphics_ver_legacy(intel_get_drm_devid(fd)) >= IP_VER(35, 0)); > + igt_require(intel_graphics_ver(fd) >= IP_VER(35, 0)); > } > > igt_subtest_f("sanity") > diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c > index 498ab42b7..9e6a96aa8 100644 > --- a/tests/intel/xe_exec_store.c > +++ b/tests/intel/xe_exec_store.c > @@ -55,8 +55,7 @@ static void store_dword_batch(struct data *data, uint64_t addr, int value) > data->addr = batch_addr; > } > > -static void cond_batch(struct data *data, uint64_t addr, int value, > - uint16_t dev_id) > +static void cond_batch(int fd, struct data *data, uint64_t addr, int value) > { > int b; > uint64_t batch_offset = (char *)&(data->batch) - (char *)data; > @@ -69,7 +68,7 @@ static void cond_batch(struct data *data, uint64_t addr, int value, > data->batch[b++] = sdi_addr; > data->batch[b++] = sdi_addr >> 32; > > - if (intel_graphics_ver_legacy(dev_id) >= IP_VER(20, 0)) > + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) > data->batch[b++] = MI_MEM_FENCE | MI_WRITE_FENCE; > > data->batch[b++] = MI_CONDITIONAL_BATCH_BUFFER_END | MI_DO_COMPARE | 5 << 12 | 2; > @@ -112,8 +111,7 @@ static void persistance_batch(struct data *data, uint64_t addr) > * SUBTEST: basic-all > * Description: Test to verify store dword on all available engines. > */ > -static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instance *eci, > - uint16_t dev_id) > +static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instance *eci) > { > struct drm_xe_sync sync[2] = { > { .type = DRM_XE_SYNC_TYPE_SYNCOBJ, .flags = DRM_XE_SYNC_FLAG_SIGNAL, }, > @@ -156,7 +154,7 @@ static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instanc > else if (inst_type == COND_BATCH) { > /* A random value where it stops at the below value. */ > value = 20 + random() % 10; > - cond_batch(data, addr, value, dev_id); > + cond_batch(fd, data, addr, value); > } > else > igt_assert_f(inst_type < 2, "Entered wrong inst_type.\n"); > @@ -416,23 +414,21 @@ int igt_main() > { > struct drm_xe_engine_class_instance *hwe; > int fd; > - uint16_t dev_id; > struct drm_xe_engine *engine; > > igt_fixture() { > fd = drm_open_driver(DRIVER_XE); > xe_device_get(fd); > - dev_id = intel_get_drm_devid(fd); > } > > igt_subtest("basic-store") { > engine = xe_engine(fd, 1); > - basic_inst(fd, STORE, &engine->instance, dev_id); > + basic_inst(fd, STORE, &engine->instance); > } > > igt_subtest("basic-cond-batch") { > engine = xe_engine(fd, 1); > - basic_inst(fd, COND_BATCH, &engine->instance, dev_id); > + basic_inst(fd, COND_BATCH, &engine->instance); > } > > igt_subtest_with_dynamic("basic-all") { > @@ -441,7 +437,7 @@ int igt_main() > xe_engine_class_string(hwe->engine_class), > hwe->engine_instance, > hwe->gt_id); > - basic_inst(fd, STORE, hwe, dev_id); > + basic_inst(fd, STORE, hwe); > } > } > > diff --git a/tests/intel/xe_fault_injection.c b/tests/intel/xe_fault_injection.c > index 8adc5c15a..57c5a5579 100644 > --- a/tests/intel/xe_fault_injection.c > +++ b/tests/intel/xe_fault_injection.c > @@ -486,12 +486,12 @@ vm_bind_fail(int fd, const char pci_slot[], const char function_name[]) > * @xe_oa_alloc_regs: xe_oa_alloc_regs > */ > static void > -oa_add_config_fail(int fd, int sysfs, int devid, > +oa_add_config_fail(int fd, int sysfs, > const char pci_slot[], const char function_name[]) > { > char path[512]; > uint64_t config_id; > -#define SAMPLE_MUX_REG (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0) ? \ > +#define SAMPLE_MUX_REG (intel_graphics_ver(fd) >= IP_VER(20, 0) ? \ > 0x13000 /* PES* */ : 0x9888 /* NOA_WRITE */) > > uint32_t mux_regs[] = { SAMPLE_MUX_REG, 0x0 }; > @@ -557,7 +557,6 @@ int igt_main_args("I:", NULL, help_str, opt_handler, NULL) > int fd, sysfs; > struct drm_xe_engine_class_instance *hwe; > struct fault_injection_params fault_params; > - static uint32_t devid; > char pci_slot[NAME_MAX]; > bool is_vf_device; > const struct section { > @@ -627,7 +626,6 @@ int igt_main_args("I:", NULL, help_str, opt_handler, NULL) > igt_fixture() { > igt_require(fail_function_injection_enabled()); > fd = drm_open_driver(DRIVER_XE); > - devid = intel_get_drm_devid(fd); > sysfs = igt_sysfs_open(fd); > igt_device_get_pci_slot_name(fd, pci_slot); > setup_injection_fault(&default_fault_params); > @@ -659,7 +657,7 @@ int igt_main_args("I:", NULL, help_str, opt_handler, NULL) > > for (const struct section *s = oa_add_config_fail_functions; s->name; s++) > igt_subtest_f("oa-add-config-fail-%s", s->name) > - oa_add_config_fail(fd, sysfs, devid, pci_slot, s->name); > + oa_add_config_fail(fd, sysfs, pci_slot, s->name); > > igt_fixture() { > igt_kmod_unbind("xe", pci_slot); > diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c > index 5c112351f..e37d00d2c 100644 > --- a/tests/intel/xe_intel_bb.c > +++ b/tests/intel/xe_intel_bb.c > @@ -710,7 +710,7 @@ static void do_intel_bb_blit(struct buf_ops *bops, int loops, uint32_t tiling) > int i, fails = 0, xe = buf_ops_get_fd(bops); > > /* We'll fix it for gen2/3 later. */ > - igt_require(intel_gen_legacy(intel_get_drm_devid(xe)) > 3); > + igt_require(intel_gen(xe) > 3); > > for (i = 0; i < loops; i++) > fails += __do_intel_bb_blit(bops, tiling); > @@ -878,10 +878,9 @@ static int render(struct buf_ops *bops, uint32_t tiling, > int xe = buf_ops_get_fd(bops); > uint32_t fails = 0; > char name[128]; > - uint32_t devid = intel_get_drm_devid(xe); > igt_render_copyfunc_t render_copy = NULL; > > - igt_debug("%s() gen: %d\n", __func__, intel_gen_legacy(devid)); > + igt_debug("%s() gen: %d\n", __func__, intel_gen(xe)); > > ibb = intel_bb_create(xe, PAGE_SIZE); > > @@ -1041,7 +1040,7 @@ int igt_main_args("dpib", NULL, help_str, opt_handler, NULL) > do_intel_bb_blit(bops, 3, I915_TILING_X); > > igt_subtest("intel-bb-blit-y") { > - igt_require(intel_gen_legacy(intel_get_drm_devid(xe)) >= 6); > + igt_require(intel_gen(xe) >= 6); > do_intel_bb_blit(bops, 3, I915_TILING_Y); > } > > diff --git a/tests/intel/xe_multigpu_svm.c b/tests/intel/xe_multigpu_svm.c > index ab800476e..2c6f81a10 100644 > --- a/tests/intel/xe_multigpu_svm.c > +++ b/tests/intel/xe_multigpu_svm.c > @@ -396,7 +396,6 @@ static void batch_init(int fd, uint32_t vm, uint64_t src_addr, > uint64_t batch_addr; > void *batch; > uint32_t *cmd; > - uint16_t dev_id = intel_get_drm_devid(fd); > uint32_t mocs_index = intel_get_uc_mocs_index(fd); > int i = 0; > > @@ -412,7 +411,7 @@ static void batch_init(int fd, uint32_t vm, uint64_t src_addr, > cmd[i++] = upper_32_bits(src_addr); > cmd[i++] = lower_32_bits(dst_addr); > cmd[i++] = upper_32_bits(dst_addr); > - if (intel_graphics_ver_legacy(dev_id) >= IP_VER(20, 0)) { > + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { > cmd[i++] = mocs_index << XE2_MEM_COPY_SRC_MOCS_SHIFT | mocs_index; > } else { > cmd[i++] = mocs_index << GEN12_MEM_COPY_MOCS_SHIFT | mocs_index; > diff --git a/tests/intel/xe_pat.c b/tests/intel/xe_pat.c > index 96302ad3a..96d544160 100644 > --- a/tests/intel/xe_pat.c > +++ b/tests/intel/xe_pat.c > @@ -119,14 +119,13 @@ static int xe_fetch_pat_sw_config(int fd, struct intel_pat_cache *pat_sw_config) > */ > static void pat_sanity(int fd) > { > - uint16_t dev_id = intel_get_drm_devid(fd); > struct intel_pat_cache pat_sw_config = {}; > int32_t parsed; > bool has_uc_comp = false, has_wt = false; > > parsed = xe_fetch_pat_sw_config(fd, &pat_sw_config); > > - if (intel_graphics_ver_legacy(dev_id) >= IP_VER(20, 0)) { > + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { > for (int i = 0; i < parsed; i++) { > uint32_t pat = pat_sw_config.entries[i].pat; > if (pat_sw_config.entries[i].rsvd) > @@ -898,7 +897,6 @@ static void display_vs_wb_transient(int fd) > 3, /* UC (baseline) */ > 6, /* L3:XD (uncompressed) */ > }; > - uint32_t devid = intel_get_drm_devid(fd); > igt_render_copyfunc_t render_copy = NULL; > igt_crc_t ref_crc = {}, crc = {}; > igt_plane_t *primary; > @@ -914,7 +912,7 @@ static void display_vs_wb_transient(int fd) > int bpp = 32; > int i; > > - igt_require(intel_get_device_info(devid)->graphics_ver >= 20); > + igt_require(intel_gen(fd) >= 20); > > render_copy = igt_get_render_copyfunc(fd); > igt_require(render_copy); > @@ -1015,10 +1013,8 @@ static uint8_t get_pat_idx_uc(int fd, bool *compressed) > > static uint8_t get_pat_idx_wt(int fd, bool *compressed) > { > - uint16_t dev_id = intel_get_drm_devid(fd); > - > if (compressed) > - *compressed = intel_get_device_info(dev_id)->graphics_ver >= 20; > + *compressed = intel_gen(fd) >= 20; > > return intel_get_pat_idx_wt(fd); > } > @@ -1328,7 +1324,7 @@ int igt_main_args("V", NULL, help_str, opt_handler, NULL) > bo_comp_disable_bind(fd); > > igt_subtest_with_dynamic("pat-index-xelp") { > - igt_require(intel_graphics_ver_legacy(dev_id) <= IP_VER(12, 55)); > + igt_require(intel_graphics_ver(fd) <= IP_VER(12, 55)); > subtest_pat_index_modes_with_regions(fd, xelp_pat_index_modes, > ARRAY_SIZE(xelp_pat_index_modes)); > } > @@ -1346,10 +1342,10 @@ int igt_main_args("V", NULL, help_str, opt_handler, NULL) > } > > igt_subtest_with_dynamic("pat-index-xe2") { > - igt_require(intel_get_device_info(dev_id)->graphics_ver >= 20); > + igt_require(intel_gen(fd) >= 20); > igt_assert(HAS_FLATCCS(dev_id)); > > - if (intel_graphics_ver_legacy(dev_id) == IP_VER(20, 1)) > + if (intel_graphics_ver(fd) == IP_VER(20, 1)) > subtest_pat_index_modes_with_regions(fd, bmg_g21_pat_index_modes, > ARRAY_SIZE(bmg_g21_pat_index_modes)); > else > diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c > index 318a9994a..ae505a5d7 100644 > --- a/tests/intel/xe_query.c > +++ b/tests/intel/xe_query.c > @@ -380,7 +380,7 @@ test_query_gt_topology(int fd) > } > > /* sanity check EU type */ > - if (IS_PONTEVECCHIO(dev_id) || intel_gen_legacy(dev_id) >= 20) { > + if (IS_PONTEVECCHIO(dev_id) || intel_gen(fd) >= 20) { > igt_assert(topo_types & (1 << DRM_XE_TOPO_SIMD16_EU_PER_DSS)); > igt_assert_eq(topo_types & (1 << DRM_XE_TOPO_EU_PER_DSS), 0); > } else { > @@ -428,7 +428,7 @@ test_query_gt_topology_l3_bank_mask(int fd) > } > > igt_info(" count: %d\n", count); > - if (intel_get_device_info(dev_id)->graphics_ver < 20) { > + if (intel_gen(fd) < 20) { > igt_assert_lt(0, count); > } > > diff --git a/tests/intel/xe_render_copy.c b/tests/intel/xe_render_copy.c > index 0a6ae9ca2..a3976b5f1 100644 > --- a/tests/intel/xe_render_copy.c > +++ b/tests/intel/xe_render_copy.c > @@ -136,7 +136,7 @@ static int compare_bufs(struct intel_buf *buf1, struct intel_buf *buf2, > static bool buf_is_aux_compressed(struct buf_ops *bops, struct intel_buf *buf) > { > int xe = buf_ops_get_fd(bops); > - unsigned int gen = intel_gen_legacy(buf_ops_get_devid(bops)); > + unsigned int gen = intel_gen(buf_ops_get_fd(bops)); > uint32_t ccs_size; > uint8_t *ptr; > bool is_compressed = false; > -- > 2.43.0 > -- Matt Roper Graphics Software Engineer Linux GPU Platform Enablement Intel Corporation ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v2 3/3] intel/xe: use fd-based graphics/IP version helpers 2026-02-04 18:56 ` Matt Roper @ 2026-02-25 8:51 ` Wang, X 2026-02-25 23:18 ` Matt Roper 0 siblings, 1 reply; 12+ messages in thread From: Wang, X @ 2026-02-25 8:51 UTC (permalink / raw) To: Matt Roper Cc: igt-dev, Kamil Konieczny, Zbigniew Kempczyński, Ravi Kumar V On 2/4/2026 10:56, Matt Roper wrote: > On Thu, Jan 22, 2026 at 07:15:30AM +0000, Xin Wang wrote: >> Switch Xe‑related libraries and tests to use fd‑based intel_gen() and >> intel_graphics_ver() instead of PCI ID lookups, keeping behavior aligned >> with Xe IP disaggregation. > You might want to mention the specific special cases that aren't > transitioned over and will remain on pciid-based lookup so that > reviewers can grep the resulting tree and make sure nothing was missed. > I just did a grep and it seems like there are still quite a few tests > using the pciid-based lookup which probably don't need to be; those > might be oversights: Regarding the i915-only tests: this was intentional. For i915 devices, the new fd-based helpers simply fall back to the same pciid-based lookup internally, so switching those callers would add overhead without any real benefit. Additionally, much of the i915-specific code is in maintenance mode and not actively being updated, so I preferred to leave it as-is. More broadly, there are also cases (e.g. standalone tools that run before the DRM driver is loaded) where the pciid-based approach is the only viable option. So I think it makes sense to keep intel_gen_from_pciid() / intel_graphics_ver_from_pciid() as legitimate APIs rather than treating them as purely transitional. Does that reasoning make sense to you, or do you still think we should aim for a full migration across all callers? > $ grep -Irl intel_gen_legacy tests/ > tests/prime_vgem.c > tests/intel/gem_exec_fair.c > tests/intel/gen7_exec_parse.c > tests/intel/gem_linear_blits.c > tests/intel/gem_evict_alignment.c > tests/intel/gem_exec_store.c > tests/intel/gem_exec_flush.c > tests/intel/i915_getparams_basic.c > tests/intel/i915_pm_rpm.c > tests/intel/gem_mmap_gtt.c > tests/intel/gem_softpin.c > tests/intel/gem_sync.c > tests/intel/gem_tiled_fence_blits.c > tests/intel/gem_close_race.c > tests/intel/gem_tiling_max_stride.c > tests/intel/gem_ctx_isolation.c > tests/intel/gem_exec_nop.c > tests/intel/gem_evict_everything.c > tests/intel/perf_pmu.c > tests/intel/sysfs_timeslice_duration.c > tests/intel/gem_ctx_shared.c > tests/intel/gem_ctx_engines.c > tests/intel/gem_exec_fence.c > tests/intel/gem_exec_balancer.c > tests/intel/gem_exec_latency.c > tests/intel/gem_exec_schedule.c > tests/intel/gem_gtt_hog.c > tests/intel/gem_blits.c > tests/intel/gem_exec_await.c > tests/intel/gem_exec_capture.c > tests/intel/gem_ringfill.c > tests/intel/perf.c > tests/intel/gem_exec_params.c > tests/intel/sysfs_preempt_timeout.c > tests/intel/gem_exec_suspend.c > tests/intel/gem_exec_reloc.c > tests/intel/gem_exec_whisper.c > tests/intel/gem_exec_gttfill.c > tests/intel/gem_exec_parallel.c > tests/intel/gem_watchdog.c > tests/intel/gem_exec_big.c > tests/intel/gem_set_tiling_vs_blt.c > tests/intel/gem_render_copy.c > tests/intel/gen9_exec_parse.c > tests/intel/gem_vm_create.c > tests/intel/i915_pm_rc6_residency.c > tests/intel/i915_module_load.c > tests/intel/gem_streaming_writes.c > tests/intel/gem_fenced_exec_thrash.c > tests/intel/gem_workarounds.c > tests/intel/gem_ctx_create.c > tests/intel/i915_pm_sseu.c > tests/intel/gem_concurrent_all.c > tests/intel/gem_ctx_sseu.c > tests/intel/gem_read_read_speed.c > tests/intel/api_intel_bb.c > tests/intel/gem_bad_reloc.c > tests/intel/gem_media_vme.c > tests/intel/gem_exec_async.c > tests/intel/gem_userptr_blits.c > tests/intel/gem_eio.c > > An alternate approach would be to structure this series as: > > - Create the "legacy" functions as a duplicate of the existing > pciid-based functions and explicitly convert the special cases that > we expect to remain on PCI ID. > > - Change the signature of intel_graphics_ver / intel_gen and all > remaining callsites. > > That will ensure everything gets converted over (otherwise there will be > a build failure because anything not converted will be trying to use the > wrong function signature). It also makes it a little bit easier to > directly review the special cases and make sure they all truly need to > be special cases. > > > As mentioned on the first patch, if you're using something like > Coccinelle to do these conversions, providing the semantic patch(es) > used in the commit message would be helpful. Regarding the suggestion to use Coccinelle: some of the changes in this patch cannot be handled automatically by a script because they involve function signature modifications. For example: -static void basic_inst(int fd, int inst_type, - struct drm_xe_engine_class_instance *eci, - uint16_t dev_id) +static void basic_inst(int fd, int inst_type, + struct drm_xe_engine_class_instance *eci) This requires updating the function definition, its internal uses of dev_id, and all call sites simultaneously, which is beyond what Coccinelle can handle across function boundaries. Xin > > Matt > >> Cc: Kamil Konieczny <kamil.konieczny@linux.intel.com> >> Cc: Matt Roper <matthew.d.roper@intel.com> >> Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com> >> Cc: Ravi Kumar V <ravi.kumar.vodapalli@intel.com> >> Signed-off-by: Xin Wang <x.wang@intel.com> >> --- >> lib/gpgpu_shader.c | 5 ++- >> lib/gpu_cmds.c | 21 ++++++----- >> lib/intel_batchbuffer.c | 14 +++----- >> lib/intel_blt.c | 21 +++++------ >> lib/intel_blt.h | 2 +- >> lib/intel_bufops.c | 10 +++--- >> lib/intel_common.c | 2 +- >> lib/intel_compute.c | 6 ++-- >> lib/intel_mocs.c | 48 ++++++++++++++------------ >> lib/intel_pat.c | 19 +++++----- >> lib/rendercopy_gen9.c | 22 ++++++------ >> lib/xe/xe_legacy.c | 2 +- >> lib/xe/xe_spin.c | 4 +-- >> lib/xe/xe_sriov_provisioning.c | 4 +-- >> tests/intel/api_intel_allocator.c | 2 +- >> tests/intel/kms_ccs.c | 13 +++---- >> tests/intel/kms_fbcon_fbt.c | 2 +- >> tests/intel/kms_frontbuffer_tracking.c | 6 ++-- >> tests/intel/kms_pipe_stress.c | 4 +-- >> tests/intel/xe_ccs.c | 16 ++++----- >> tests/intel/xe_compute.c | 8 ++--- >> tests/intel/xe_copy_basic.c | 6 ++-- >> tests/intel/xe_debugfs.c | 3 +- >> tests/intel/xe_eudebug_online.c | 8 ++--- >> tests/intel/xe_exec_multi_queue.c | 2 +- >> tests/intel/xe_exec_store.c | 18 ++++------ >> tests/intel/xe_fault_injection.c | 8 ++--- >> tests/intel/xe_intel_bb.c | 7 ++-- >> tests/intel/xe_multigpu_svm.c | 3 +- >> tests/intel/xe_pat.c | 16 ++++----- >> tests/intel/xe_query.c | 4 +-- >> tests/intel/xe_render_copy.c | 2 +- >> 32 files changed, 135 insertions(+), 173 deletions(-) >> >> diff --git a/lib/gpgpu_shader.c b/lib/gpgpu_shader.c >> index 767bddb7b..09a7f5c5e 100644 >> --- a/lib/gpgpu_shader.c >> +++ b/lib/gpgpu_shader.c >> @@ -274,11 +274,10 @@ void gpgpu_shader_exec(struct intel_bb *ibb, >> struct gpgpu_shader *gpgpu_shader_create(int fd) >> { >> struct gpgpu_shader *shdr = calloc(1, sizeof(struct gpgpu_shader)); >> - const struct intel_device_info *info; >> + unsigned ip_ver = intel_graphics_ver(fd); >> >> igt_assert(shdr); >> - info = intel_get_device_info(intel_get_drm_devid(fd)); >> - shdr->gen_ver = 100 * info->graphics_ver + info->graphics_rel; >> + shdr->gen_ver = 100 * (ip_ver >> 8) + (ip_ver & 0xff); >> shdr->max_size = 16 * 4; >> shdr->code = malloc(4 * shdr->max_size); >> shdr->labels = igt_map_create(igt_map_hash_32, igt_map_equal_32); >> diff --git a/lib/gpu_cmds.c b/lib/gpu_cmds.c >> index ab46fe0de..6842af1ad 100644 >> --- a/lib/gpu_cmds.c >> +++ b/lib/gpu_cmds.c >> @@ -313,14 +313,13 @@ fill_binding_table(struct intel_bb *ibb, struct intel_buf *buf) >> { >> uint32_t binding_table_offset; >> uint32_t *binding_table; >> - uint32_t devid = intel_get_drm_devid(ibb->fd); >> >> intel_bb_ptr_align(ibb, 64); >> binding_table_offset = intel_bb_offset(ibb); >> binding_table = intel_bb_ptr(ibb); >> intel_bb_ptr_add(ibb, 64); >> >> - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { >> + if (intel_graphics_ver(ibb->fd) >= IP_VER(20, 0)) { >> /* >> * Up until now, SURFACEFORMAT_R8_UNROM was used regardless of the 'bpp' value. >> * For bpp 32 this results in a surface that is 4x narrower than expected. However >> @@ -342,13 +341,13 @@ fill_binding_table(struct intel_bb *ibb, struct intel_buf *buf) >> igt_assert_f(false, >> "Surface state for bpp = %u not implemented", >> buf->bpp); >> - } else if (intel_graphics_ver_legacy(devid) >= IP_VER(12, 50)) { >> + } else if (intel_graphics_ver(ibb->fd) >= IP_VER(12, 50)) { >> binding_table[0] = xehp_fill_surface_state(ibb, buf, >> SURFACEFORMAT_R8_UNORM, 1); >> - } else if (intel_graphics_ver_legacy(devid) >= IP_VER(9, 0)) { >> + } else if (intel_graphics_ver(ibb->fd) >= IP_VER(9, 0)) { >> binding_table[0] = gen9_fill_surface_state(ibb, buf, >> SURFACEFORMAT_R8_UNORM, 1); >> - } else if (intel_graphics_ver_legacy(devid) >= IP_VER(8, 0)) { >> + } else if (intel_graphics_ver(ibb->fd) >= IP_VER(8, 0)) { >> binding_table[0] = gen8_fill_surface_state(ibb, buf, >> SURFACEFORMAT_R8_UNORM, 1); >> } else { >> @@ -867,7 +866,7 @@ gen_emit_media_object(struct intel_bb *ibb, >> /* inline data (xoffset, yoffset) */ >> intel_bb_out(ibb, xoffset); >> intel_bb_out(ibb, yoffset); >> - if (intel_gen_legacy(ibb->devid) >= 8 && !IS_CHERRYVIEW(ibb->devid)) >> + if (intel_gen(ibb->fd) >= 8 && !IS_CHERRYVIEW(ibb->devid)) >> gen8_emit_media_state_flush(ibb); >> } >> >> @@ -1011,7 +1010,7 @@ void >> xehp_emit_state_compute_mode(struct intel_bb *ibb, bool vrt) >> { >> >> - uint32_t dword_length = intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0); >> + uint32_t dword_length = intel_graphics_ver(ibb->fd) >= IP_VER(20, 0); >> >> intel_bb_out(ibb, XEHP_STATE_COMPUTE_MODE | dword_length); >> intel_bb_out(ibb, vrt ? (0x10001) << 10 : 0); /* Enable variable number of threads */ >> @@ -1042,7 +1041,7 @@ xehp_emit_state_base_address(struct intel_bb *ibb) >> intel_bb_out(ibb, 0); >> >> /* stateless data port */ >> - tmp = intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0) ? 0 : BASE_ADDRESS_MODIFY; >> + tmp = intel_graphics_ver(ibb->fd) >= IP_VER(20, 0) ? 0 : BASE_ADDRESS_MODIFY; >> intel_bb_out(ibb, 0 | tmp); //dw3 >> >> /* surface */ >> @@ -1068,7 +1067,7 @@ xehp_emit_state_base_address(struct intel_bb *ibb) >> /* dynamic state buffer size */ >> intel_bb_out(ibb, ALIGN(ibb->size, 1 << 12) | 1); //dw13 >> /* indirect object buffer size */ >> - if (intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0)) //dw14 >> + if (intel_graphics_ver(ibb->fd) >= IP_VER(20, 0)) //dw14 >> intel_bb_out(ibb, 0); >> else >> intel_bb_out(ibb, 0xfffff000 | 1); >> @@ -1115,7 +1114,7 @@ xehp_emit_compute_walk(struct intel_bb *ibb, >> else >> mask = (1 << mask) - 1; >> >> - dword_length = intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0) ? 0x26 : 0x25; >> + dword_length = intel_graphics_ver(ibb->fd) >= IP_VER(20, 0) ? 0x26 : 0x25; >> intel_bb_out(ibb, XEHP_COMPUTE_WALKER | dword_length); >> >> intel_bb_out(ibb, 0); /* debug object */ //dw1 >> @@ -1155,7 +1154,7 @@ xehp_emit_compute_walk(struct intel_bb *ibb, >> intel_bb_out(ibb, 0); //dw16 >> intel_bb_out(ibb, 0); //dw17 >> >> - if (intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0)) //Xe2:dw18 >> + if (intel_graphics_ver(ibb->fd) >= IP_VER(20, 0)) //Xe2:dw18 >> intel_bb_out(ibb, 0); >> /* Interface descriptor data */ >> for (int i = 0; i < 8; i++) { //dw18-25 (Xe2:dw19-26) >> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c >> index f418e7981..4f52e7b6a 100644 >> --- a/lib/intel_batchbuffer.c >> +++ b/lib/intel_batchbuffer.c >> @@ -329,11 +329,7 @@ void igt_blitter_copy(int fd, >> uint32_t dst_x, uint32_t dst_y, >> uint64_t dst_size) >> { >> - uint32_t devid; >> - >> - devid = intel_get_drm_devid(fd); >> - >> - if (intel_graphics_ver_legacy(devid) >= IP_VER(12, 60)) >> + if (intel_graphics_ver(fd) >= IP_VER(12, 60)) >> igt_blitter_fast_copy__raw(fd, ahnd, ctx, NULL, >> src_handle, src_delta, >> src_stride, src_tiling, >> @@ -410,7 +406,7 @@ void igt_blitter_src_copy(int fd, >> uint32_t batch_handle; >> uint32_t src_pitch, dst_pitch; >> uint32_t dst_reloc_offset, src_reloc_offset; >> - uint32_t gen = intel_gen_legacy(intel_get_drm_devid(fd)); >> + uint32_t gen = intel_gen(fd); >> uint64_t batch_offset, src_offset, dst_offset; >> const bool has_64b_reloc = gen >= 8; >> int i = 0; >> @@ -669,7 +665,7 @@ igt_render_copyfunc_t igt_get_render_copyfunc(int fd) >> copy = mtl_render_copyfunc; >> else if (IS_DG2(devid)) >> copy = gen12p71_render_copyfunc; >> - else if (intel_gen_legacy(devid) >= 20) >> + else if (intel_gen(fd) >= 20) >> copy = xe2_render_copyfunc; >> else if (IS_GEN12(devid)) >> copy = gen12_render_copyfunc; >> @@ -911,7 +907,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg, >> igt_assert(ibb); >> >> ibb->devid = intel_get_drm_devid(fd); >> - ibb->gen = intel_gen_legacy(ibb->devid); >> + ibb->gen = intel_gen(fd); >> ibb->ctx = ctx; >> >> ibb->fd = fd; >> @@ -1089,7 +1085,7 @@ struct intel_bb *intel_bb_create_with_allocator(int fd, uint32_t ctx, uint32_t v >> >> static bool aux_needs_softpin(int fd) >> { >> - return intel_gen_legacy(intel_get_drm_devid(fd)) >= 12; >> + return intel_gen(fd) >= 12; >> } >> >> static bool has_ctx_cfg(struct intel_bb *ibb) >> diff --git a/lib/intel_blt.c b/lib/intel_blt.c >> index 673f204b0..7ae04fccd 100644 >> --- a/lib/intel_blt.c >> +++ b/lib/intel_blt.c >> @@ -997,7 +997,7 @@ uint64_t emit_blt_block_copy(int fd, >> uint64_t bb_pos, >> bool emit_bbe) >> { >> - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); >> + unsigned int ip_ver = intel_graphics_ver(fd); >> struct gen12_block_copy_data data = {}; >> struct gen12_block_copy_data_ext dext = {}; >> uint64_t dst_offset, src_offset, bb_offset; >> @@ -1285,7 +1285,7 @@ uint64_t emit_blt_ctrl_surf_copy(int fd, >> uint64_t bb_pos, >> bool emit_bbe) >> { >> - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); >> + unsigned int ip_ver = intel_graphics_ver(fd); >> union ctrl_surf_copy_data data = { }; >> size_t data_sz; >> uint64_t dst_offset, src_offset, bb_offset, alignment; >> @@ -1705,7 +1705,7 @@ uint64_t emit_blt_fast_copy(int fd, >> uint64_t bb_pos, >> bool emit_bbe) >> { >> - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); >> + unsigned int ip_ver = intel_graphics_ver(fd); >> struct gen12_fast_copy_data data = {}; >> uint64_t dst_offset, src_offset, bb_offset; >> uint32_t bbe = MI_BATCH_BUFFER_END; >> @@ -1972,11 +1972,10 @@ void blt_mem_copy_init(int fd, struct blt_mem_copy_data *mem, >> static void dump_bb_mem_copy_cmd(int fd, struct xe_mem_copy_data *data) >> { >> uint32_t *cmd = (uint32_t *) data; >> - uint32_t devid = intel_get_drm_devid(fd); >> >> igt_info("BB details:\n"); >> >> - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { >> + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { >> igt_info(" dw00: [%08x] <client: 0x%x, opcode: 0x%x, length: %d> " >> "[copy type: %d, mode: %d]\n", >> cmd[0], data->dw00.xe2.client, data->dw00.xe2.opcode, >> @@ -2006,7 +2005,7 @@ static void dump_bb_mem_copy_cmd(int fd, struct xe_mem_copy_data *data) >> cmd[7], data->dw07.dst_address_lo); >> igt_info(" dw08: [%08x] dst offset hi (0x%x)\n", >> cmd[8], data->dw08.dst_address_hi); >> - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { >> + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { >> igt_info(" dw09: [%08x] mocs <dst: 0x%x, src: 0x%x>\n", >> cmd[9], data->dw09.xe2.dst_mocs, >> data->dw09.xe2.src_mocs); >> @@ -2025,7 +2024,6 @@ static uint64_t emit_blt_mem_copy(int fd, uint64_t ahnd, >> uint64_t dst_offset, src_offset, shift; >> uint32_t width, height, width_max, height_max, remain; >> uint32_t bbe = MI_BATCH_BUFFER_END; >> - uint32_t devid = intel_get_drm_devid(fd); >> uint8_t *bb; >> >> if (mem->mode == MODE_BYTE) { >> @@ -2049,7 +2047,7 @@ static uint64_t emit_blt_mem_copy(int fd, uint64_t ahnd, >> width = mem->src.width; >> height = mem->dst.height; >> >> - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { >> + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { >> data.dw00.xe2.client = 0x2; >> data.dw00.xe2.opcode = 0x5a; >> data.dw00.xe2.length = 8; >> @@ -2231,7 +2229,6 @@ static void emit_blt_mem_set(int fd, uint64_t ahnd, >> int b; >> uint32_t *batch; >> uint32_t value; >> - uint32_t devid = intel_get_drm_devid(fd); >> >> dst_offset = get_offset_pat_index(ahnd, mem->dst.handle, mem->dst.size, >> 0, mem->dst.pat_index); >> @@ -2246,7 +2243,7 @@ static void emit_blt_mem_set(int fd, uint64_t ahnd, >> batch[b++] = mem->dst.pitch - 1; >> batch[b++] = dst_offset; >> batch[b++] = dst_offset << 32; >> - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) >> + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) >> batch[b++] = value | (mem->dst.mocs_index << 3); >> else >> batch[b++] = value | mem->dst.mocs_index; >> @@ -2364,7 +2361,7 @@ blt_create_object(const struct blt_copy_data *blt, uint32_t region, >> if (create_mapping && region != system_memory(blt->fd)) >> flags |= DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM; >> >> - if (intel_gen_legacy(intel_get_drm_devid(blt->fd)) >= 20 && compression) { >> + if (intel_gen(blt->fd) >= 20 && compression) { >> pat_index = intel_get_pat_idx_uc_comp(blt->fd); >> cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; >> } >> @@ -2590,7 +2587,7 @@ void blt_surface_get_flatccs_data(int fd, >> cpu_caching = __xe_default_cpu_caching(fd, sysmem, 0); >> ccs_bo_size = ALIGN(ccssize, xe_get_default_alignment(fd)); >> >> - if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 20 && obj->compression) { >> + if (intel_gen(fd) >= 20 && obj->compression) { >> comp_pat_index = intel_get_pat_idx_uc_comp(fd); >> cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; >> } >> diff --git a/lib/intel_blt.h b/lib/intel_blt.h >> index a98a34e95..feba94ebb 100644 >> --- a/lib/intel_blt.h >> +++ b/lib/intel_blt.h >> @@ -52,7 +52,7 @@ >> #include "igt.h" >> #include "intel_cmds_info.h" >> >> -#define CCS_RATIO(fd) (intel_gen_legacy(intel_get_drm_devid(fd)) >= 20 ? 512 : 256) >> +#define CCS_RATIO(fd) (intel_gen(fd) >= 20 ? 512 : 256) >> #define GEN12_MEM_COPY_MOCS_SHIFT 25 >> #define XE2_MEM_COPY_SRC_MOCS_SHIFT 28 >> #define XE2_MEM_COPY_DST_MOCS_SHIFT 3 >> diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c >> index ea3742f1e..a2adbf9ef 100644 >> --- a/lib/intel_bufops.c >> +++ b/lib/intel_bufops.c >> @@ -1063,7 +1063,7 @@ static void __intel_buf_init(struct buf_ops *bops, >> } else { >> uint16_t cpu_caching = __xe_default_cpu_caching(bops->fd, region, 0); >> >> - if (intel_gen_legacy(bops->devid) >= 20 && compression) >> + if (intel_gen(bops->fd) >= 20 && compression) >> cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; >> >> bo_size = ALIGN(bo_size, xe_get_default_alignment(bops->fd)); >> @@ -1106,7 +1106,7 @@ void intel_buf_init(struct buf_ops *bops, >> uint64_t region; >> uint8_t pat_index = DEFAULT_PAT_INDEX; >> >> - if (compression && intel_gen_legacy(bops->devid) >= 20) >> + if (compression && intel_gen(bops->fd) >= 20) >> pat_index = intel_get_pat_idx_uc_comp(bops->fd); >> >> region = bops->driver == INTEL_DRIVER_I915 ? I915_SYSTEM_MEMORY : >> @@ -1132,7 +1132,7 @@ void intel_buf_init_in_region(struct buf_ops *bops, >> { >> uint8_t pat_index = DEFAULT_PAT_INDEX; >> >> - if (compression && intel_gen_legacy(bops->devid) >= 20) >> + if (compression && intel_gen(bops->fd) >= 20) >> pat_index = intel_get_pat_idx_uc_comp(bops->fd); >> >> __intel_buf_init(bops, 0, buf, width, height, bpp, alignment, >> @@ -1203,7 +1203,7 @@ void intel_buf_init_using_handle_and_size(struct buf_ops *bops, >> igt_assert(handle); >> igt_assert(size); >> >> - if (compression && intel_gen_legacy(bops->devid) >= 20) >> + if (compression && intel_gen(bops->fd) >= 20) >> pat_index = intel_get_pat_idx_uc_comp(bops->fd); >> >> __intel_buf_init(bops, handle, buf, width, height, bpp, alignment, >> @@ -1758,7 +1758,7 @@ static struct buf_ops *__buf_ops_create(int fd, bool check_idempotency) >> igt_assert(bops); >> >> devid = intel_get_drm_devid(fd); >> - generation = intel_gen_legacy(devid); >> + generation = intel_gen(fd); >> >> /* Predefined settings: see intel_device_info? */ >> for (int i = 0; i < ARRAY_SIZE(buf_ops_arr); i++) { >> diff --git a/lib/intel_common.c b/lib/intel_common.c >> index cd1019bfe..407d53f77 100644 >> --- a/lib/intel_common.c >> +++ b/lib/intel_common.c >> @@ -91,7 +91,7 @@ bool is_intel_region_compressible(int fd, uint64_t region) >> return true; >> >> /* Integrated Xe2+ supports compression on system memory */ >> - if (intel_gen_legacy(devid) >= 20 && !is_dgfx && is_intel_system_region(fd, region)) >> + if (intel_gen(fd) >= 20 && !is_dgfx && is_intel_system_region(fd, region)) >> return true; >> >> /* Discrete supports compression on vram */ >> diff --git a/lib/intel_compute.c b/lib/intel_compute.c >> index 1734c1649..66156d194 100644 >> --- a/lib/intel_compute.c >> +++ b/lib/intel_compute.c >> @@ -2284,7 +2284,7 @@ static bool __run_intel_compute_kernel(int fd, >> struct user_execenv *user, >> enum execenv_alloc_prefs alloc_prefs) >> { >> - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); >> + unsigned int ip_ver = intel_graphics_ver(fd); >> int batch; >> const struct intel_compute_kernels *kernel_entries = intel_compute_square_kernels, *kernels; >> enum intel_driver driver = get_intel_driver(fd); >> @@ -2749,7 +2749,7 @@ static bool __run_intel_compute_kernel_preempt(int fd, >> bool threadgroup_preemption, >> enum execenv_alloc_prefs alloc_prefs) >> { >> - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); >> + unsigned int ip_ver = ip_ver = intel_graphics_ver(fd); >> int batch; >> const struct intel_compute_kernels *kernel_entries = intel_compute_square_kernels, *kernels; >> enum intel_driver driver = get_intel_driver(fd); >> @@ -2803,7 +2803,7 @@ static bool __run_intel_compute_kernel_preempt(int fd, >> */ >> bool xe_kernel_preempt_check(int fd, enum xe_compute_preempt_type required_preempt) >> { >> - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); >> + unsigned int ip_ver = ip_ver = intel_graphics_ver(fd); >> int batch = find_preempt_batch(ip_ver); >> >> if (batch < 0) { >> diff --git a/lib/intel_mocs.c b/lib/intel_mocs.c >> index f21c2bf09..b9ea43c7c 100644 >> --- a/lib/intel_mocs.c >> +++ b/lib/intel_mocs.c >> @@ -27,8 +27,8 @@ struct drm_intel_mocs_index { >> >> static void get_mocs_index(int fd, struct drm_intel_mocs_index *mocs) >> { >> - uint16_t devid = intel_get_drm_devid(fd); >> - unsigned int ip_ver = intel_graphics_ver_legacy(devid); >> + uint16_t devid; >> + unsigned int ip_ver = intel_graphics_ver(fd); >> >> /* >> * Gen >= 12 onwards don't have a setting for PTE, >> @@ -42,26 +42,29 @@ static void get_mocs_index(int fd, struct drm_intel_mocs_index *mocs) >> mocs->wb_index = 4; >> mocs->displayable_index = 1; >> mocs->defer_to_pat_index = 0; >> - } else if (IS_METEORLAKE(devid)) { >> - mocs->uc_index = 5; >> - mocs->wb_index = 1; >> - mocs->displayable_index = 14; >> - } else if (IS_DG2(devid)) { >> - mocs->uc_index = 1; >> - mocs->wb_index = 3; >> - mocs->displayable_index = 3; >> - } else if (IS_DG1(devid)) { >> - mocs->uc_index = 1; >> - mocs->wb_index = 5; >> - mocs->displayable_index = 5; >> - } else if (ip_ver >= IP_VER(12, 0)) { >> - mocs->uc_index = 3; >> - mocs->wb_index = 2; >> - mocs->displayable_index = 61; >> } else { >> - mocs->uc_index = I915_MOCS_PTE; >> - mocs->wb_index = I915_MOCS_CACHED; >> - mocs->displayable_index = I915_MOCS_PTE; >> + devid = intel_get_drm_devid(fd); >> + if (IS_METEORLAKE(devid)) { >> + mocs->uc_index = 5; >> + mocs->wb_index = 1; >> + mocs->displayable_index = 14; >> + } else if (IS_DG2(devid)) { >> + mocs->uc_index = 1; >> + mocs->wb_index = 3; >> + mocs->displayable_index = 3; >> + } else if (IS_DG1(devid)) { >> + mocs->uc_index = 1; >> + mocs->wb_index = 5; >> + mocs->displayable_index = 5; >> + } else if (ip_ver >= IP_VER(12, 0)) { >> + mocs->uc_index = 3; >> + mocs->wb_index = 2; >> + mocs->displayable_index = 61; >> + } else { >> + mocs->uc_index = I915_MOCS_PTE; >> + mocs->wb_index = I915_MOCS_CACHED; >> + mocs->displayable_index = I915_MOCS_PTE; >> + } >> } >> } >> >> @@ -124,9 +127,8 @@ uint8_t intel_get_displayable_mocs_index(int fd) >> uint8_t intel_get_defer_to_pat_mocs_index(int fd) >> { >> struct drm_intel_mocs_index mocs; >> - uint16_t dev_id = intel_get_drm_devid(fd); >> >> - igt_assert(intel_gen_legacy(dev_id) >= 20); >> + igt_assert(intel_gen(fd) >= 20); >> >> get_mocs_index(fd, &mocs); >> >> diff --git a/lib/intel_pat.c b/lib/intel_pat.c >> index 9a61c2a45..9bb4800b6 100644 >> --- a/lib/intel_pat.c >> +++ b/lib/intel_pat.c >> @@ -96,14 +96,12 @@ int32_t xe_get_pat_sw_config(int drm_fd, struct intel_pat_cache *xe_pat_cache) >> >> static void intel_get_pat_idx(int fd, struct intel_pat_cache *pat) >> { >> - uint16_t dev_id = intel_get_drm_devid(fd); >> - >> - if (intel_graphics_ver_legacy(dev_id) == IP_VER(35, 11)) { >> + if (intel_graphics_ver(fd) == IP_VER(35, 11)) { >> pat->uc = 3; >> pat->wb = 2; >> pat->max_index = 31; >> - } else if (intel_get_device_info(dev_id)->graphics_ver == 30 || >> - intel_get_device_info(dev_id)->graphics_ver == 20) { >> + } else if (intel_gen(fd) == 30 || >> + intel_gen(fd) == 20) { >> pat->uc = 3; >> pat->wt = 15; /* Compressed + WB-transient */ >> pat->wb = 2; >> @@ -111,19 +109,19 @@ static void intel_get_pat_idx(int fd, struct intel_pat_cache *pat) >> pat->max_index = 31; >> >> /* Wa_16023588340: CLOS3 entries at end of table are unusable */ >> - if (intel_graphics_ver_legacy(dev_id) == IP_VER(20, 1)) >> + if (intel_graphics_ver(fd) == IP_VER(20, 1)) >> pat->max_index -= 4; >> - } else if (IS_METEORLAKE(dev_id)) { >> + } else if (IS_METEORLAKE(intel_get_drm_devid(fd))) { >> pat->uc = 2; >> pat->wt = 1; >> pat->wb = 3; >> pat->max_index = 3; >> - } else if (IS_PONTEVECCHIO(dev_id)) { >> + } else if (IS_PONTEVECCHIO(intel_get_drm_devid(fd))) { >> pat->uc = 0; >> pat->wt = 2; >> pat->wb = 3; >> pat->max_index = 7; >> - } else if (intel_graphics_ver_legacy(dev_id) <= IP_VER(12, 60)) { >> + } else if (intel_graphics_ver(fd) <= IP_VER(12, 60)) { >> pat->uc = 3; >> pat->wt = 2; >> pat->wb = 0; >> @@ -152,9 +150,8 @@ uint8_t intel_get_pat_idx_uc(int fd) >> uint8_t intel_get_pat_idx_uc_comp(int fd) >> { >> struct intel_pat_cache pat = {}; >> - uint16_t dev_id = intel_get_drm_devid(fd); >> >> - igt_assert(intel_gen_legacy(dev_id) >= 20); >> + igt_assert(intel_gen(fd) >= 20); >> >> intel_get_pat_idx(fd, &pat); >> return pat.uc_comp; >> diff --git a/lib/rendercopy_gen9.c b/lib/rendercopy_gen9.c >> index 66415212c..0be557a47 100644 >> --- a/lib/rendercopy_gen9.c >> +++ b/lib/rendercopy_gen9.c >> @@ -256,12 +256,12 @@ gen9_bind_buf(struct intel_bb *ibb, const struct intel_buf *buf, int is_dst, >> if (buf->compression == I915_COMPRESSION_MEDIA) >> ss->ss7.tgl.media_compression = 1; >> else if (buf->compression == I915_COMPRESSION_RENDER) { >> - if (intel_gen_legacy(ibb->devid) >= 20) >> + if (intel_gen(ibb->fd) >= 20) >> ss->ss6.aux_mode = 0x0; /* AUX_NONE, unified compression */ >> else >> ss->ss6.aux_mode = 0x5; /* AUX_CCS_E */ >> >> - if (intel_gen_legacy(ibb->devid) < 12 && buf->ccs[0].stride) { >> + if (intel_gen(ibb->fd) < 12 && buf->ccs[0].stride) { >> ss->ss6.aux_pitch = (buf->ccs[0].stride / 128) - 1; >> >> address = intel_bb_offset_reloc_with_delta(ibb, buf->handle, >> @@ -303,7 +303,7 @@ gen9_bind_buf(struct intel_bb *ibb, const struct intel_buf *buf, int is_dst, >> ss->ss7.dg2.disable_support_for_multi_gpu_partial_writes = 1; >> ss->ss7.dg2.disable_support_for_multi_gpu_atomics = 1; >> >> - if (intel_gen_legacy(ibb->devid) >= 20) >> + if (intel_gen(ibb->fd) >= 20) >> ss->ss12.lnl.compression_format = lnl_compression_format(buf); >> else >> ss->ss12.dg2.compression_format = dg2_compression_format(buf); >> @@ -681,7 +681,7 @@ gen9_emit_state_base_address(struct intel_bb *ibb) { >> /* WaBindlessSurfaceStateModifyEnable:skl,bxt */ >> /* The length has to be one less if we dont modify >> bindless state */ >> - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) >> + if (intel_gen(ibb->fd) >= 20) >> intel_bb_out(ibb, GEN4_STATE_BASE_ADDRESS | 20); >> else >> intel_bb_out(ibb, GEN4_STATE_BASE_ADDRESS | (19 - 1 - 2)); >> @@ -726,7 +726,7 @@ gen9_emit_state_base_address(struct intel_bb *ibb) { >> intel_bb_out(ibb, 0); >> intel_bb_out(ibb, 0); >> >> - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) { >> + if (intel_gen(ibb->fd) >= 20) { >> /* Bindless sampler */ >> intel_bb_out(ibb, 0); >> intel_bb_out(ibb, 0); >> @@ -899,7 +899,7 @@ gen9_emit_ds(struct intel_bb *ibb) { >> >> static void >> gen8_emit_wm_hz_op(struct intel_bb *ibb) { >> - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) { >> + if (intel_gen(ibb->fd) >= 20) { >> intel_bb_out(ibb, GEN8_3DSTATE_WM_HZ_OP | (6-2)); >> intel_bb_out(ibb, 0); >> } else { >> @@ -989,7 +989,7 @@ gen8_emit_ps(struct intel_bb *ibb, uint32_t kernel, bool fast_clear) { >> intel_bb_out(ibb, 0); >> >> intel_bb_out(ibb, GEN7_3DSTATE_PS | (12-2)); >> - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) >> + if (intel_gen(ibb->fd) >= 20) >> intel_bb_out(ibb, kernel | 1); >> else >> intel_bb_out(ibb, kernel); >> @@ -1006,7 +1006,7 @@ gen8_emit_ps(struct intel_bb *ibb, uint32_t kernel, bool fast_clear) { >> intel_bb_out(ibb, (max_threads - 1) << GEN8_3DSTATE_PS_MAX_THREADS_SHIFT | >> GEN6_3DSTATE_WM_16_DISPATCH_ENABLE | >> (fast_clear ? GEN8_3DSTATE_FAST_CLEAR_ENABLE : 0)); >> - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) >> + if (intel_gen(ibb->fd) >= 20) >> intel_bb_out(ibb, 6 << GEN6_3DSTATE_WM_DISPATCH_START_GRF_0_SHIFT | >> GENXE_KERNEL0_POLY_PACK16_FIXED << GENXE_KERNEL0_PACKING_POLICY); >> else >> @@ -1061,7 +1061,7 @@ gen9_emit_depth(struct intel_bb *ibb) >> >> static void >> gen7_emit_clear(struct intel_bb *ibb) { >> - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) >> + if (intel_gen(ibb->fd) >= 20) >> return; >> >> intel_bb_out(ibb, GEN7_3DSTATE_CLEAR_PARAMS | (3-2)); >> @@ -1072,7 +1072,7 @@ gen7_emit_clear(struct intel_bb *ibb) { >> static void >> gen6_emit_drawing_rectangle(struct intel_bb *ibb, const struct intel_buf *dst) >> { >> - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) >> + if (intel_gen(ibb->fd) >= 20) >> intel_bb_out(ibb, GENXE2_3DSTATE_DRAWING_RECTANGLE_FAST | (4 - 2)); >> else >> intel_bb_out(ibb, GEN4_3DSTATE_DRAWING_RECTANGLE | (4 - 2)); >> @@ -1266,7 +1266,7 @@ void _gen9_render_op(struct intel_bb *ibb, >> >> gen9_emit_state_base_address(ibb); >> >> - if (HAS_4TILE(ibb->devid) || intel_gen_legacy(ibb->devid) > 12) { >> + if (HAS_4TILE(ibb->devid) || intel_gen(ibb->fd) > 12) { >> intel_bb_out(ibb, GEN4_3DSTATE_BINDING_TABLE_POOL_ALLOC | 2); >> intel_bb_emit_reloc(ibb, ibb->handle, >> I915_GEM_DOMAIN_RENDER | I915_GEM_DOMAIN_INSTRUCTION, 0, >> diff --git a/lib/xe/xe_legacy.c b/lib/xe/xe_legacy.c >> index 1529ed1cc..c1ce9fa00 100644 >> --- a/lib/xe/xe_legacy.c >> +++ b/lib/xe/xe_legacy.c >> @@ -75,7 +75,7 @@ xe_legacy_test_mode(int fd, struct drm_xe_engine_class_instance *eci, >> igt_assert_lte(n_exec_queues, MAX_N_EXECQUEUES); >> >> if (flags & COMPRESSION) >> - igt_require(intel_gen_legacy(intel_get_drm_devid(fd)) >= 20); >> + igt_require(intel_gen(fd) >= 20); >> >> if (flags & CLOSE_FD) >> fd = drm_open_driver(DRIVER_XE); >> diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c >> index 36260e3e5..8ca137381 100644 >> --- a/lib/xe/xe_spin.c >> +++ b/lib/xe/xe_spin.c >> @@ -54,7 +54,6 @@ void xe_spin_init(struct xe_spin *spin, struct xe_spin_opts *opts) >> uint64_t pad_addr = opts->addr + offsetof(struct xe_spin, pad); >> uint64_t timestamp_addr = opts->addr + offsetof(struct xe_spin, timestamp); >> int b = 0; >> - uint32_t devid; >> >> spin->start = 0; >> spin->end = 0xffffffff; >> @@ -166,8 +165,7 @@ void xe_spin_init(struct xe_spin *spin, struct xe_spin_opts *opts) >> spin->batch[b++] = opts->mem_copy->dst_offset; >> spin->batch[b++] = opts->mem_copy->dst_offset << 32; >> >> - devid = intel_get_drm_devid(opts->mem_copy->fd); >> - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) >> + if (intel_graphics_ver(opts->mem_copy->fd) >= IP_VER(20, 0)) >> spin->batch[b++] = opts->mem_copy->src->mocs_index << XE2_MEM_COPY_SRC_MOCS_SHIFT | >> opts->mem_copy->dst->mocs_index << XE2_MEM_COPY_DST_MOCS_SHIFT; >> else >> diff --git a/lib/xe/xe_sriov_provisioning.c b/lib/xe/xe_sriov_provisioning.c >> index 7b60ccd6c..3d981766c 100644 >> --- a/lib/xe/xe_sriov_provisioning.c >> +++ b/lib/xe/xe_sriov_provisioning.c >> @@ -50,9 +50,7 @@ const char *xe_sriov_shared_res_to_string(enum xe_sriov_shared_res res) >> >> static uint64_t get_vfid_mask(int fd) >> { >> - uint16_t dev_id = intel_get_drm_devid(fd); >> - >> - return (intel_graphics_ver_legacy(dev_id) >= IP_VER(12, 50)) ? >> + return (intel_graphics_ver(fd) >= IP_VER(12, 50)) ? >> GGTT_PTE_VFID_MASK : PRE_1250_IP_VER_GGTT_PTE_VFID_MASK; >> } >> >> diff --git a/tests/intel/api_intel_allocator.c b/tests/intel/api_intel_allocator.c >> index 869e5e9a0..6b1d17da7 100644 >> --- a/tests/intel/api_intel_allocator.c >> +++ b/tests/intel/api_intel_allocator.c >> @@ -625,7 +625,7 @@ static void execbuf_with_allocator(int fd) >> uint64_t ahnd, sz = 4096, gtt_size; >> unsigned int flags = EXEC_OBJECT_PINNED; >> uint32_t *ptr, batch[32], copied; >> - int gen = intel_gen_legacy(intel_get_drm_devid(fd)); >> + int gen = intel_gen(fd); >> int i; >> const uint32_t magic = 0x900df00d; >> >> diff --git a/tests/intel/kms_ccs.c b/tests/intel/kms_ccs.c >> index 30f2c9465..a0373316a 100644 >> --- a/tests/intel/kms_ccs.c >> +++ b/tests/intel/kms_ccs.c >> @@ -565,7 +565,7 @@ static void access_flat_ccs_surface(struct igt_fb *fb, bool verify_compression) >> uint16_t cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; >> uint8_t uc_mocs = intel_get_uc_mocs_index(fb->fd); >> uint8_t comp_pat_index = intel_get_pat_idx_wt(fb->fd); >> - uint32_t region = (intel_gen_legacy(intel_get_drm_devid(fb->fd)) >= 20 && >> + uint32_t region = (intel_gen(fb->fd) >= 20 && >> xe_has_vram(fb->fd)) ? REGION_LMEM(0) : REGION_SMEM; >> >> struct drm_xe_engine_class_instance inst = { >> @@ -645,7 +645,7 @@ static void fill_fb_random(int drm_fd, igt_fb_t *fb) >> igt_assert_eq(0, gem_munmap(map, fb->size)); >> >> /* randomize also ccs surface on Xe2 */ >> - if (intel_gen_legacy(intel_get_drm_devid(drm_fd)) >= 20) >> + if (intel_gen(drm_fd) >= 20) >> access_flat_ccs_surface(fb, false); >> } >> >> @@ -1125,11 +1125,6 @@ static bool valid_modifier_test(u64 modifier, const enum test_flags flags) >> >> static void test_output(data_t *data, const int testnum) >> { >> - uint16_t dev_id; >> - >> - igt_fixture() >> - dev_id = intel_get_drm_devid(data->drm_fd); >> - >> data->flags = tests[testnum].flags; >> >> for (int i = 0; i < ARRAY_SIZE(ccs_modifiers); i++) { >> @@ -1143,10 +1138,10 @@ static void test_output(data_t *data, const int testnum) >> igt_subtest_with_dynamic_f("%s-%s", tests[testnum].testname, ccs_modifiers[i].str) { >> if (ccs_modifiers[i].modifier == I915_FORMAT_MOD_4_TILED_BMG_CCS || >> ccs_modifiers[i].modifier == I915_FORMAT_MOD_4_TILED_LNL_CCS) { >> - igt_require_f(intel_gen_legacy(dev_id) >= 20, >> + igt_require_f(intel_gen(data->drm_fd) >= 20, >> "Xe2 platform needed.\n"); >> } else { >> - igt_require_f(intel_gen_legacy(dev_id) < 20, >> + igt_require_f(intel_gen(data->drm_fd) < 20, >> "Older than Xe2 platform needed.\n"); >> } >> >> diff --git a/tests/intel/kms_fbcon_fbt.c b/tests/intel/kms_fbcon_fbt.c >> index edf5c0d1b..b28961417 100644 >> --- a/tests/intel/kms_fbcon_fbt.c >> +++ b/tests/intel/kms_fbcon_fbt.c >> @@ -179,7 +179,7 @@ static bool fbc_wait_until_update(struct drm_info *drm) >> * For older GENs FBC is still expected to be disabled as it still >> * relies on a tiled and fenceable framebuffer to track modifications. >> */ >> - if (intel_gen_legacy(intel_get_drm_devid(drm->fd)) >= 9) { >> + if (intel_gen(drm->fd) >= 9) { >> if (!fbc_wait_until_enabled(drm->debugfs_fd)) >> return false; >> /* >> diff --git a/tests/intel/kms_frontbuffer_tracking.c b/tests/intel/kms_frontbuffer_tracking.c >> index c8c2ce240..5b60587db 100644 >> --- a/tests/intel/kms_frontbuffer_tracking.c >> +++ b/tests/intel/kms_frontbuffer_tracking.c >> @@ -3062,13 +3062,13 @@ static bool tiling_is_valid(int feature_flags, enum tiling_type tiling) >> >> switch (tiling) { >> case TILING_LINEAR: >> - return intel_gen_legacy(drm.devid) >= 9; >> + return intel_gen(drm.fd) >= 9; >> case TILING_X: >> return (intel_get_device_info(drm.devid)->display_ver > 29) ? false : true; >> case TILING_Y: >> return true; >> case TILING_4: >> - return intel_gen_legacy(drm.devid) >= 12; >> + return intel_gen(drm.fd) >= 12; >> default: >> igt_assert(false); >> return false; >> @@ -4475,7 +4475,7 @@ int igt_main_args("", long_options, help_str, opt_handler, NULL) >> igt_require(igt_draw_supports_method(drm.fd, t.method)); >> >> if (t.tiling == TILING_Y) { >> - igt_require(intel_gen_legacy(drm.devid) >= 9); >> + igt_require(intel_gen(drm.fd) >= 9); >> igt_require(!intel_get_device_info(drm.devid)->has_4tile); >> } >> >> diff --git a/tests/intel/kms_pipe_stress.c b/tests/intel/kms_pipe_stress.c >> index 1ae32d5fd..f8c994d07 100644 >> --- a/tests/intel/kms_pipe_stress.c >> +++ b/tests/intel/kms_pipe_stress.c >> @@ -822,7 +822,7 @@ static void prepare_test(struct data *data) >> >> create_framebuffers(data); >> >> - if (intel_gen_legacy(intel_get_drm_devid(data->drm_fd)) > 9) >> + if (intel_gen(data->drm_fd) > 9) >> start_gpu_threads(data); >> } >> >> @@ -830,7 +830,7 @@ static void finish_test(struct data *data) >> { >> int i; >> >> - if (intel_gen_legacy(intel_get_drm_devid(data->drm_fd)) > 9) >> + if (intel_gen(data->drm_fd) > 9) >> stop_gpu_threads(data); >> >> /* >> diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c >> index 914144270..0ba8ae48c 100644 >> --- a/tests/intel/xe_ccs.c >> +++ b/tests/intel/xe_ccs.c >> @@ -128,7 +128,7 @@ static void surf_copy(int xe, >> int result; >> >> igt_assert(mid->compression); >> - if (intel_gen_legacy(devid) >= 20 && mid->compression) { >> + if (intel_gen(xe) >= 20 && mid->compression) { >> comp_pat_index = intel_get_pat_idx_uc_comp(xe); >> cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; >> } >> @@ -177,7 +177,7 @@ static void surf_copy(int xe, >> if (IS_GEN(devid, 12) && is_intel_dgfx(xe)) { >> igt_assert(!strcmp(orig, newsum)); >> igt_assert(!strcmp(orig2, newsum2)); >> - } else if (intel_gen_legacy(devid) >= 20) { >> + } else if (intel_gen(xe) >= 20) { >> if (is_intel_dgfx(xe)) { >> /* buffer object would become >> * uncompressed in xe2+ dgfx >> @@ -227,7 +227,7 @@ static void surf_copy(int xe, >> * uncompressed in xe2+ dgfx, and therefore retrieve the >> * ccs by copying 0 to ccsmap >> */ >> - if (suspend_resume && intel_gen_legacy(devid) >= 20 && is_intel_dgfx(xe)) >> + if (suspend_resume && intel_gen(xe) >= 20 && is_intel_dgfx(xe)) >> memset(ccsmap, 0, ccssize); >> else >> /* retrieve back ccs */ >> @@ -353,7 +353,7 @@ static void block_copy(int xe, >> uint64_t bb_size = xe_bb_size(xe, SZ_4K); >> uint64_t ahnd = intel_allocator_open(xe, ctx->vm, INTEL_ALLOCATOR_RELOC); >> uint32_t run_id = mid_tiling; >> - uint32_t mid_region = (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && >> + uint32_t mid_region = (intel_gen(xe) >= 20 && >> !xe_has_vram(xe)) ? region1 : region2; >> uint32_t bb; >> enum blt_compression mid_compression = config->compression; >> @@ -441,7 +441,7 @@ static void block_copy(int xe, >> if (config->inplace) { >> uint8_t pat_index = DEFAULT_PAT_INDEX; >> >> - if (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && config->compression) >> + if (intel_gen(xe) >= 20 && config->compression) >> pat_index = intel_get_pat_idx_uc_comp(xe); >> >> blt_set_object(&blt.dst, mid->handle, dst->size, mid->region, 0, >> @@ -488,7 +488,7 @@ static void block_multicopy(int xe, >> uint64_t bb_size = xe_bb_size(xe, SZ_4K); >> uint64_t ahnd = intel_allocator_open(xe, ctx->vm, INTEL_ALLOCATOR_RELOC); >> uint32_t run_id = mid_tiling; >> - uint32_t mid_region = (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && >> + uint32_t mid_region = (intel_gen(xe) >= 20 && >> !xe_has_vram(xe)) ? region1 : region2; >> uint32_t bb; >> enum blt_compression mid_compression = config->compression; >> @@ -530,7 +530,7 @@ static void block_multicopy(int xe, >> if (config->inplace) { >> uint8_t pat_index = DEFAULT_PAT_INDEX; >> >> - if (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && config->compression) >> + if (intel_gen(xe) >= 20 && config->compression) >> pat_index = intel_get_pat_idx_uc_comp(xe); >> >> blt_set_object(&blt3.dst, mid->handle, dst->size, mid->region, >> @@ -715,7 +715,7 @@ static void block_copy_test(int xe, >> int tiling, width, height; >> >> >> - if (intel_gen_legacy(dev_id) >= 20 && config->compression) >> + if (intel_gen(xe) >= 20 && config->compression) >> igt_require(HAS_FLATCCS(dev_id)); >> >> if (config->compression && !blt_block_copy_supports_compression(xe)) >> diff --git a/tests/intel/xe_compute.c b/tests/intel/xe_compute.c >> index 7b6c39c77..1cb86920f 100644 >> --- a/tests/intel/xe_compute.c >> +++ b/tests/intel/xe_compute.c >> @@ -232,7 +232,7 @@ test_compute_kernel_loop(uint64_t loop_duration) >> double elapse_time, lower_bound, upper_bound; >> >> fd = drm_open_driver(DRIVER_XE); >> - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); >> + ip_ver = intel_graphics_ver(fd); >> kernels = intel_compute_square_kernels; >> >> while (kernels->kernel) { >> @@ -335,7 +335,7 @@ igt_check_supported_pipeline(void) >> const struct intel_compute_kernels *kernels; >> >> fd = drm_open_driver(DRIVER_XE); >> - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); >> + ip_ver = intel_graphics_ver(fd); >> kernels = intel_compute_square_kernels; >> drm_close_driver(fd); >> >> @@ -432,7 +432,7 @@ test_eu_busy(uint64_t duration_sec) >> >> fd = drm_open_driver(DRIVER_XE); >> >> - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); >> + ip_ver = intel_graphics_ver(fd); >> kernels = intel_compute_square_kernels; >> while (kernels->kernel) { >> if (ip_ver == kernels->ip_ver) >> @@ -518,7 +518,7 @@ int igt_main() >> igt_fixture() { >> xe = drm_open_driver(DRIVER_XE); >> sriov_enabled = is_sriov_mode(xe); >> - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(xe)); >> + ip_ver = intel_graphics_ver(xe); >> igt_store_ccs_mode(ccs_mode, ARRAY_SIZE(ccs_mode)); >> } >> >> diff --git a/tests/intel/xe_copy_basic.c b/tests/intel/xe_copy_basic.c >> index 55081f938..e37bad746 100644 >> --- a/tests/intel/xe_copy_basic.c >> +++ b/tests/intel/xe_copy_basic.c >> @@ -261,7 +261,6 @@ const char *help_str = >> int igt_main_args("b", NULL, help_str, opt_handler, NULL) >> { >> int fd; >> - uint16_t dev_id; >> struct igt_collection *set, *regions; >> uint32_t region; >> struct rect linear[] = { { 0, 0xfd, 1, MODE_BYTE }, >> @@ -275,7 +274,6 @@ int igt_main_args("b", NULL, help_str, opt_handler, NULL) >> >> igt_fixture() { >> fd = drm_open_driver(DRIVER_XE); >> - dev_id = intel_get_drm_devid(fd); >> xe_device_get(fd); >> set = xe_get_memory_region_set(fd, >> DRM_XE_MEM_REGION_CLASS_SYSMEM, >> @@ -295,7 +293,7 @@ int igt_main_args("b", NULL, help_str, opt_handler, NULL) >> for (int i = 0; i < ARRAY_SIZE(page); i++) { >> igt_subtest_f("mem-page-copy-%u", page[i].width) { >> igt_require(blt_has_mem_copy(fd)); >> - igt_require(intel_get_device_info(dev_id)->graphics_ver >= 20); >> + igt_require(intel_gen(fd) >= 20); >> for_each_variation_r(regions, 1, set) { >> region = igt_collection_get_value(regions, 0); >> copy_test(fd, &page[i], MEM_COPY, region); >> @@ -320,7 +318,7 @@ int igt_main_args("b", NULL, help_str, opt_handler, NULL) >> * till 0x3FFFF. >> */ >> if (linear[i].width > 0x3ffff && >> - (intel_get_device_info(dev_id)->graphics_ver < 20)) >> + (intel_gen(fd) < 20)) >> igt_skip("Skipping: width exceeds 18-bit limit on gfx_ver < 20\n"); >> igt_require(blt_has_mem_set(fd)); >> for_each_variation_r(regions, 1, set) { >> diff --git a/tests/intel/xe_debugfs.c b/tests/intel/xe_debugfs.c >> index facb55854..4075b173a 100644 >> --- a/tests/intel/xe_debugfs.c >> +++ b/tests/intel/xe_debugfs.c >> @@ -296,7 +296,6 @@ static void test_tile_dir(struct xe_device *xe_dev, uint8_t tile) >> */ >> static void test_info_read(struct xe_device *xe_dev) >> { >> - uint16_t devid = intel_get_drm_devid(xe_dev->fd); >> struct drm_xe_query_config *config; >> const char *name = "info"; >> bool failed = false; >> @@ -329,7 +328,7 @@ static void test_info_read(struct xe_device *xe_dev) >> failed = true; >> } >> >> - if (intel_gen_legacy(devid) < 20) { >> + if (intel_gen(xe_dev->fd) < 20) { >> val = -1; >> >> switch (config->info[DRM_XE_QUERY_CONFIG_VA_BITS]) { >> diff --git a/tests/intel/xe_eudebug_online.c b/tests/intel/xe_eudebug_online.c >> index f64b12b3f..961cf5afc 100644 >> --- a/tests/intel/xe_eudebug_online.c >> +++ b/tests/intel/xe_eudebug_online.c >> @@ -400,9 +400,7 @@ static uint64_t eu_ctl(int debugfd, uint64_t client, >> >> static bool intel_gen_needs_resume_wa(int fd) >> { >> - const uint32_t id = intel_get_drm_devid(fd); >> - >> - return intel_gen_legacy(id) == 12 && intel_graphics_ver_legacy(id) < IP_VER(12, 55); >> + return intel_gen(fd) == 12 && intel_graphics_ver(fd) < IP_VER(12, 55); >> } >> >> static uint64_t eu_ctl_resume(int fd, int debugfd, uint64_t client, >> @@ -1222,8 +1220,6 @@ static void run_online_client(struct xe_eudebug_client *c) >> >> static bool intel_gen_has_lockstep_eus(int fd) >> { >> - const uint32_t id = intel_get_drm_devid(fd); >> - >> /* >> * Lockstep (or in some parlance, fused) EUs are pair of EUs >> * that work in sync, supposedly same clock and same control flow. >> @@ -1231,7 +1227,7 @@ static bool intel_gen_has_lockstep_eus(int fd) >> * excepted into SIP. In this level, the hardware has only one attention >> * thread bit for units. PVC is the first one without lockstepping. >> */ >> - return !(intel_graphics_ver_legacy(id) == IP_VER(12, 60) || intel_gen_legacy(id) >= 20); >> + return !(intel_graphics_ver(fd) == IP_VER(12, 60) || intel_gen(fd) >= 20); >> } >> >> static int query_attention_bitmask_size(int fd, int gt) >> diff --git a/tests/intel/xe_exec_multi_queue.c b/tests/intel/xe_exec_multi_queue.c >> index 1d416efc9..bf09efcc3 100644 >> --- a/tests/intel/xe_exec_multi_queue.c >> +++ b/tests/intel/xe_exec_multi_queue.c >> @@ -1047,7 +1047,7 @@ int igt_main() >> >> igt_fixture() { >> fd = drm_open_driver(DRIVER_XE); >> - igt_require(intel_graphics_ver_legacy(intel_get_drm_devid(fd)) >= IP_VER(35, 0)); >> + igt_require(intel_graphics_ver(fd) >= IP_VER(35, 0)); >> } >> >> igt_subtest_f("sanity") >> diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c >> index 498ab42b7..9e6a96aa8 100644 >> --- a/tests/intel/xe_exec_store.c >> +++ b/tests/intel/xe_exec_store.c >> @@ -55,8 +55,7 @@ static void store_dword_batch(struct data *data, uint64_t addr, int value) >> data->addr = batch_addr; >> } >> >> -static void cond_batch(struct data *data, uint64_t addr, int value, >> - uint16_t dev_id) >> +static void cond_batch(int fd, struct data *data, uint64_t addr, int value) >> { >> int b; >> uint64_t batch_offset = (char *)&(data->batch) - (char *)data; >> @@ -69,7 +68,7 @@ static void cond_batch(struct data *data, uint64_t addr, int value, >> data->batch[b++] = sdi_addr; >> data->batch[b++] = sdi_addr >> 32; >> >> - if (intel_graphics_ver_legacy(dev_id) >= IP_VER(20, 0)) >> + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) >> data->batch[b++] = MI_MEM_FENCE | MI_WRITE_FENCE; >> >> data->batch[b++] = MI_CONDITIONAL_BATCH_BUFFER_END | MI_DO_COMPARE | 5 << 12 | 2; >> @@ -112,8 +111,7 @@ static void persistance_batch(struct data *data, uint64_t addr) >> * SUBTEST: basic-all >> * Description: Test to verify store dword on all available engines. >> */ >> -static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instance *eci, >> - uint16_t dev_id) >> +static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instance *eci) >> { >> struct drm_xe_sync sync[2] = { >> { .type = DRM_XE_SYNC_TYPE_SYNCOBJ, .flags = DRM_XE_SYNC_FLAG_SIGNAL, }, >> @@ -156,7 +154,7 @@ static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instanc >> else if (inst_type == COND_BATCH) { >> /* A random value where it stops at the below value. */ >> value = 20 + random() % 10; >> - cond_batch(data, addr, value, dev_id); >> + cond_batch(fd, data, addr, value); >> } >> else >> igt_assert_f(inst_type < 2, "Entered wrong inst_type.\n"); >> @@ -416,23 +414,21 @@ int igt_main() >> { >> struct drm_xe_engine_class_instance *hwe; >> int fd; >> - uint16_t dev_id; >> struct drm_xe_engine *engine; >> >> igt_fixture() { >> fd = drm_open_driver(DRIVER_XE); >> xe_device_get(fd); >> - dev_id = intel_get_drm_devid(fd); >> } >> >> igt_subtest("basic-store") { >> engine = xe_engine(fd, 1); >> - basic_inst(fd, STORE, &engine->instance, dev_id); >> + basic_inst(fd, STORE, &engine->instance); >> } >> >> igt_subtest("basic-cond-batch") { >> engine = xe_engine(fd, 1); >> - basic_inst(fd, COND_BATCH, &engine->instance, dev_id); >> + basic_inst(fd, COND_BATCH, &engine->instance); >> } >> >> igt_subtest_with_dynamic("basic-all") { >> @@ -441,7 +437,7 @@ int igt_main() >> xe_engine_class_string(hwe->engine_class), >> hwe->engine_instance, >> hwe->gt_id); >> - basic_inst(fd, STORE, hwe, dev_id); >> + basic_inst(fd, STORE, hwe); >> } >> } >> >> diff --git a/tests/intel/xe_fault_injection.c b/tests/intel/xe_fault_injection.c >> index 8adc5c15a..57c5a5579 100644 >> --- a/tests/intel/xe_fault_injection.c >> +++ b/tests/intel/xe_fault_injection.c >> @@ -486,12 +486,12 @@ vm_bind_fail(int fd, const char pci_slot[], const char function_name[]) >> * @xe_oa_alloc_regs: xe_oa_alloc_regs >> */ >> static void >> -oa_add_config_fail(int fd, int sysfs, int devid, >> +oa_add_config_fail(int fd, int sysfs, >> const char pci_slot[], const char function_name[]) >> { >> char path[512]; >> uint64_t config_id; >> -#define SAMPLE_MUX_REG (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0) ? \ >> +#define SAMPLE_MUX_REG (intel_graphics_ver(fd) >= IP_VER(20, 0) ? \ >> 0x13000 /* PES* */ : 0x9888 /* NOA_WRITE */) >> >> uint32_t mux_regs[] = { SAMPLE_MUX_REG, 0x0 }; >> @@ -557,7 +557,6 @@ int igt_main_args("I:", NULL, help_str, opt_handler, NULL) >> int fd, sysfs; >> struct drm_xe_engine_class_instance *hwe; >> struct fault_injection_params fault_params; >> - static uint32_t devid; >> char pci_slot[NAME_MAX]; >> bool is_vf_device; >> const struct section { >> @@ -627,7 +626,6 @@ int igt_main_args("I:", NULL, help_str, opt_handler, NULL) >> igt_fixture() { >> igt_require(fail_function_injection_enabled()); >> fd = drm_open_driver(DRIVER_XE); >> - devid = intel_get_drm_devid(fd); >> sysfs = igt_sysfs_open(fd); >> igt_device_get_pci_slot_name(fd, pci_slot); >> setup_injection_fault(&default_fault_params); >> @@ -659,7 +657,7 @@ int igt_main_args("I:", NULL, help_str, opt_handler, NULL) >> >> for (const struct section *s = oa_add_config_fail_functions; s->name; s++) >> igt_subtest_f("oa-add-config-fail-%s", s->name) >> - oa_add_config_fail(fd, sysfs, devid, pci_slot, s->name); >> + oa_add_config_fail(fd, sysfs, pci_slot, s->name); >> >> igt_fixture() { >> igt_kmod_unbind("xe", pci_slot); >> diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c >> index 5c112351f..e37d00d2c 100644 >> --- a/tests/intel/xe_intel_bb.c >> +++ b/tests/intel/xe_intel_bb.c >> @@ -710,7 +710,7 @@ static void do_intel_bb_blit(struct buf_ops *bops, int loops, uint32_t tiling) >> int i, fails = 0, xe = buf_ops_get_fd(bops); >> >> /* We'll fix it for gen2/3 later. */ >> - igt_require(intel_gen_legacy(intel_get_drm_devid(xe)) > 3); >> + igt_require(intel_gen(xe) > 3); >> >> for (i = 0; i < loops; i++) >> fails += __do_intel_bb_blit(bops, tiling); >> @@ -878,10 +878,9 @@ static int render(struct buf_ops *bops, uint32_t tiling, >> int xe = buf_ops_get_fd(bops); >> uint32_t fails = 0; >> char name[128]; >> - uint32_t devid = intel_get_drm_devid(xe); >> igt_render_copyfunc_t render_copy = NULL; >> >> - igt_debug("%s() gen: %d\n", __func__, intel_gen_legacy(devid)); >> + igt_debug("%s() gen: %d\n", __func__, intel_gen(xe)); >> >> ibb = intel_bb_create(xe, PAGE_SIZE); >> >> @@ -1041,7 +1040,7 @@ int igt_main_args("dpib", NULL, help_str, opt_handler, NULL) >> do_intel_bb_blit(bops, 3, I915_TILING_X); >> >> igt_subtest("intel-bb-blit-y") { >> - igt_require(intel_gen_legacy(intel_get_drm_devid(xe)) >= 6); >> + igt_require(intel_gen(xe) >= 6); >> do_intel_bb_blit(bops, 3, I915_TILING_Y); >> } >> >> diff --git a/tests/intel/xe_multigpu_svm.c b/tests/intel/xe_multigpu_svm.c >> index ab800476e..2c6f81a10 100644 >> --- a/tests/intel/xe_multigpu_svm.c >> +++ b/tests/intel/xe_multigpu_svm.c >> @@ -396,7 +396,6 @@ static void batch_init(int fd, uint32_t vm, uint64_t src_addr, >> uint64_t batch_addr; >> void *batch; >> uint32_t *cmd; >> - uint16_t dev_id = intel_get_drm_devid(fd); >> uint32_t mocs_index = intel_get_uc_mocs_index(fd); >> int i = 0; >> >> @@ -412,7 +411,7 @@ static void batch_init(int fd, uint32_t vm, uint64_t src_addr, >> cmd[i++] = upper_32_bits(src_addr); >> cmd[i++] = lower_32_bits(dst_addr); >> cmd[i++] = upper_32_bits(dst_addr); >> - if (intel_graphics_ver_legacy(dev_id) >= IP_VER(20, 0)) { >> + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { >> cmd[i++] = mocs_index << XE2_MEM_COPY_SRC_MOCS_SHIFT | mocs_index; >> } else { >> cmd[i++] = mocs_index << GEN12_MEM_COPY_MOCS_SHIFT | mocs_index; >> diff --git a/tests/intel/xe_pat.c b/tests/intel/xe_pat.c >> index 96302ad3a..96d544160 100644 >> --- a/tests/intel/xe_pat.c >> +++ b/tests/intel/xe_pat.c >> @@ -119,14 +119,13 @@ static int xe_fetch_pat_sw_config(int fd, struct intel_pat_cache *pat_sw_config) >> */ >> static void pat_sanity(int fd) >> { >> - uint16_t dev_id = intel_get_drm_devid(fd); >> struct intel_pat_cache pat_sw_config = {}; >> int32_t parsed; >> bool has_uc_comp = false, has_wt = false; >> >> parsed = xe_fetch_pat_sw_config(fd, &pat_sw_config); >> >> - if (intel_graphics_ver_legacy(dev_id) >= IP_VER(20, 0)) { >> + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { >> for (int i = 0; i < parsed; i++) { >> uint32_t pat = pat_sw_config.entries[i].pat; >> if (pat_sw_config.entries[i].rsvd) >> @@ -898,7 +897,6 @@ static void display_vs_wb_transient(int fd) >> 3, /* UC (baseline) */ >> 6, /* L3:XD (uncompressed) */ >> }; >> - uint32_t devid = intel_get_drm_devid(fd); >> igt_render_copyfunc_t render_copy = NULL; >> igt_crc_t ref_crc = {}, crc = {}; >> igt_plane_t *primary; >> @@ -914,7 +912,7 @@ static void display_vs_wb_transient(int fd) >> int bpp = 32; >> int i; >> >> - igt_require(intel_get_device_info(devid)->graphics_ver >= 20); >> + igt_require(intel_gen(fd) >= 20); >> >> render_copy = igt_get_render_copyfunc(fd); >> igt_require(render_copy); >> @@ -1015,10 +1013,8 @@ static uint8_t get_pat_idx_uc(int fd, bool *compressed) >> >> static uint8_t get_pat_idx_wt(int fd, bool *compressed) >> { >> - uint16_t dev_id = intel_get_drm_devid(fd); >> - >> if (compressed) >> - *compressed = intel_get_device_info(dev_id)->graphics_ver >= 20; >> + *compressed = intel_gen(fd) >= 20; >> >> return intel_get_pat_idx_wt(fd); >> } >> @@ -1328,7 +1324,7 @@ int igt_main_args("V", NULL, help_str, opt_handler, NULL) >> bo_comp_disable_bind(fd); >> >> igt_subtest_with_dynamic("pat-index-xelp") { >> - igt_require(intel_graphics_ver_legacy(dev_id) <= IP_VER(12, 55)); >> + igt_require(intel_graphics_ver(fd) <= IP_VER(12, 55)); >> subtest_pat_index_modes_with_regions(fd, xelp_pat_index_modes, >> ARRAY_SIZE(xelp_pat_index_modes)); >> } >> @@ -1346,10 +1342,10 @@ int igt_main_args("V", NULL, help_str, opt_handler, NULL) >> } >> >> igt_subtest_with_dynamic("pat-index-xe2") { >> - igt_require(intel_get_device_info(dev_id)->graphics_ver >= 20); >> + igt_require(intel_gen(fd) >= 20); >> igt_assert(HAS_FLATCCS(dev_id)); >> >> - if (intel_graphics_ver_legacy(dev_id) == IP_VER(20, 1)) >> + if (intel_graphics_ver(fd) == IP_VER(20, 1)) >> subtest_pat_index_modes_with_regions(fd, bmg_g21_pat_index_modes, >> ARRAY_SIZE(bmg_g21_pat_index_modes)); >> else >> diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c >> index 318a9994a..ae505a5d7 100644 >> --- a/tests/intel/xe_query.c >> +++ b/tests/intel/xe_query.c >> @@ -380,7 +380,7 @@ test_query_gt_topology(int fd) >> } >> >> /* sanity check EU type */ >> - if (IS_PONTEVECCHIO(dev_id) || intel_gen_legacy(dev_id) >= 20) { >> + if (IS_PONTEVECCHIO(dev_id) || intel_gen(fd) >= 20) { >> igt_assert(topo_types & (1 << DRM_XE_TOPO_SIMD16_EU_PER_DSS)); >> igt_assert_eq(topo_types & (1 << DRM_XE_TOPO_EU_PER_DSS), 0); >> } else { >> @@ -428,7 +428,7 @@ test_query_gt_topology_l3_bank_mask(int fd) >> } >> >> igt_info(" count: %d\n", count); >> - if (intel_get_device_info(dev_id)->graphics_ver < 20) { >> + if (intel_gen(fd) < 20) { >> igt_assert_lt(0, count); >> } >> >> diff --git a/tests/intel/xe_render_copy.c b/tests/intel/xe_render_copy.c >> index 0a6ae9ca2..a3976b5f1 100644 >> --- a/tests/intel/xe_render_copy.c >> +++ b/tests/intel/xe_render_copy.c >> @@ -136,7 +136,7 @@ static int compare_bufs(struct intel_buf *buf1, struct intel_buf *buf2, >> static bool buf_is_aux_compressed(struct buf_ops *bops, struct intel_buf *buf) >> { >> int xe = buf_ops_get_fd(bops); >> - unsigned int gen = intel_gen_legacy(buf_ops_get_devid(bops)); >> + unsigned int gen = intel_gen(buf_ops_get_fd(bops)); >> uint32_t ccs_size; >> uint8_t *ptr; >> bool is_compressed = false; >> -- >> 2.43.0 >> ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v2 3/3] intel/xe: use fd-based graphics/IP version helpers 2026-02-25 8:51 ` Wang, X @ 2026-02-25 23:18 ` Matt Roper 0 siblings, 0 replies; 12+ messages in thread From: Matt Roper @ 2026-02-25 23:18 UTC (permalink / raw) To: Wang, X; +Cc: igt-dev, Kamil Konieczny, Zbigniew Kempczyński, Ravi Kumar V On Wed, Feb 25, 2026 at 12:51:51AM -0800, Wang, X wrote: > > On 2/4/2026 10:56, Matt Roper wrote: > > On Thu, Jan 22, 2026 at 07:15:30AM +0000, Xin Wang wrote: > > > Switch Xe‑related libraries and tests to use fd‑based intel_gen() and > > > intel_graphics_ver() instead of PCI ID lookups, keeping behavior aligned > > > with Xe IP disaggregation. > > You might want to mention the specific special cases that aren't > > transitioned over and will remain on pciid-based lookup so that > > reviewers can grep the resulting tree and make sure nothing was missed. > > I just did a grep and it seems like there are still quite a few tests > > using the pciid-based lookup which probably don't need to be; those > > might be oversights: > Regarding the i915-only tests: this was intentional. For i915 devices, > the new fd-based helpers simply fall back to the same pciid-based lookup > internally, so switching those callers would add overhead without any > real benefit. Additionally, much of the i915-specific code is in > maintenance mode and not actively being updated, so I preferred to leave > it as-is. My worry is that if we have two different APIs that are widely used throughout the codebase, then it's more likely people will have trouble understanding the difference and will just copy/paste the wrong one when writing new code. Migrating everything relevant (even the i915-specific tests) over to the fd-based API makes it more obvious that's the API that should be used almost everywhere going forward. Then it's also easier to audit the small number of places where the devid API is used and make sure that they're truly the places where it's needed (e.g., things running without an active driver, and a couple ancient tools). > > More broadly, there are also cases (e.g. standalone tools that run > before the DRM driver is loaded) where the pciid-based approach is the > only viable option. So I think it makes sense to keep > intel_gen_from_pciid() / intel_graphics_ver_from_pciid() as > legitimate APIs rather than treating them as purely transitional. Yes, I agree. We can't get rid of these APIs entirely, although there's only a very small number of special cases like this where they're truly necessary now. > > Does that reasoning make sense to you, or do you still think we should > aim for a full migration across all callers? Yeah, I understand your position too. Ultimately it's probably something that the IGT maintainers should make the decision on since they're the ones who are truly on the hook to manage future development and maintain the IGT codebase going forward. If they see one way or the other as a lower burden for them, that's probably the most important thing. > > > $ grep -Irl intel_gen_legacy tests/ > > tests/prime_vgem.c > > tests/intel/gem_exec_fair.c > > tests/intel/gen7_exec_parse.c > > tests/intel/gem_linear_blits.c > > tests/intel/gem_evict_alignment.c > > tests/intel/gem_exec_store.c > > tests/intel/gem_exec_flush.c > > tests/intel/i915_getparams_basic.c > > tests/intel/i915_pm_rpm.c > > tests/intel/gem_mmap_gtt.c > > tests/intel/gem_softpin.c > > tests/intel/gem_sync.c > > tests/intel/gem_tiled_fence_blits.c > > tests/intel/gem_close_race.c > > tests/intel/gem_tiling_max_stride.c > > tests/intel/gem_ctx_isolation.c > > tests/intel/gem_exec_nop.c > > tests/intel/gem_evict_everything.c > > tests/intel/perf_pmu.c > > tests/intel/sysfs_timeslice_duration.c > > tests/intel/gem_ctx_shared.c > > tests/intel/gem_ctx_engines.c > > tests/intel/gem_exec_fence.c > > tests/intel/gem_exec_balancer.c > > tests/intel/gem_exec_latency.c > > tests/intel/gem_exec_schedule.c > > tests/intel/gem_gtt_hog.c > > tests/intel/gem_blits.c > > tests/intel/gem_exec_await.c > > tests/intel/gem_exec_capture.c > > tests/intel/gem_ringfill.c > > tests/intel/perf.c > > tests/intel/gem_exec_params.c > > tests/intel/sysfs_preempt_timeout.c > > tests/intel/gem_exec_suspend.c > > tests/intel/gem_exec_reloc.c > > tests/intel/gem_exec_whisper.c > > tests/intel/gem_exec_gttfill.c > > tests/intel/gem_exec_parallel.c > > tests/intel/gem_watchdog.c > > tests/intel/gem_exec_big.c > > tests/intel/gem_set_tiling_vs_blt.c > > tests/intel/gem_render_copy.c > > tests/intel/gen9_exec_parse.c > > tests/intel/gem_vm_create.c > > tests/intel/i915_pm_rc6_residency.c > > tests/intel/i915_module_load.c > > tests/intel/gem_streaming_writes.c > > tests/intel/gem_fenced_exec_thrash.c > > tests/intel/gem_workarounds.c > > tests/intel/gem_ctx_create.c > > tests/intel/i915_pm_sseu.c > > tests/intel/gem_concurrent_all.c > > tests/intel/gem_ctx_sseu.c > > tests/intel/gem_read_read_speed.c > > tests/intel/api_intel_bb.c > > tests/intel/gem_bad_reloc.c > > tests/intel/gem_media_vme.c > > tests/intel/gem_exec_async.c > > tests/intel/gem_userptr_blits.c > > tests/intel/gem_eio.c > > > > An alternate approach would be to structure this series as: > > > > - Create the "legacy" functions as a duplicate of the existing > > pciid-based functions and explicitly convert the special cases that > > we expect to remain on PCI ID. > > > > - Change the signature of intel_graphics_ver / intel_gen and all > > remaining callsites. > > > > That will ensure everything gets converted over (otherwise there will be > > a build failure because anything not converted will be trying to use the > > wrong function signature). It also makes it a little bit easier to > > directly review the special cases and make sure they all truly need to > > be special cases. > > > > > > As mentioned on the first patch, if you're using something like > > Coccinelle to do these conversions, providing the semantic patch(es) > > used in the commit message would be helpful. > Regarding the suggestion to use Coccinelle: some of the changes in > this patch cannot be handled automatically by a script because they > involve function signature modifications. For example: > -static void basic_inst(int fd, int inst_type, > - struct drm_xe_engine_class_instance *eci, > - uint16_t dev_id) > +static void basic_inst(int fd, int inst_type, > + struct drm_xe_engine_class_instance *eci) > This requires updating the function definition, its internal uses of > dev_id, and all call sites simultaneously, which is beyond what > Coccinelle can handle across function boundaries. Yeah, Coccinelle is great, but it's not perfect so there are definitely some limitations. Last time I checked it also gets confused in some places of the IGT codebase due to the use of #define's that its parser can't understand. But sometimes it's possible to provide a Cocci-generated patch that does 90% of the necessary work (and can be easily re-generated on demand) and then a simpler human-crafted patch that makes the final 10% of changes that Coccinelle missed. That can make things easier for reviewers as well. Of course that all assumes the Coccinelle patch doesn't leave the tree in a broken intermediate state. Matt > > Xin > > > > > Matt > > > > > Cc: Kamil Konieczny <kamil.konieczny@linux.intel.com> > > > Cc: Matt Roper <matthew.d.roper@intel.com> > > > Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com> > > > Cc: Ravi Kumar V <ravi.kumar.vodapalli@intel.com> > > > Signed-off-by: Xin Wang <x.wang@intel.com> > > > --- > > > lib/gpgpu_shader.c | 5 ++- > > > lib/gpu_cmds.c | 21 ++++++----- > > > lib/intel_batchbuffer.c | 14 +++----- > > > lib/intel_blt.c | 21 +++++------ > > > lib/intel_blt.h | 2 +- > > > lib/intel_bufops.c | 10 +++--- > > > lib/intel_common.c | 2 +- > > > lib/intel_compute.c | 6 ++-- > > > lib/intel_mocs.c | 48 ++++++++++++++------------ > > > lib/intel_pat.c | 19 +++++----- > > > lib/rendercopy_gen9.c | 22 ++++++------ > > > lib/xe/xe_legacy.c | 2 +- > > > lib/xe/xe_spin.c | 4 +-- > > > lib/xe/xe_sriov_provisioning.c | 4 +-- > > > tests/intel/api_intel_allocator.c | 2 +- > > > tests/intel/kms_ccs.c | 13 +++---- > > > tests/intel/kms_fbcon_fbt.c | 2 +- > > > tests/intel/kms_frontbuffer_tracking.c | 6 ++-- > > > tests/intel/kms_pipe_stress.c | 4 +-- > > > tests/intel/xe_ccs.c | 16 ++++----- > > > tests/intel/xe_compute.c | 8 ++--- > > > tests/intel/xe_copy_basic.c | 6 ++-- > > > tests/intel/xe_debugfs.c | 3 +- > > > tests/intel/xe_eudebug_online.c | 8 ++--- > > > tests/intel/xe_exec_multi_queue.c | 2 +- > > > tests/intel/xe_exec_store.c | 18 ++++------ > > > tests/intel/xe_fault_injection.c | 8 ++--- > > > tests/intel/xe_intel_bb.c | 7 ++-- > > > tests/intel/xe_multigpu_svm.c | 3 +- > > > tests/intel/xe_pat.c | 16 ++++----- > > > tests/intel/xe_query.c | 4 +-- > > > tests/intel/xe_render_copy.c | 2 +- > > > 32 files changed, 135 insertions(+), 173 deletions(-) > > > > > > diff --git a/lib/gpgpu_shader.c b/lib/gpgpu_shader.c > > > index 767bddb7b..09a7f5c5e 100644 > > > --- a/lib/gpgpu_shader.c > > > +++ b/lib/gpgpu_shader.c > > > @@ -274,11 +274,10 @@ void gpgpu_shader_exec(struct intel_bb *ibb, > > > struct gpgpu_shader *gpgpu_shader_create(int fd) > > > { > > > struct gpgpu_shader *shdr = calloc(1, sizeof(struct gpgpu_shader)); > > > - const struct intel_device_info *info; > > > + unsigned ip_ver = intel_graphics_ver(fd); > > > igt_assert(shdr); > > > - info = intel_get_device_info(intel_get_drm_devid(fd)); > > > - shdr->gen_ver = 100 * info->graphics_ver + info->graphics_rel; > > > + shdr->gen_ver = 100 * (ip_ver >> 8) + (ip_ver & 0xff); > > > shdr->max_size = 16 * 4; > > > shdr->code = malloc(4 * shdr->max_size); > > > shdr->labels = igt_map_create(igt_map_hash_32, igt_map_equal_32); > > > diff --git a/lib/gpu_cmds.c b/lib/gpu_cmds.c > > > index ab46fe0de..6842af1ad 100644 > > > --- a/lib/gpu_cmds.c > > > +++ b/lib/gpu_cmds.c > > > @@ -313,14 +313,13 @@ fill_binding_table(struct intel_bb *ibb, struct intel_buf *buf) > > > { > > > uint32_t binding_table_offset; > > > uint32_t *binding_table; > > > - uint32_t devid = intel_get_drm_devid(ibb->fd); > > > intel_bb_ptr_align(ibb, 64); > > > binding_table_offset = intel_bb_offset(ibb); > > > binding_table = intel_bb_ptr(ibb); > > > intel_bb_ptr_add(ibb, 64); > > > - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { > > > + if (intel_graphics_ver(ibb->fd) >= IP_VER(20, 0)) { > > > /* > > > * Up until now, SURFACEFORMAT_R8_UNROM was used regardless of the 'bpp' value. > > > * For bpp 32 this results in a surface that is 4x narrower than expected. However > > > @@ -342,13 +341,13 @@ fill_binding_table(struct intel_bb *ibb, struct intel_buf *buf) > > > igt_assert_f(false, > > > "Surface state for bpp = %u not implemented", > > > buf->bpp); > > > - } else if (intel_graphics_ver_legacy(devid) >= IP_VER(12, 50)) { > > > + } else if (intel_graphics_ver(ibb->fd) >= IP_VER(12, 50)) { > > > binding_table[0] = xehp_fill_surface_state(ibb, buf, > > > SURFACEFORMAT_R8_UNORM, 1); > > > - } else if (intel_graphics_ver_legacy(devid) >= IP_VER(9, 0)) { > > > + } else if (intel_graphics_ver(ibb->fd) >= IP_VER(9, 0)) { > > > binding_table[0] = gen9_fill_surface_state(ibb, buf, > > > SURFACEFORMAT_R8_UNORM, 1); > > > - } else if (intel_graphics_ver_legacy(devid) >= IP_VER(8, 0)) { > > > + } else if (intel_graphics_ver(ibb->fd) >= IP_VER(8, 0)) { > > > binding_table[0] = gen8_fill_surface_state(ibb, buf, > > > SURFACEFORMAT_R8_UNORM, 1); > > > } else { > > > @@ -867,7 +866,7 @@ gen_emit_media_object(struct intel_bb *ibb, > > > /* inline data (xoffset, yoffset) */ > > > intel_bb_out(ibb, xoffset); > > > intel_bb_out(ibb, yoffset); > > > - if (intel_gen_legacy(ibb->devid) >= 8 && !IS_CHERRYVIEW(ibb->devid)) > > > + if (intel_gen(ibb->fd) >= 8 && !IS_CHERRYVIEW(ibb->devid)) > > > gen8_emit_media_state_flush(ibb); > > > } > > > @@ -1011,7 +1010,7 @@ void > > > xehp_emit_state_compute_mode(struct intel_bb *ibb, bool vrt) > > > { > > > - uint32_t dword_length = intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0); > > > + uint32_t dword_length = intel_graphics_ver(ibb->fd) >= IP_VER(20, 0); > > > intel_bb_out(ibb, XEHP_STATE_COMPUTE_MODE | dword_length); > > > intel_bb_out(ibb, vrt ? (0x10001) << 10 : 0); /* Enable variable number of threads */ > > > @@ -1042,7 +1041,7 @@ xehp_emit_state_base_address(struct intel_bb *ibb) > > > intel_bb_out(ibb, 0); > > > /* stateless data port */ > > > - tmp = intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0) ? 0 : BASE_ADDRESS_MODIFY; > > > + tmp = intel_graphics_ver(ibb->fd) >= IP_VER(20, 0) ? 0 : BASE_ADDRESS_MODIFY; > > > intel_bb_out(ibb, 0 | tmp); //dw3 > > > /* surface */ > > > @@ -1068,7 +1067,7 @@ xehp_emit_state_base_address(struct intel_bb *ibb) > > > /* dynamic state buffer size */ > > > intel_bb_out(ibb, ALIGN(ibb->size, 1 << 12) | 1); //dw13 > > > /* indirect object buffer size */ > > > - if (intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0)) //dw14 > > > + if (intel_graphics_ver(ibb->fd) >= IP_VER(20, 0)) //dw14 > > > intel_bb_out(ibb, 0); > > > else > > > intel_bb_out(ibb, 0xfffff000 | 1); > > > @@ -1115,7 +1114,7 @@ xehp_emit_compute_walk(struct intel_bb *ibb, > > > else > > > mask = (1 << mask) - 1; > > > - dword_length = intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0) ? 0x26 : 0x25; > > > + dword_length = intel_graphics_ver(ibb->fd) >= IP_VER(20, 0) ? 0x26 : 0x25; > > > intel_bb_out(ibb, XEHP_COMPUTE_WALKER | dword_length); > > > intel_bb_out(ibb, 0); /* debug object */ //dw1 > > > @@ -1155,7 +1154,7 @@ xehp_emit_compute_walk(struct intel_bb *ibb, > > > intel_bb_out(ibb, 0); //dw16 > > > intel_bb_out(ibb, 0); //dw17 > > > - if (intel_graphics_ver_legacy(ibb->devid) >= IP_VER(20, 0)) //Xe2:dw18 > > > + if (intel_graphics_ver(ibb->fd) >= IP_VER(20, 0)) //Xe2:dw18 > > > intel_bb_out(ibb, 0); > > > /* Interface descriptor data */ > > > for (int i = 0; i < 8; i++) { //dw18-25 (Xe2:dw19-26) > > > diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c > > > index f418e7981..4f52e7b6a 100644 > > > --- a/lib/intel_batchbuffer.c > > > +++ b/lib/intel_batchbuffer.c > > > @@ -329,11 +329,7 @@ void igt_blitter_copy(int fd, > > > uint32_t dst_x, uint32_t dst_y, > > > uint64_t dst_size) > > > { > > > - uint32_t devid; > > > - > > > - devid = intel_get_drm_devid(fd); > > > - > > > - if (intel_graphics_ver_legacy(devid) >= IP_VER(12, 60)) > > > + if (intel_graphics_ver(fd) >= IP_VER(12, 60)) > > > igt_blitter_fast_copy__raw(fd, ahnd, ctx, NULL, > > > src_handle, src_delta, > > > src_stride, src_tiling, > > > @@ -410,7 +406,7 @@ void igt_blitter_src_copy(int fd, > > > uint32_t batch_handle; > > > uint32_t src_pitch, dst_pitch; > > > uint32_t dst_reloc_offset, src_reloc_offset; > > > - uint32_t gen = intel_gen_legacy(intel_get_drm_devid(fd)); > > > + uint32_t gen = intel_gen(fd); > > > uint64_t batch_offset, src_offset, dst_offset; > > > const bool has_64b_reloc = gen >= 8; > > > int i = 0; > > > @@ -669,7 +665,7 @@ igt_render_copyfunc_t igt_get_render_copyfunc(int fd) > > > copy = mtl_render_copyfunc; > > > else if (IS_DG2(devid)) > > > copy = gen12p71_render_copyfunc; > > > - else if (intel_gen_legacy(devid) >= 20) > > > + else if (intel_gen(fd) >= 20) > > > copy = xe2_render_copyfunc; > > > else if (IS_GEN12(devid)) > > > copy = gen12_render_copyfunc; > > > @@ -911,7 +907,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg, > > > igt_assert(ibb); > > > ibb->devid = intel_get_drm_devid(fd); > > > - ibb->gen = intel_gen_legacy(ibb->devid); > > > + ibb->gen = intel_gen(fd); > > > ibb->ctx = ctx; > > > ibb->fd = fd; > > > @@ -1089,7 +1085,7 @@ struct intel_bb *intel_bb_create_with_allocator(int fd, uint32_t ctx, uint32_t v > > > static bool aux_needs_softpin(int fd) > > > { > > > - return intel_gen_legacy(intel_get_drm_devid(fd)) >= 12; > > > + return intel_gen(fd) >= 12; > > > } > > > static bool has_ctx_cfg(struct intel_bb *ibb) > > > diff --git a/lib/intel_blt.c b/lib/intel_blt.c > > > index 673f204b0..7ae04fccd 100644 > > > --- a/lib/intel_blt.c > > > +++ b/lib/intel_blt.c > > > @@ -997,7 +997,7 @@ uint64_t emit_blt_block_copy(int fd, > > > uint64_t bb_pos, > > > bool emit_bbe) > > > { > > > - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > > > + unsigned int ip_ver = intel_graphics_ver(fd); > > > struct gen12_block_copy_data data = {}; > > > struct gen12_block_copy_data_ext dext = {}; > > > uint64_t dst_offset, src_offset, bb_offset; > > > @@ -1285,7 +1285,7 @@ uint64_t emit_blt_ctrl_surf_copy(int fd, > > > uint64_t bb_pos, > > > bool emit_bbe) > > > { > > > - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > > > + unsigned int ip_ver = intel_graphics_ver(fd); > > > union ctrl_surf_copy_data data = { }; > > > size_t data_sz; > > > uint64_t dst_offset, src_offset, bb_offset, alignment; > > > @@ -1705,7 +1705,7 @@ uint64_t emit_blt_fast_copy(int fd, > > > uint64_t bb_pos, > > > bool emit_bbe) > > > { > > > - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > > > + unsigned int ip_ver = intel_graphics_ver(fd); > > > struct gen12_fast_copy_data data = {}; > > > uint64_t dst_offset, src_offset, bb_offset; > > > uint32_t bbe = MI_BATCH_BUFFER_END; > > > @@ -1972,11 +1972,10 @@ void blt_mem_copy_init(int fd, struct blt_mem_copy_data *mem, > > > static void dump_bb_mem_copy_cmd(int fd, struct xe_mem_copy_data *data) > > > { > > > uint32_t *cmd = (uint32_t *) data; > > > - uint32_t devid = intel_get_drm_devid(fd); > > > igt_info("BB details:\n"); > > > - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { > > > + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { > > > igt_info(" dw00: [%08x] <client: 0x%x, opcode: 0x%x, length: %d> " > > > "[copy type: %d, mode: %d]\n", > > > cmd[0], data->dw00.xe2.client, data->dw00.xe2.opcode, > > > @@ -2006,7 +2005,7 @@ static void dump_bb_mem_copy_cmd(int fd, struct xe_mem_copy_data *data) > > > cmd[7], data->dw07.dst_address_lo); > > > igt_info(" dw08: [%08x] dst offset hi (0x%x)\n", > > > cmd[8], data->dw08.dst_address_hi); > > > - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { > > > + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { > > > igt_info(" dw09: [%08x] mocs <dst: 0x%x, src: 0x%x>\n", > > > cmd[9], data->dw09.xe2.dst_mocs, > > > data->dw09.xe2.src_mocs); > > > @@ -2025,7 +2024,6 @@ static uint64_t emit_blt_mem_copy(int fd, uint64_t ahnd, > > > uint64_t dst_offset, src_offset, shift; > > > uint32_t width, height, width_max, height_max, remain; > > > uint32_t bbe = MI_BATCH_BUFFER_END; > > > - uint32_t devid = intel_get_drm_devid(fd); > > > uint8_t *bb; > > > if (mem->mode == MODE_BYTE) { > > > @@ -2049,7 +2047,7 @@ static uint64_t emit_blt_mem_copy(int fd, uint64_t ahnd, > > > width = mem->src.width; > > > height = mem->dst.height; > > > - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) { > > > + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { > > > data.dw00.xe2.client = 0x2; > > > data.dw00.xe2.opcode = 0x5a; > > > data.dw00.xe2.length = 8; > > > @@ -2231,7 +2229,6 @@ static void emit_blt_mem_set(int fd, uint64_t ahnd, > > > int b; > > > uint32_t *batch; > > > uint32_t value; > > > - uint32_t devid = intel_get_drm_devid(fd); > > > dst_offset = get_offset_pat_index(ahnd, mem->dst.handle, mem->dst.size, > > > 0, mem->dst.pat_index); > > > @@ -2246,7 +2243,7 @@ static void emit_blt_mem_set(int fd, uint64_t ahnd, > > > batch[b++] = mem->dst.pitch - 1; > > > batch[b++] = dst_offset; > > > batch[b++] = dst_offset << 32; > > > - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) > > > + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) > > > batch[b++] = value | (mem->dst.mocs_index << 3); > > > else > > > batch[b++] = value | mem->dst.mocs_index; > > > @@ -2364,7 +2361,7 @@ blt_create_object(const struct blt_copy_data *blt, uint32_t region, > > > if (create_mapping && region != system_memory(blt->fd)) > > > flags |= DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM; > > > - if (intel_gen_legacy(intel_get_drm_devid(blt->fd)) >= 20 && compression) { > > > + if (intel_gen(blt->fd) >= 20 && compression) { > > > pat_index = intel_get_pat_idx_uc_comp(blt->fd); > > > cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; > > > } > > > @@ -2590,7 +2587,7 @@ void blt_surface_get_flatccs_data(int fd, > > > cpu_caching = __xe_default_cpu_caching(fd, sysmem, 0); > > > ccs_bo_size = ALIGN(ccssize, xe_get_default_alignment(fd)); > > > - if (intel_gen_legacy(intel_get_drm_devid(fd)) >= 20 && obj->compression) { > > > + if (intel_gen(fd) >= 20 && obj->compression) { > > > comp_pat_index = intel_get_pat_idx_uc_comp(fd); > > > cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; > > > } > > > diff --git a/lib/intel_blt.h b/lib/intel_blt.h > > > index a98a34e95..feba94ebb 100644 > > > --- a/lib/intel_blt.h > > > +++ b/lib/intel_blt.h > > > @@ -52,7 +52,7 @@ > > > #include "igt.h" > > > #include "intel_cmds_info.h" > > > -#define CCS_RATIO(fd) (intel_gen_legacy(intel_get_drm_devid(fd)) >= 20 ? 512 : 256) > > > +#define CCS_RATIO(fd) (intel_gen(fd) >= 20 ? 512 : 256) > > > #define GEN12_MEM_COPY_MOCS_SHIFT 25 > > > #define XE2_MEM_COPY_SRC_MOCS_SHIFT 28 > > > #define XE2_MEM_COPY_DST_MOCS_SHIFT 3 > > > diff --git a/lib/intel_bufops.c b/lib/intel_bufops.c > > > index ea3742f1e..a2adbf9ef 100644 > > > --- a/lib/intel_bufops.c > > > +++ b/lib/intel_bufops.c > > > @@ -1063,7 +1063,7 @@ static void __intel_buf_init(struct buf_ops *bops, > > > } else { > > > uint16_t cpu_caching = __xe_default_cpu_caching(bops->fd, region, 0); > > > - if (intel_gen_legacy(bops->devid) >= 20 && compression) > > > + if (intel_gen(bops->fd) >= 20 && compression) > > > cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; > > > bo_size = ALIGN(bo_size, xe_get_default_alignment(bops->fd)); > > > @@ -1106,7 +1106,7 @@ void intel_buf_init(struct buf_ops *bops, > > > uint64_t region; > > > uint8_t pat_index = DEFAULT_PAT_INDEX; > > > - if (compression && intel_gen_legacy(bops->devid) >= 20) > > > + if (compression && intel_gen(bops->fd) >= 20) > > > pat_index = intel_get_pat_idx_uc_comp(bops->fd); > > > region = bops->driver == INTEL_DRIVER_I915 ? I915_SYSTEM_MEMORY : > > > @@ -1132,7 +1132,7 @@ void intel_buf_init_in_region(struct buf_ops *bops, > > > { > > > uint8_t pat_index = DEFAULT_PAT_INDEX; > > > - if (compression && intel_gen_legacy(bops->devid) >= 20) > > > + if (compression && intel_gen(bops->fd) >= 20) > > > pat_index = intel_get_pat_idx_uc_comp(bops->fd); > > > __intel_buf_init(bops, 0, buf, width, height, bpp, alignment, > > > @@ -1203,7 +1203,7 @@ void intel_buf_init_using_handle_and_size(struct buf_ops *bops, > > > igt_assert(handle); > > > igt_assert(size); > > > - if (compression && intel_gen_legacy(bops->devid) >= 20) > > > + if (compression && intel_gen(bops->fd) >= 20) > > > pat_index = intel_get_pat_idx_uc_comp(bops->fd); > > > __intel_buf_init(bops, handle, buf, width, height, bpp, alignment, > > > @@ -1758,7 +1758,7 @@ static struct buf_ops *__buf_ops_create(int fd, bool check_idempotency) > > > igt_assert(bops); > > > devid = intel_get_drm_devid(fd); > > > - generation = intel_gen_legacy(devid); > > > + generation = intel_gen(fd); > > > /* Predefined settings: see intel_device_info? */ > > > for (int i = 0; i < ARRAY_SIZE(buf_ops_arr); i++) { > > > diff --git a/lib/intel_common.c b/lib/intel_common.c > > > index cd1019bfe..407d53f77 100644 > > > --- a/lib/intel_common.c > > > +++ b/lib/intel_common.c > > > @@ -91,7 +91,7 @@ bool is_intel_region_compressible(int fd, uint64_t region) > > > return true; > > > /* Integrated Xe2+ supports compression on system memory */ > > > - if (intel_gen_legacy(devid) >= 20 && !is_dgfx && is_intel_system_region(fd, region)) > > > + if (intel_gen(fd) >= 20 && !is_dgfx && is_intel_system_region(fd, region)) > > > return true; > > > /* Discrete supports compression on vram */ > > > diff --git a/lib/intel_compute.c b/lib/intel_compute.c > > > index 1734c1649..66156d194 100644 > > > --- a/lib/intel_compute.c > > > +++ b/lib/intel_compute.c > > > @@ -2284,7 +2284,7 @@ static bool __run_intel_compute_kernel(int fd, > > > struct user_execenv *user, > > > enum execenv_alloc_prefs alloc_prefs) > > > { > > > - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > > > + unsigned int ip_ver = intel_graphics_ver(fd); > > > int batch; > > > const struct intel_compute_kernels *kernel_entries = intel_compute_square_kernels, *kernels; > > > enum intel_driver driver = get_intel_driver(fd); > > > @@ -2749,7 +2749,7 @@ static bool __run_intel_compute_kernel_preempt(int fd, > > > bool threadgroup_preemption, > > > enum execenv_alloc_prefs alloc_prefs) > > > { > > > - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > > > + unsigned int ip_ver = ip_ver = intel_graphics_ver(fd); > > > int batch; > > > const struct intel_compute_kernels *kernel_entries = intel_compute_square_kernels, *kernels; > > > enum intel_driver driver = get_intel_driver(fd); > > > @@ -2803,7 +2803,7 @@ static bool __run_intel_compute_kernel_preempt(int fd, > > > */ > > > bool xe_kernel_preempt_check(int fd, enum xe_compute_preempt_type required_preempt) > > > { > > > - unsigned int ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > > > + unsigned int ip_ver = ip_ver = intel_graphics_ver(fd); > > > int batch = find_preempt_batch(ip_ver); > > > if (batch < 0) { > > > diff --git a/lib/intel_mocs.c b/lib/intel_mocs.c > > > index f21c2bf09..b9ea43c7c 100644 > > > --- a/lib/intel_mocs.c > > > +++ b/lib/intel_mocs.c > > > @@ -27,8 +27,8 @@ struct drm_intel_mocs_index { > > > static void get_mocs_index(int fd, struct drm_intel_mocs_index *mocs) > > > { > > > - uint16_t devid = intel_get_drm_devid(fd); > > > - unsigned int ip_ver = intel_graphics_ver_legacy(devid); > > > + uint16_t devid; > > > + unsigned int ip_ver = intel_graphics_ver(fd); > > > /* > > > * Gen >= 12 onwards don't have a setting for PTE, > > > @@ -42,26 +42,29 @@ static void get_mocs_index(int fd, struct drm_intel_mocs_index *mocs) > > > mocs->wb_index = 4; > > > mocs->displayable_index = 1; > > > mocs->defer_to_pat_index = 0; > > > - } else if (IS_METEORLAKE(devid)) { > > > - mocs->uc_index = 5; > > > - mocs->wb_index = 1; > > > - mocs->displayable_index = 14; > > > - } else if (IS_DG2(devid)) { > > > - mocs->uc_index = 1; > > > - mocs->wb_index = 3; > > > - mocs->displayable_index = 3; > > > - } else if (IS_DG1(devid)) { > > > - mocs->uc_index = 1; > > > - mocs->wb_index = 5; > > > - mocs->displayable_index = 5; > > > - } else if (ip_ver >= IP_VER(12, 0)) { > > > - mocs->uc_index = 3; > > > - mocs->wb_index = 2; > > > - mocs->displayable_index = 61; > > > } else { > > > - mocs->uc_index = I915_MOCS_PTE; > > > - mocs->wb_index = I915_MOCS_CACHED; > > > - mocs->displayable_index = I915_MOCS_PTE; > > > + devid = intel_get_drm_devid(fd); > > > + if (IS_METEORLAKE(devid)) { > > > + mocs->uc_index = 5; > > > + mocs->wb_index = 1; > > > + mocs->displayable_index = 14; > > > + } else if (IS_DG2(devid)) { > > > + mocs->uc_index = 1; > > > + mocs->wb_index = 3; > > > + mocs->displayable_index = 3; > > > + } else if (IS_DG1(devid)) { > > > + mocs->uc_index = 1; > > > + mocs->wb_index = 5; > > > + mocs->displayable_index = 5; > > > + } else if (ip_ver >= IP_VER(12, 0)) { > > > + mocs->uc_index = 3; > > > + mocs->wb_index = 2; > > > + mocs->displayable_index = 61; > > > + } else { > > > + mocs->uc_index = I915_MOCS_PTE; > > > + mocs->wb_index = I915_MOCS_CACHED; > > > + mocs->displayable_index = I915_MOCS_PTE; > > > + } > > > } > > > } > > > @@ -124,9 +127,8 @@ uint8_t intel_get_displayable_mocs_index(int fd) > > > uint8_t intel_get_defer_to_pat_mocs_index(int fd) > > > { > > > struct drm_intel_mocs_index mocs; > > > - uint16_t dev_id = intel_get_drm_devid(fd); > > > - igt_assert(intel_gen_legacy(dev_id) >= 20); > > > + igt_assert(intel_gen(fd) >= 20); > > > get_mocs_index(fd, &mocs); > > > diff --git a/lib/intel_pat.c b/lib/intel_pat.c > > > index 9a61c2a45..9bb4800b6 100644 > > > --- a/lib/intel_pat.c > > > +++ b/lib/intel_pat.c > > > @@ -96,14 +96,12 @@ int32_t xe_get_pat_sw_config(int drm_fd, struct intel_pat_cache *xe_pat_cache) > > > static void intel_get_pat_idx(int fd, struct intel_pat_cache *pat) > > > { > > > - uint16_t dev_id = intel_get_drm_devid(fd); > > > - > > > - if (intel_graphics_ver_legacy(dev_id) == IP_VER(35, 11)) { > > > + if (intel_graphics_ver(fd) == IP_VER(35, 11)) { > > > pat->uc = 3; > > > pat->wb = 2; > > > pat->max_index = 31; > > > - } else if (intel_get_device_info(dev_id)->graphics_ver == 30 || > > > - intel_get_device_info(dev_id)->graphics_ver == 20) { > > > + } else if (intel_gen(fd) == 30 || > > > + intel_gen(fd) == 20) { > > > pat->uc = 3; > > > pat->wt = 15; /* Compressed + WB-transient */ > > > pat->wb = 2; > > > @@ -111,19 +109,19 @@ static void intel_get_pat_idx(int fd, struct intel_pat_cache *pat) > > > pat->max_index = 31; > > > /* Wa_16023588340: CLOS3 entries at end of table are unusable */ > > > - if (intel_graphics_ver_legacy(dev_id) == IP_VER(20, 1)) > > > + if (intel_graphics_ver(fd) == IP_VER(20, 1)) > > > pat->max_index -= 4; > > > - } else if (IS_METEORLAKE(dev_id)) { > > > + } else if (IS_METEORLAKE(intel_get_drm_devid(fd))) { > > > pat->uc = 2; > > > pat->wt = 1; > > > pat->wb = 3; > > > pat->max_index = 3; > > > - } else if (IS_PONTEVECCHIO(dev_id)) { > > > + } else if (IS_PONTEVECCHIO(intel_get_drm_devid(fd))) { > > > pat->uc = 0; > > > pat->wt = 2; > > > pat->wb = 3; > > > pat->max_index = 7; > > > - } else if (intel_graphics_ver_legacy(dev_id) <= IP_VER(12, 60)) { > > > + } else if (intel_graphics_ver(fd) <= IP_VER(12, 60)) { > > > pat->uc = 3; > > > pat->wt = 2; > > > pat->wb = 0; > > > @@ -152,9 +150,8 @@ uint8_t intel_get_pat_idx_uc(int fd) > > > uint8_t intel_get_pat_idx_uc_comp(int fd) > > > { > > > struct intel_pat_cache pat = {}; > > > - uint16_t dev_id = intel_get_drm_devid(fd); > > > - igt_assert(intel_gen_legacy(dev_id) >= 20); > > > + igt_assert(intel_gen(fd) >= 20); > > > intel_get_pat_idx(fd, &pat); > > > return pat.uc_comp; > > > diff --git a/lib/rendercopy_gen9.c b/lib/rendercopy_gen9.c > > > index 66415212c..0be557a47 100644 > > > --- a/lib/rendercopy_gen9.c > > > +++ b/lib/rendercopy_gen9.c > > > @@ -256,12 +256,12 @@ gen9_bind_buf(struct intel_bb *ibb, const struct intel_buf *buf, int is_dst, > > > if (buf->compression == I915_COMPRESSION_MEDIA) > > > ss->ss7.tgl.media_compression = 1; > > > else if (buf->compression == I915_COMPRESSION_RENDER) { > > > - if (intel_gen_legacy(ibb->devid) >= 20) > > > + if (intel_gen(ibb->fd) >= 20) > > > ss->ss6.aux_mode = 0x0; /* AUX_NONE, unified compression */ > > > else > > > ss->ss6.aux_mode = 0x5; /* AUX_CCS_E */ > > > - if (intel_gen_legacy(ibb->devid) < 12 && buf->ccs[0].stride) { > > > + if (intel_gen(ibb->fd) < 12 && buf->ccs[0].stride) { > > > ss->ss6.aux_pitch = (buf->ccs[0].stride / 128) - 1; > > > address = intel_bb_offset_reloc_with_delta(ibb, buf->handle, > > > @@ -303,7 +303,7 @@ gen9_bind_buf(struct intel_bb *ibb, const struct intel_buf *buf, int is_dst, > > > ss->ss7.dg2.disable_support_for_multi_gpu_partial_writes = 1; > > > ss->ss7.dg2.disable_support_for_multi_gpu_atomics = 1; > > > - if (intel_gen_legacy(ibb->devid) >= 20) > > > + if (intel_gen(ibb->fd) >= 20) > > > ss->ss12.lnl.compression_format = lnl_compression_format(buf); > > > else > > > ss->ss12.dg2.compression_format = dg2_compression_format(buf); > > > @@ -681,7 +681,7 @@ gen9_emit_state_base_address(struct intel_bb *ibb) { > > > /* WaBindlessSurfaceStateModifyEnable:skl,bxt */ > > > /* The length has to be one less if we dont modify > > > bindless state */ > > > - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) > > > + if (intel_gen(ibb->fd) >= 20) > > > intel_bb_out(ibb, GEN4_STATE_BASE_ADDRESS | 20); > > > else > > > intel_bb_out(ibb, GEN4_STATE_BASE_ADDRESS | (19 - 1 - 2)); > > > @@ -726,7 +726,7 @@ gen9_emit_state_base_address(struct intel_bb *ibb) { > > > intel_bb_out(ibb, 0); > > > intel_bb_out(ibb, 0); > > > - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) { > > > + if (intel_gen(ibb->fd) >= 20) { > > > /* Bindless sampler */ > > > intel_bb_out(ibb, 0); > > > intel_bb_out(ibb, 0); > > > @@ -899,7 +899,7 @@ gen9_emit_ds(struct intel_bb *ibb) { > > > static void > > > gen8_emit_wm_hz_op(struct intel_bb *ibb) { > > > - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) { > > > + if (intel_gen(ibb->fd) >= 20) { > > > intel_bb_out(ibb, GEN8_3DSTATE_WM_HZ_OP | (6-2)); > > > intel_bb_out(ibb, 0); > > > } else { > > > @@ -989,7 +989,7 @@ gen8_emit_ps(struct intel_bb *ibb, uint32_t kernel, bool fast_clear) { > > > intel_bb_out(ibb, 0); > > > intel_bb_out(ibb, GEN7_3DSTATE_PS | (12-2)); > > > - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) > > > + if (intel_gen(ibb->fd) >= 20) > > > intel_bb_out(ibb, kernel | 1); > > > else > > > intel_bb_out(ibb, kernel); > > > @@ -1006,7 +1006,7 @@ gen8_emit_ps(struct intel_bb *ibb, uint32_t kernel, bool fast_clear) { > > > intel_bb_out(ibb, (max_threads - 1) << GEN8_3DSTATE_PS_MAX_THREADS_SHIFT | > > > GEN6_3DSTATE_WM_16_DISPATCH_ENABLE | > > > (fast_clear ? GEN8_3DSTATE_FAST_CLEAR_ENABLE : 0)); > > > - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) > > > + if (intel_gen(ibb->fd) >= 20) > > > intel_bb_out(ibb, 6 << GEN6_3DSTATE_WM_DISPATCH_START_GRF_0_SHIFT | > > > GENXE_KERNEL0_POLY_PACK16_FIXED << GENXE_KERNEL0_PACKING_POLICY); > > > else > > > @@ -1061,7 +1061,7 @@ gen9_emit_depth(struct intel_bb *ibb) > > > static void > > > gen7_emit_clear(struct intel_bb *ibb) { > > > - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) > > > + if (intel_gen(ibb->fd) >= 20) > > > return; > > > intel_bb_out(ibb, GEN7_3DSTATE_CLEAR_PARAMS | (3-2)); > > > @@ -1072,7 +1072,7 @@ gen7_emit_clear(struct intel_bb *ibb) { > > > static void > > > gen6_emit_drawing_rectangle(struct intel_bb *ibb, const struct intel_buf *dst) > > > { > > > - if (intel_gen_legacy(intel_get_drm_devid(ibb->fd)) >= 20) > > > + if (intel_gen(ibb->fd) >= 20) > > > intel_bb_out(ibb, GENXE2_3DSTATE_DRAWING_RECTANGLE_FAST | (4 - 2)); > > > else > > > intel_bb_out(ibb, GEN4_3DSTATE_DRAWING_RECTANGLE | (4 - 2)); > > > @@ -1266,7 +1266,7 @@ void _gen9_render_op(struct intel_bb *ibb, > > > gen9_emit_state_base_address(ibb); > > > - if (HAS_4TILE(ibb->devid) || intel_gen_legacy(ibb->devid) > 12) { > > > + if (HAS_4TILE(ibb->devid) || intel_gen(ibb->fd) > 12) { > > > intel_bb_out(ibb, GEN4_3DSTATE_BINDING_TABLE_POOL_ALLOC | 2); > > > intel_bb_emit_reloc(ibb, ibb->handle, > > > I915_GEM_DOMAIN_RENDER | I915_GEM_DOMAIN_INSTRUCTION, 0, > > > diff --git a/lib/xe/xe_legacy.c b/lib/xe/xe_legacy.c > > > index 1529ed1cc..c1ce9fa00 100644 > > > --- a/lib/xe/xe_legacy.c > > > +++ b/lib/xe/xe_legacy.c > > > @@ -75,7 +75,7 @@ xe_legacy_test_mode(int fd, struct drm_xe_engine_class_instance *eci, > > > igt_assert_lte(n_exec_queues, MAX_N_EXECQUEUES); > > > if (flags & COMPRESSION) > > > - igt_require(intel_gen_legacy(intel_get_drm_devid(fd)) >= 20); > > > + igt_require(intel_gen(fd) >= 20); > > > if (flags & CLOSE_FD) > > > fd = drm_open_driver(DRIVER_XE); > > > diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c > > > index 36260e3e5..8ca137381 100644 > > > --- a/lib/xe/xe_spin.c > > > +++ b/lib/xe/xe_spin.c > > > @@ -54,7 +54,6 @@ void xe_spin_init(struct xe_spin *spin, struct xe_spin_opts *opts) > > > uint64_t pad_addr = opts->addr + offsetof(struct xe_spin, pad); > > > uint64_t timestamp_addr = opts->addr + offsetof(struct xe_spin, timestamp); > > > int b = 0; > > > - uint32_t devid; > > > spin->start = 0; > > > spin->end = 0xffffffff; > > > @@ -166,8 +165,7 @@ void xe_spin_init(struct xe_spin *spin, struct xe_spin_opts *opts) > > > spin->batch[b++] = opts->mem_copy->dst_offset; > > > spin->batch[b++] = opts->mem_copy->dst_offset << 32; > > > - devid = intel_get_drm_devid(opts->mem_copy->fd); > > > - if (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0)) > > > + if (intel_graphics_ver(opts->mem_copy->fd) >= IP_VER(20, 0)) > > > spin->batch[b++] = opts->mem_copy->src->mocs_index << XE2_MEM_COPY_SRC_MOCS_SHIFT | > > > opts->mem_copy->dst->mocs_index << XE2_MEM_COPY_DST_MOCS_SHIFT; > > > else > > > diff --git a/lib/xe/xe_sriov_provisioning.c b/lib/xe/xe_sriov_provisioning.c > > > index 7b60ccd6c..3d981766c 100644 > > > --- a/lib/xe/xe_sriov_provisioning.c > > > +++ b/lib/xe/xe_sriov_provisioning.c > > > @@ -50,9 +50,7 @@ const char *xe_sriov_shared_res_to_string(enum xe_sriov_shared_res res) > > > static uint64_t get_vfid_mask(int fd) > > > { > > > - uint16_t dev_id = intel_get_drm_devid(fd); > > > - > > > - return (intel_graphics_ver_legacy(dev_id) >= IP_VER(12, 50)) ? > > > + return (intel_graphics_ver(fd) >= IP_VER(12, 50)) ? > > > GGTT_PTE_VFID_MASK : PRE_1250_IP_VER_GGTT_PTE_VFID_MASK; > > > } > > > diff --git a/tests/intel/api_intel_allocator.c b/tests/intel/api_intel_allocator.c > > > index 869e5e9a0..6b1d17da7 100644 > > > --- a/tests/intel/api_intel_allocator.c > > > +++ b/tests/intel/api_intel_allocator.c > > > @@ -625,7 +625,7 @@ static void execbuf_with_allocator(int fd) > > > uint64_t ahnd, sz = 4096, gtt_size; > > > unsigned int flags = EXEC_OBJECT_PINNED; > > > uint32_t *ptr, batch[32], copied; > > > - int gen = intel_gen_legacy(intel_get_drm_devid(fd)); > > > + int gen = intel_gen(fd); > > > int i; > > > const uint32_t magic = 0x900df00d; > > > diff --git a/tests/intel/kms_ccs.c b/tests/intel/kms_ccs.c > > > index 30f2c9465..a0373316a 100644 > > > --- a/tests/intel/kms_ccs.c > > > +++ b/tests/intel/kms_ccs.c > > > @@ -565,7 +565,7 @@ static void access_flat_ccs_surface(struct igt_fb *fb, bool verify_compression) > > > uint16_t cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; > > > uint8_t uc_mocs = intel_get_uc_mocs_index(fb->fd); > > > uint8_t comp_pat_index = intel_get_pat_idx_wt(fb->fd); > > > - uint32_t region = (intel_gen_legacy(intel_get_drm_devid(fb->fd)) >= 20 && > > > + uint32_t region = (intel_gen(fb->fd) >= 20 && > > > xe_has_vram(fb->fd)) ? REGION_LMEM(0) : REGION_SMEM; > > > struct drm_xe_engine_class_instance inst = { > > > @@ -645,7 +645,7 @@ static void fill_fb_random(int drm_fd, igt_fb_t *fb) > > > igt_assert_eq(0, gem_munmap(map, fb->size)); > > > /* randomize also ccs surface on Xe2 */ > > > - if (intel_gen_legacy(intel_get_drm_devid(drm_fd)) >= 20) > > > + if (intel_gen(drm_fd) >= 20) > > > access_flat_ccs_surface(fb, false); > > > } > > > @@ -1125,11 +1125,6 @@ static bool valid_modifier_test(u64 modifier, const enum test_flags flags) > > > static void test_output(data_t *data, const int testnum) > > > { > > > - uint16_t dev_id; > > > - > > > - igt_fixture() > > > - dev_id = intel_get_drm_devid(data->drm_fd); > > > - > > > data->flags = tests[testnum].flags; > > > for (int i = 0; i < ARRAY_SIZE(ccs_modifiers); i++) { > > > @@ -1143,10 +1138,10 @@ static void test_output(data_t *data, const int testnum) > > > igt_subtest_with_dynamic_f("%s-%s", tests[testnum].testname, ccs_modifiers[i].str) { > > > if (ccs_modifiers[i].modifier == I915_FORMAT_MOD_4_TILED_BMG_CCS || > > > ccs_modifiers[i].modifier == I915_FORMAT_MOD_4_TILED_LNL_CCS) { > > > - igt_require_f(intel_gen_legacy(dev_id) >= 20, > > > + igt_require_f(intel_gen(data->drm_fd) >= 20, > > > "Xe2 platform needed.\n"); > > > } else { > > > - igt_require_f(intel_gen_legacy(dev_id) < 20, > > > + igt_require_f(intel_gen(data->drm_fd) < 20, > > > "Older than Xe2 platform needed.\n"); > > > } > > > diff --git a/tests/intel/kms_fbcon_fbt.c b/tests/intel/kms_fbcon_fbt.c > > > index edf5c0d1b..b28961417 100644 > > > --- a/tests/intel/kms_fbcon_fbt.c > > > +++ b/tests/intel/kms_fbcon_fbt.c > > > @@ -179,7 +179,7 @@ static bool fbc_wait_until_update(struct drm_info *drm) > > > * For older GENs FBC is still expected to be disabled as it still > > > * relies on a tiled and fenceable framebuffer to track modifications. > > > */ > > > - if (intel_gen_legacy(intel_get_drm_devid(drm->fd)) >= 9) { > > > + if (intel_gen(drm->fd) >= 9) { > > > if (!fbc_wait_until_enabled(drm->debugfs_fd)) > > > return false; > > > /* > > > diff --git a/tests/intel/kms_frontbuffer_tracking.c b/tests/intel/kms_frontbuffer_tracking.c > > > index c8c2ce240..5b60587db 100644 > > > --- a/tests/intel/kms_frontbuffer_tracking.c > > > +++ b/tests/intel/kms_frontbuffer_tracking.c > > > @@ -3062,13 +3062,13 @@ static bool tiling_is_valid(int feature_flags, enum tiling_type tiling) > > > switch (tiling) { > > > case TILING_LINEAR: > > > - return intel_gen_legacy(drm.devid) >= 9; > > > + return intel_gen(drm.fd) >= 9; > > > case TILING_X: > > > return (intel_get_device_info(drm.devid)->display_ver > 29) ? false : true; > > > case TILING_Y: > > > return true; > > > case TILING_4: > > > - return intel_gen_legacy(drm.devid) >= 12; > > > + return intel_gen(drm.fd) >= 12; > > > default: > > > igt_assert(false); > > > return false; > > > @@ -4475,7 +4475,7 @@ int igt_main_args("", long_options, help_str, opt_handler, NULL) > > > igt_require(igt_draw_supports_method(drm.fd, t.method)); > > > if (t.tiling == TILING_Y) { > > > - igt_require(intel_gen_legacy(drm.devid) >= 9); > > > + igt_require(intel_gen(drm.fd) >= 9); > > > igt_require(!intel_get_device_info(drm.devid)->has_4tile); > > > } > > > diff --git a/tests/intel/kms_pipe_stress.c b/tests/intel/kms_pipe_stress.c > > > index 1ae32d5fd..f8c994d07 100644 > > > --- a/tests/intel/kms_pipe_stress.c > > > +++ b/tests/intel/kms_pipe_stress.c > > > @@ -822,7 +822,7 @@ static void prepare_test(struct data *data) > > > create_framebuffers(data); > > > - if (intel_gen_legacy(intel_get_drm_devid(data->drm_fd)) > 9) > > > + if (intel_gen(data->drm_fd) > 9) > > > start_gpu_threads(data); > > > } > > > @@ -830,7 +830,7 @@ static void finish_test(struct data *data) > > > { > > > int i; > > > - if (intel_gen_legacy(intel_get_drm_devid(data->drm_fd)) > 9) > > > + if (intel_gen(data->drm_fd) > 9) > > > stop_gpu_threads(data); > > > /* > > > diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c > > > index 914144270..0ba8ae48c 100644 > > > --- a/tests/intel/xe_ccs.c > > > +++ b/tests/intel/xe_ccs.c > > > @@ -128,7 +128,7 @@ static void surf_copy(int xe, > > > int result; > > > igt_assert(mid->compression); > > > - if (intel_gen_legacy(devid) >= 20 && mid->compression) { > > > + if (intel_gen(xe) >= 20 && mid->compression) { > > > comp_pat_index = intel_get_pat_idx_uc_comp(xe); > > > cpu_caching = DRM_XE_GEM_CPU_CACHING_WC; > > > } > > > @@ -177,7 +177,7 @@ static void surf_copy(int xe, > > > if (IS_GEN(devid, 12) && is_intel_dgfx(xe)) { > > > igt_assert(!strcmp(orig, newsum)); > > > igt_assert(!strcmp(orig2, newsum2)); > > > - } else if (intel_gen_legacy(devid) >= 20) { > > > + } else if (intel_gen(xe) >= 20) { > > > if (is_intel_dgfx(xe)) { > > > /* buffer object would become > > > * uncompressed in xe2+ dgfx > > > @@ -227,7 +227,7 @@ static void surf_copy(int xe, > > > * uncompressed in xe2+ dgfx, and therefore retrieve the > > > * ccs by copying 0 to ccsmap > > > */ > > > - if (suspend_resume && intel_gen_legacy(devid) >= 20 && is_intel_dgfx(xe)) > > > + if (suspend_resume && intel_gen(xe) >= 20 && is_intel_dgfx(xe)) > > > memset(ccsmap, 0, ccssize); > > > else > > > /* retrieve back ccs */ > > > @@ -353,7 +353,7 @@ static void block_copy(int xe, > > > uint64_t bb_size = xe_bb_size(xe, SZ_4K); > > > uint64_t ahnd = intel_allocator_open(xe, ctx->vm, INTEL_ALLOCATOR_RELOC); > > > uint32_t run_id = mid_tiling; > > > - uint32_t mid_region = (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && > > > + uint32_t mid_region = (intel_gen(xe) >= 20 && > > > !xe_has_vram(xe)) ? region1 : region2; > > > uint32_t bb; > > > enum blt_compression mid_compression = config->compression; > > > @@ -441,7 +441,7 @@ static void block_copy(int xe, > > > if (config->inplace) { > > > uint8_t pat_index = DEFAULT_PAT_INDEX; > > > - if (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && config->compression) > > > + if (intel_gen(xe) >= 20 && config->compression) > > > pat_index = intel_get_pat_idx_uc_comp(xe); > > > blt_set_object(&blt.dst, mid->handle, dst->size, mid->region, 0, > > > @@ -488,7 +488,7 @@ static void block_multicopy(int xe, > > > uint64_t bb_size = xe_bb_size(xe, SZ_4K); > > > uint64_t ahnd = intel_allocator_open(xe, ctx->vm, INTEL_ALLOCATOR_RELOC); > > > uint32_t run_id = mid_tiling; > > > - uint32_t mid_region = (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && > > > + uint32_t mid_region = (intel_gen(xe) >= 20 && > > > !xe_has_vram(xe)) ? region1 : region2; > > > uint32_t bb; > > > enum blt_compression mid_compression = config->compression; > > > @@ -530,7 +530,7 @@ static void block_multicopy(int xe, > > > if (config->inplace) { > > > uint8_t pat_index = DEFAULT_PAT_INDEX; > > > - if (intel_gen_legacy(intel_get_drm_devid(xe)) >= 20 && config->compression) > > > + if (intel_gen(xe) >= 20 && config->compression) > > > pat_index = intel_get_pat_idx_uc_comp(xe); > > > blt_set_object(&blt3.dst, mid->handle, dst->size, mid->region, > > > @@ -715,7 +715,7 @@ static void block_copy_test(int xe, > > > int tiling, width, height; > > > - if (intel_gen_legacy(dev_id) >= 20 && config->compression) > > > + if (intel_gen(xe) >= 20 && config->compression) > > > igt_require(HAS_FLATCCS(dev_id)); > > > if (config->compression && !blt_block_copy_supports_compression(xe)) > > > diff --git a/tests/intel/xe_compute.c b/tests/intel/xe_compute.c > > > index 7b6c39c77..1cb86920f 100644 > > > --- a/tests/intel/xe_compute.c > > > +++ b/tests/intel/xe_compute.c > > > @@ -232,7 +232,7 @@ test_compute_kernel_loop(uint64_t loop_duration) > > > double elapse_time, lower_bound, upper_bound; > > > fd = drm_open_driver(DRIVER_XE); > > > - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > > > + ip_ver = intel_graphics_ver(fd); > > > kernels = intel_compute_square_kernels; > > > while (kernels->kernel) { > > > @@ -335,7 +335,7 @@ igt_check_supported_pipeline(void) > > > const struct intel_compute_kernels *kernels; > > > fd = drm_open_driver(DRIVER_XE); > > > - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > > > + ip_ver = intel_graphics_ver(fd); > > > kernels = intel_compute_square_kernels; > > > drm_close_driver(fd); > > > @@ -432,7 +432,7 @@ test_eu_busy(uint64_t duration_sec) > > > fd = drm_open_driver(DRIVER_XE); > > > - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(fd)); > > > + ip_ver = intel_graphics_ver(fd); > > > kernels = intel_compute_square_kernels; > > > while (kernels->kernel) { > > > if (ip_ver == kernels->ip_ver) > > > @@ -518,7 +518,7 @@ int igt_main() > > > igt_fixture() { > > > xe = drm_open_driver(DRIVER_XE); > > > sriov_enabled = is_sriov_mode(xe); > > > - ip_ver = intel_graphics_ver_legacy(intel_get_drm_devid(xe)); > > > + ip_ver = intel_graphics_ver(xe); > > > igt_store_ccs_mode(ccs_mode, ARRAY_SIZE(ccs_mode)); > > > } > > > diff --git a/tests/intel/xe_copy_basic.c b/tests/intel/xe_copy_basic.c > > > index 55081f938..e37bad746 100644 > > > --- a/tests/intel/xe_copy_basic.c > > > +++ b/tests/intel/xe_copy_basic.c > > > @@ -261,7 +261,6 @@ const char *help_str = > > > int igt_main_args("b", NULL, help_str, opt_handler, NULL) > > > { > > > int fd; > > > - uint16_t dev_id; > > > struct igt_collection *set, *regions; > > > uint32_t region; > > > struct rect linear[] = { { 0, 0xfd, 1, MODE_BYTE }, > > > @@ -275,7 +274,6 @@ int igt_main_args("b", NULL, help_str, opt_handler, NULL) > > > igt_fixture() { > > > fd = drm_open_driver(DRIVER_XE); > > > - dev_id = intel_get_drm_devid(fd); > > > xe_device_get(fd); > > > set = xe_get_memory_region_set(fd, > > > DRM_XE_MEM_REGION_CLASS_SYSMEM, > > > @@ -295,7 +293,7 @@ int igt_main_args("b", NULL, help_str, opt_handler, NULL) > > > for (int i = 0; i < ARRAY_SIZE(page); i++) { > > > igt_subtest_f("mem-page-copy-%u", page[i].width) { > > > igt_require(blt_has_mem_copy(fd)); > > > - igt_require(intel_get_device_info(dev_id)->graphics_ver >= 20); > > > + igt_require(intel_gen(fd) >= 20); > > > for_each_variation_r(regions, 1, set) { > > > region = igt_collection_get_value(regions, 0); > > > copy_test(fd, &page[i], MEM_COPY, region); > > > @@ -320,7 +318,7 @@ int igt_main_args("b", NULL, help_str, opt_handler, NULL) > > > * till 0x3FFFF. > > > */ > > > if (linear[i].width > 0x3ffff && > > > - (intel_get_device_info(dev_id)->graphics_ver < 20)) > > > + (intel_gen(fd) < 20)) > > > igt_skip("Skipping: width exceeds 18-bit limit on gfx_ver < 20\n"); > > > igt_require(blt_has_mem_set(fd)); > > > for_each_variation_r(regions, 1, set) { > > > diff --git a/tests/intel/xe_debugfs.c b/tests/intel/xe_debugfs.c > > > index facb55854..4075b173a 100644 > > > --- a/tests/intel/xe_debugfs.c > > > +++ b/tests/intel/xe_debugfs.c > > > @@ -296,7 +296,6 @@ static void test_tile_dir(struct xe_device *xe_dev, uint8_t tile) > > > */ > > > static void test_info_read(struct xe_device *xe_dev) > > > { > > > - uint16_t devid = intel_get_drm_devid(xe_dev->fd); > > > struct drm_xe_query_config *config; > > > const char *name = "info"; > > > bool failed = false; > > > @@ -329,7 +328,7 @@ static void test_info_read(struct xe_device *xe_dev) > > > failed = true; > > > } > > > - if (intel_gen_legacy(devid) < 20) { > > > + if (intel_gen(xe_dev->fd) < 20) { > > > val = -1; > > > switch (config->info[DRM_XE_QUERY_CONFIG_VA_BITS]) { > > > diff --git a/tests/intel/xe_eudebug_online.c b/tests/intel/xe_eudebug_online.c > > > index f64b12b3f..961cf5afc 100644 > > > --- a/tests/intel/xe_eudebug_online.c > > > +++ b/tests/intel/xe_eudebug_online.c > > > @@ -400,9 +400,7 @@ static uint64_t eu_ctl(int debugfd, uint64_t client, > > > static bool intel_gen_needs_resume_wa(int fd) > > > { > > > - const uint32_t id = intel_get_drm_devid(fd); > > > - > > > - return intel_gen_legacy(id) == 12 && intel_graphics_ver_legacy(id) < IP_VER(12, 55); > > > + return intel_gen(fd) == 12 && intel_graphics_ver(fd) < IP_VER(12, 55); > > > } > > > static uint64_t eu_ctl_resume(int fd, int debugfd, uint64_t client, > > > @@ -1222,8 +1220,6 @@ static void run_online_client(struct xe_eudebug_client *c) > > > static bool intel_gen_has_lockstep_eus(int fd) > > > { > > > - const uint32_t id = intel_get_drm_devid(fd); > > > - > > > /* > > > * Lockstep (or in some parlance, fused) EUs are pair of EUs > > > * that work in sync, supposedly same clock and same control flow. > > > @@ -1231,7 +1227,7 @@ static bool intel_gen_has_lockstep_eus(int fd) > > > * excepted into SIP. In this level, the hardware has only one attention > > > * thread bit for units. PVC is the first one without lockstepping. > > > */ > > > - return !(intel_graphics_ver_legacy(id) == IP_VER(12, 60) || intel_gen_legacy(id) >= 20); > > > + return !(intel_graphics_ver(fd) == IP_VER(12, 60) || intel_gen(fd) >= 20); > > > } > > > static int query_attention_bitmask_size(int fd, int gt) > > > diff --git a/tests/intel/xe_exec_multi_queue.c b/tests/intel/xe_exec_multi_queue.c > > > index 1d416efc9..bf09efcc3 100644 > > > --- a/tests/intel/xe_exec_multi_queue.c > > > +++ b/tests/intel/xe_exec_multi_queue.c > > > @@ -1047,7 +1047,7 @@ int igt_main() > > > igt_fixture() { > > > fd = drm_open_driver(DRIVER_XE); > > > - igt_require(intel_graphics_ver_legacy(intel_get_drm_devid(fd)) >= IP_VER(35, 0)); > > > + igt_require(intel_graphics_ver(fd) >= IP_VER(35, 0)); > > > } > > > igt_subtest_f("sanity") > > > diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c > > > index 498ab42b7..9e6a96aa8 100644 > > > --- a/tests/intel/xe_exec_store.c > > > +++ b/tests/intel/xe_exec_store.c > > > @@ -55,8 +55,7 @@ static void store_dword_batch(struct data *data, uint64_t addr, int value) > > > data->addr = batch_addr; > > > } > > > -static void cond_batch(struct data *data, uint64_t addr, int value, > > > - uint16_t dev_id) > > > +static void cond_batch(int fd, struct data *data, uint64_t addr, int value) > > > { > > > int b; > > > uint64_t batch_offset = (char *)&(data->batch) - (char *)data; > > > @@ -69,7 +68,7 @@ static void cond_batch(struct data *data, uint64_t addr, int value, > > > data->batch[b++] = sdi_addr; > > > data->batch[b++] = sdi_addr >> 32; > > > - if (intel_graphics_ver_legacy(dev_id) >= IP_VER(20, 0)) > > > + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) > > > data->batch[b++] = MI_MEM_FENCE | MI_WRITE_FENCE; > > > data->batch[b++] = MI_CONDITIONAL_BATCH_BUFFER_END | MI_DO_COMPARE | 5 << 12 | 2; > > > @@ -112,8 +111,7 @@ static void persistance_batch(struct data *data, uint64_t addr) > > > * SUBTEST: basic-all > > > * Description: Test to verify store dword on all available engines. > > > */ > > > -static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instance *eci, > > > - uint16_t dev_id) > > > +static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instance *eci) > > > { > > > struct drm_xe_sync sync[2] = { > > > { .type = DRM_XE_SYNC_TYPE_SYNCOBJ, .flags = DRM_XE_SYNC_FLAG_SIGNAL, }, > > > @@ -156,7 +154,7 @@ static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instanc > > > else if (inst_type == COND_BATCH) { > > > /* A random value where it stops at the below value. */ > > > value = 20 + random() % 10; > > > - cond_batch(data, addr, value, dev_id); > > > + cond_batch(fd, data, addr, value); > > > } > > > else > > > igt_assert_f(inst_type < 2, "Entered wrong inst_type.\n"); > > > @@ -416,23 +414,21 @@ int igt_main() > > > { > > > struct drm_xe_engine_class_instance *hwe; > > > int fd; > > > - uint16_t dev_id; > > > struct drm_xe_engine *engine; > > > igt_fixture() { > > > fd = drm_open_driver(DRIVER_XE); > > > xe_device_get(fd); > > > - dev_id = intel_get_drm_devid(fd); > > > } > > > igt_subtest("basic-store") { > > > engine = xe_engine(fd, 1); > > > - basic_inst(fd, STORE, &engine->instance, dev_id); > > > + basic_inst(fd, STORE, &engine->instance); > > > } > > > igt_subtest("basic-cond-batch") { > > > engine = xe_engine(fd, 1); > > > - basic_inst(fd, COND_BATCH, &engine->instance, dev_id); > > > + basic_inst(fd, COND_BATCH, &engine->instance); > > > } > > > igt_subtest_with_dynamic("basic-all") { > > > @@ -441,7 +437,7 @@ int igt_main() > > > xe_engine_class_string(hwe->engine_class), > > > hwe->engine_instance, > > > hwe->gt_id); > > > - basic_inst(fd, STORE, hwe, dev_id); > > > + basic_inst(fd, STORE, hwe); > > > } > > > } > > > diff --git a/tests/intel/xe_fault_injection.c b/tests/intel/xe_fault_injection.c > > > index 8adc5c15a..57c5a5579 100644 > > > --- a/tests/intel/xe_fault_injection.c > > > +++ b/tests/intel/xe_fault_injection.c > > > @@ -486,12 +486,12 @@ vm_bind_fail(int fd, const char pci_slot[], const char function_name[]) > > > * @xe_oa_alloc_regs: xe_oa_alloc_regs > > > */ > > > static void > > > -oa_add_config_fail(int fd, int sysfs, int devid, > > > +oa_add_config_fail(int fd, int sysfs, > > > const char pci_slot[], const char function_name[]) > > > { > > > char path[512]; > > > uint64_t config_id; > > > -#define SAMPLE_MUX_REG (intel_graphics_ver_legacy(devid) >= IP_VER(20, 0) ? \ > > > +#define SAMPLE_MUX_REG (intel_graphics_ver(fd) >= IP_VER(20, 0) ? \ > > > 0x13000 /* PES* */ : 0x9888 /* NOA_WRITE */) > > > uint32_t mux_regs[] = { SAMPLE_MUX_REG, 0x0 }; > > > @@ -557,7 +557,6 @@ int igt_main_args("I:", NULL, help_str, opt_handler, NULL) > > > int fd, sysfs; > > > struct drm_xe_engine_class_instance *hwe; > > > struct fault_injection_params fault_params; > > > - static uint32_t devid; > > > char pci_slot[NAME_MAX]; > > > bool is_vf_device; > > > const struct section { > > > @@ -627,7 +626,6 @@ int igt_main_args("I:", NULL, help_str, opt_handler, NULL) > > > igt_fixture() { > > > igt_require(fail_function_injection_enabled()); > > > fd = drm_open_driver(DRIVER_XE); > > > - devid = intel_get_drm_devid(fd); > > > sysfs = igt_sysfs_open(fd); > > > igt_device_get_pci_slot_name(fd, pci_slot); > > > setup_injection_fault(&default_fault_params); > > > @@ -659,7 +657,7 @@ int igt_main_args("I:", NULL, help_str, opt_handler, NULL) > > > for (const struct section *s = oa_add_config_fail_functions; s->name; s++) > > > igt_subtest_f("oa-add-config-fail-%s", s->name) > > > - oa_add_config_fail(fd, sysfs, devid, pci_slot, s->name); > > > + oa_add_config_fail(fd, sysfs, pci_slot, s->name); > > > igt_fixture() { > > > igt_kmod_unbind("xe", pci_slot); > > > diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c > > > index 5c112351f..e37d00d2c 100644 > > > --- a/tests/intel/xe_intel_bb.c > > > +++ b/tests/intel/xe_intel_bb.c > > > @@ -710,7 +710,7 @@ static void do_intel_bb_blit(struct buf_ops *bops, int loops, uint32_t tiling) > > > int i, fails = 0, xe = buf_ops_get_fd(bops); > > > /* We'll fix it for gen2/3 later. */ > > > - igt_require(intel_gen_legacy(intel_get_drm_devid(xe)) > 3); > > > + igt_require(intel_gen(xe) > 3); > > > for (i = 0; i < loops; i++) > > > fails += __do_intel_bb_blit(bops, tiling); > > > @@ -878,10 +878,9 @@ static int render(struct buf_ops *bops, uint32_t tiling, > > > int xe = buf_ops_get_fd(bops); > > > uint32_t fails = 0; > > > char name[128]; > > > - uint32_t devid = intel_get_drm_devid(xe); > > > igt_render_copyfunc_t render_copy = NULL; > > > - igt_debug("%s() gen: %d\n", __func__, intel_gen_legacy(devid)); > > > + igt_debug("%s() gen: %d\n", __func__, intel_gen(xe)); > > > ibb = intel_bb_create(xe, PAGE_SIZE); > > > @@ -1041,7 +1040,7 @@ int igt_main_args("dpib", NULL, help_str, opt_handler, NULL) > > > do_intel_bb_blit(bops, 3, I915_TILING_X); > > > igt_subtest("intel-bb-blit-y") { > > > - igt_require(intel_gen_legacy(intel_get_drm_devid(xe)) >= 6); > > > + igt_require(intel_gen(xe) >= 6); > > > do_intel_bb_blit(bops, 3, I915_TILING_Y); > > > } > > > diff --git a/tests/intel/xe_multigpu_svm.c b/tests/intel/xe_multigpu_svm.c > > > index ab800476e..2c6f81a10 100644 > > > --- a/tests/intel/xe_multigpu_svm.c > > > +++ b/tests/intel/xe_multigpu_svm.c > > > @@ -396,7 +396,6 @@ static void batch_init(int fd, uint32_t vm, uint64_t src_addr, > > > uint64_t batch_addr; > > > void *batch; > > > uint32_t *cmd; > > > - uint16_t dev_id = intel_get_drm_devid(fd); > > > uint32_t mocs_index = intel_get_uc_mocs_index(fd); > > > int i = 0; > > > @@ -412,7 +411,7 @@ static void batch_init(int fd, uint32_t vm, uint64_t src_addr, > > > cmd[i++] = upper_32_bits(src_addr); > > > cmd[i++] = lower_32_bits(dst_addr); > > > cmd[i++] = upper_32_bits(dst_addr); > > > - if (intel_graphics_ver_legacy(dev_id) >= IP_VER(20, 0)) { > > > + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { > > > cmd[i++] = mocs_index << XE2_MEM_COPY_SRC_MOCS_SHIFT | mocs_index; > > > } else { > > > cmd[i++] = mocs_index << GEN12_MEM_COPY_MOCS_SHIFT | mocs_index; > > > diff --git a/tests/intel/xe_pat.c b/tests/intel/xe_pat.c > > > index 96302ad3a..96d544160 100644 > > > --- a/tests/intel/xe_pat.c > > > +++ b/tests/intel/xe_pat.c > > > @@ -119,14 +119,13 @@ static int xe_fetch_pat_sw_config(int fd, struct intel_pat_cache *pat_sw_config) > > > */ > > > static void pat_sanity(int fd) > > > { > > > - uint16_t dev_id = intel_get_drm_devid(fd); > > > struct intel_pat_cache pat_sw_config = {}; > > > int32_t parsed; > > > bool has_uc_comp = false, has_wt = false; > > > parsed = xe_fetch_pat_sw_config(fd, &pat_sw_config); > > > - if (intel_graphics_ver_legacy(dev_id) >= IP_VER(20, 0)) { > > > + if (intel_graphics_ver(fd) >= IP_VER(20, 0)) { > > > for (int i = 0; i < parsed; i++) { > > > uint32_t pat = pat_sw_config.entries[i].pat; > > > if (pat_sw_config.entries[i].rsvd) > > > @@ -898,7 +897,6 @@ static void display_vs_wb_transient(int fd) > > > 3, /* UC (baseline) */ > > > 6, /* L3:XD (uncompressed) */ > > > }; > > > - uint32_t devid = intel_get_drm_devid(fd); > > > igt_render_copyfunc_t render_copy = NULL; > > > igt_crc_t ref_crc = {}, crc = {}; > > > igt_plane_t *primary; > > > @@ -914,7 +912,7 @@ static void display_vs_wb_transient(int fd) > > > int bpp = 32; > > > int i; > > > - igt_require(intel_get_device_info(devid)->graphics_ver >= 20); > > > + igt_require(intel_gen(fd) >= 20); > > > render_copy = igt_get_render_copyfunc(fd); > > > igt_require(render_copy); > > > @@ -1015,10 +1013,8 @@ static uint8_t get_pat_idx_uc(int fd, bool *compressed) > > > static uint8_t get_pat_idx_wt(int fd, bool *compressed) > > > { > > > - uint16_t dev_id = intel_get_drm_devid(fd); > > > - > > > if (compressed) > > > - *compressed = intel_get_device_info(dev_id)->graphics_ver >= 20; > > > + *compressed = intel_gen(fd) >= 20; > > > return intel_get_pat_idx_wt(fd); > > > } > > > @@ -1328,7 +1324,7 @@ int igt_main_args("V", NULL, help_str, opt_handler, NULL) > > > bo_comp_disable_bind(fd); > > > igt_subtest_with_dynamic("pat-index-xelp") { > > > - igt_require(intel_graphics_ver_legacy(dev_id) <= IP_VER(12, 55)); > > > + igt_require(intel_graphics_ver(fd) <= IP_VER(12, 55)); > > > subtest_pat_index_modes_with_regions(fd, xelp_pat_index_modes, > > > ARRAY_SIZE(xelp_pat_index_modes)); > > > } > > > @@ -1346,10 +1342,10 @@ int igt_main_args("V", NULL, help_str, opt_handler, NULL) > > > } > > > igt_subtest_with_dynamic("pat-index-xe2") { > > > - igt_require(intel_get_device_info(dev_id)->graphics_ver >= 20); > > > + igt_require(intel_gen(fd) >= 20); > > > igt_assert(HAS_FLATCCS(dev_id)); > > > - if (intel_graphics_ver_legacy(dev_id) == IP_VER(20, 1)) > > > + if (intel_graphics_ver(fd) == IP_VER(20, 1)) > > > subtest_pat_index_modes_with_regions(fd, bmg_g21_pat_index_modes, > > > ARRAY_SIZE(bmg_g21_pat_index_modes)); > > > else > > > diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c > > > index 318a9994a..ae505a5d7 100644 > > > --- a/tests/intel/xe_query.c > > > +++ b/tests/intel/xe_query.c > > > @@ -380,7 +380,7 @@ test_query_gt_topology(int fd) > > > } > > > /* sanity check EU type */ > > > - if (IS_PONTEVECCHIO(dev_id) || intel_gen_legacy(dev_id) >= 20) { > > > + if (IS_PONTEVECCHIO(dev_id) || intel_gen(fd) >= 20) { > > > igt_assert(topo_types & (1 << DRM_XE_TOPO_SIMD16_EU_PER_DSS)); > > > igt_assert_eq(topo_types & (1 << DRM_XE_TOPO_EU_PER_DSS), 0); > > > } else { > > > @@ -428,7 +428,7 @@ test_query_gt_topology_l3_bank_mask(int fd) > > > } > > > igt_info(" count: %d\n", count); > > > - if (intel_get_device_info(dev_id)->graphics_ver < 20) { > > > + if (intel_gen(fd) < 20) { > > > igt_assert_lt(0, count); > > > } > > > diff --git a/tests/intel/xe_render_copy.c b/tests/intel/xe_render_copy.c > > > index 0a6ae9ca2..a3976b5f1 100644 > > > --- a/tests/intel/xe_render_copy.c > > > +++ b/tests/intel/xe_render_copy.c > > > @@ -136,7 +136,7 @@ static int compare_bufs(struct intel_buf *buf1, struct intel_buf *buf2, > > > static bool buf_is_aux_compressed(struct buf_ops *bops, struct intel_buf *buf) > > > { > > > int xe = buf_ops_get_fd(bops); > > > - unsigned int gen = intel_gen_legacy(buf_ops_get_devid(bops)); > > > + unsigned int gen = intel_gen(buf_ops_get_fd(bops)); > > > uint32_t ccs_size; > > > uint8_t *ptr; > > > bool is_compressed = false; > > > -- > > > 2.43.0 > > > -- Matt Roper Graphics Software Engineer Linux GPU Platform Enablement Intel Corporation ^ permalink raw reply [flat|nested] 12+ messages in thread
* ✓ i915.CI.BAT: success for lib/intel: switch graphics/IP version queries to fd-based APIs (rev2) 2026-01-22 7:15 [PATCH v2 0/3] lib/intel: switch graphics/IP version queries to fd-based APIs Xin Wang ` (2 preceding siblings ...) 2026-01-22 7:15 ` [PATCH v2 3/3] intel/xe: use fd-based graphics/IP version helpers Xin Wang @ 2026-01-22 8:01 ` Patchwork 2026-01-22 8:04 ` ✓ Xe.CI.BAT: " Patchwork 2026-01-22 18:13 ` ✗ Xe.CI.Full: failure " Patchwork 5 siblings, 0 replies; 12+ messages in thread From: Patchwork @ 2026-01-22 8:01 UTC (permalink / raw) To: Xin Wang; +Cc: igt-dev [-- Attachment #1: Type: text/plain, Size: 6542 bytes --] == Series Details == Series: lib/intel: switch graphics/IP version queries to fd-based APIs (rev2) URL : https://patchwork.freedesktop.org/series/160271/ State : success == Summary == CI Bug Log - changes from IGT_8711 -> IGTPW_14394 ==================================================== Summary ------- **SUCCESS** No regressions found. External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/index.html Participating hosts (42 -> 41) ------------------------------ Additional (1): bat-adls-6 Missing (2): bat-dg2-13 fi-snb-2520m Known issues ------------ Here are the changes found in IGTPW_14394 that come from known issues: ### IGT changes ### #### Issues hit #### * igt@gem_lmem_swapping@parallel-random-engines: - bat-adls-6: NOTRUN -> [SKIP][1] ([i915#4613]) +3 other tests skip [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-adls-6/igt@gem_lmem_swapping@parallel-random-engines.html * igt@gem_tiled_pread_basic: - bat-adls-6: NOTRUN -> [SKIP][2] ([i915#3282]) [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-adls-6/igt@gem_tiled_pread_basic.html * igt@i915_selftest@live: - bat-mtlp-8: [PASS][3] -> [DMESG-FAIL][4] ([i915#12061]) +1 other test dmesg-fail [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8711/bat-mtlp-8/igt@i915_selftest@live.html [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-mtlp-8/igt@i915_selftest@live.html * igt@i915_selftest@live@workarounds: - bat-arlh-3: [PASS][5] -> [DMESG-FAIL][6] ([i915#12061]) +1 other test dmesg-fail [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8711/bat-arlh-3/igt@i915_selftest@live@workarounds.html [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-arlh-3/igt@i915_selftest@live@workarounds.html - bat-dg2-14: [PASS][7] -> [DMESG-FAIL][8] ([i915#12061]) +1 other test dmesg-fail [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8711/bat-dg2-14/igt@i915_selftest@live@workarounds.html [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-dg2-14/igt@i915_selftest@live@workarounds.html - bat-arls-6: [PASS][9] -> [DMESG-FAIL][10] ([i915#12061]) +1 other test dmesg-fail [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8711/bat-arls-6/igt@i915_selftest@live@workarounds.html [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-arls-6/igt@i915_selftest@live@workarounds.html * igt@intel_hwmon@hwmon-read: - bat-adls-6: NOTRUN -> [SKIP][11] ([i915#7707]) +1 other test skip [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-adls-6/igt@intel_hwmon@hwmon-read.html * igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legacy: - bat-adls-6: NOTRUN -> [SKIP][12] ([i915#4103]) +1 other test skip [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-adls-6/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legacy.html * igt@kms_dsc@dsc-basic: - bat-adls-6: NOTRUN -> [SKIP][13] ([i915#3555] / [i915#3840]) [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-adls-6/igt@kms_dsc@dsc-basic.html * igt@kms_force_connector_basic@force-load-detect: - bat-adls-6: NOTRUN -> [SKIP][14] [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-adls-6/igt@kms_force_connector_basic@force-load-detect.html * igt@kms_pm_backlight@basic-brightness: - bat-adls-6: NOTRUN -> [SKIP][15] ([i915#5354]) [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-adls-6/igt@kms_pm_backlight@basic-brightness.html * igt@kms_psr@psr-primary-mmap-gtt: - bat-adls-6: NOTRUN -> [SKIP][16] ([i915#1072] / [i915#9732]) +3 other tests skip [16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-adls-6/igt@kms_psr@psr-primary-mmap-gtt.html * igt@kms_setmode@basic-clone-single-crtc: - bat-adls-6: NOTRUN -> [SKIP][17] ([i915#3555]) [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-adls-6/igt@kms_setmode@basic-clone-single-crtc.html * igt@prime_vgem@basic-fence-read: - bat-adls-6: NOTRUN -> [SKIP][18] ([i915#3291]) +2 other tests skip [18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-adls-6/igt@prime_vgem@basic-fence-read.html #### Possible fixes #### * igt@i915_selftest@live@workarounds: - bat-arls-5: [DMESG-FAIL][19] ([i915#12061]) -> [PASS][20] +1 other test pass [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8711/bat-arls-5/igt@i915_selftest@live@workarounds.html [20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-arls-5/igt@i915_selftest@live@workarounds.html - bat-mtlp-9: [DMESG-FAIL][21] ([i915#12061]) -> [PASS][22] +1 other test pass [21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8711/bat-mtlp-9/igt@i915_selftest@live@workarounds.html [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/bat-mtlp-9/igt@i915_selftest@live@workarounds.html [i915#1072]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1072 [i915#12061]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12061 [i915#3282]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3282 [i915#3291]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3291 [i915#3555]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3555 [i915#3840]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3840 [i915#4103]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4103 [i915#4613]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4613 [i915#5354]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5354 [i915#7707]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7707 [i915#9732]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9732 Build changes ------------- * CI: CI-20190529 -> None * IGT: IGT_8711 -> IGTPW_14394 * Linux: CI_DRM_17867 -> CI_DRM_17868 CI-20190529: 20190529 CI_DRM_17867: ad2a046603cba140214aed34015ed5027441e85a @ git://anongit.freedesktop.org/gfx-ci/linux CI_DRM_17868: 40800011414446888105f6beae6dd3fac56516aa @ git://anongit.freedesktop.org/gfx-ci/linux IGTPW_14394: d48f7c6331b0ce4d380643b336f9c81e49aa8b7a @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git IGT_8711: 38428617bae65b39b306f79217ac922ebee3b477 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git == Logs == For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_14394/index.html [-- Attachment #2: Type: text/html, Size: 7939 bytes --] ^ permalink raw reply [flat|nested] 12+ messages in thread
* ✓ Xe.CI.BAT: success for lib/intel: switch graphics/IP version queries to fd-based APIs (rev2) 2026-01-22 7:15 [PATCH v2 0/3] lib/intel: switch graphics/IP version queries to fd-based APIs Xin Wang ` (3 preceding siblings ...) 2026-01-22 8:01 ` ✓ i915.CI.BAT: success for lib/intel: switch graphics/IP version queries to fd-based APIs (rev2) Patchwork @ 2026-01-22 8:04 ` Patchwork 2026-01-22 18:13 ` ✗ Xe.CI.Full: failure " Patchwork 5 siblings, 0 replies; 12+ messages in thread From: Patchwork @ 2026-01-22 8:04 UTC (permalink / raw) To: Xin Wang; +Cc: igt-dev [-- Attachment #1: Type: text/plain, Size: 4934 bytes --] == Series Details == Series: lib/intel: switch graphics/IP version queries to fd-based APIs (rev2) URL : https://patchwork.freedesktop.org/series/160271/ State : success == Summary == CI Bug Log - changes from XEIGT_8711_BAT -> XEIGTPW_14394_BAT ==================================================== Summary ------- **WARNING** Minor unknown changes coming with XEIGTPW_14394_BAT need to be verified manually. If you think the reported changes have nothing to do with the changes introduced in XEIGTPW_14394_BAT, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them to document this new failure mode, which will reduce false positives in CI. Participating hosts (12 -> 12) ------------------------------ No changes in participating hosts Possible new issues ------------------- Here are the unknown changes that may have been introduced in XEIGTPW_14394_BAT: ### IGT changes ### #### Warnings #### * igt@xe_pat@pat-index-xe2: - bat-adlp-vm: [SKIP][1] ([Intel XE#977]) -> [SKIP][2] [1]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/bat-adlp-vm/igt@xe_pat@pat-index-xe2.html [2]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/bat-adlp-vm/igt@xe_pat@pat-index-xe2.html - bat-atsm-2: [SKIP][3] ([Intel XE#977]) -> [SKIP][4] [3]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/bat-atsm-2/igt@xe_pat@pat-index-xe2.html [4]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/bat-atsm-2/igt@xe_pat@pat-index-xe2.html - bat-adlp-7: [SKIP][5] ([Intel XE#977]) -> [SKIP][6] [5]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/bat-adlp-7/igt@xe_pat@pat-index-xe2.html [6]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/bat-adlp-7/igt@xe_pat@pat-index-xe2.html - bat-dg2-oem2: [SKIP][7] ([Intel XE#977]) -> [SKIP][8] [7]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/bat-dg2-oem2/igt@xe_pat@pat-index-xe2.html [8]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/bat-dg2-oem2/igt@xe_pat@pat-index-xe2.html * igt@xe_pat@pat-index-xelp: - bat-bmg-2: [SKIP][9] ([Intel XE#2245]) -> [SKIP][10] [9]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/bat-bmg-2/igt@xe_pat@pat-index-xelp.html [10]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/bat-bmg-2/igt@xe_pat@pat-index-xelp.html - bat-bmg-1: [SKIP][11] ([Intel XE#2245]) -> [SKIP][12] [11]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/bat-bmg-1/igt@xe_pat@pat-index-xelp.html [12]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/bat-bmg-1/igt@xe_pat@pat-index-xelp.html - bat-lnl-2: [SKIP][13] ([Intel XE#977]) -> [SKIP][14] [13]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/bat-lnl-2/igt@xe_pat@pat-index-xelp.html [14]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/bat-lnl-2/igt@xe_pat@pat-index-xelp.html - bat-ptl-2: [SKIP][15] ([Intel XE#5771]) -> [SKIP][16] [15]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/bat-ptl-2/igt@xe_pat@pat-index-xelp.html [16]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/bat-ptl-2/igt@xe_pat@pat-index-xelp.html - bat-ptl-1: [SKIP][17] ([Intel XE#5771]) -> [SKIP][18] [17]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/bat-ptl-1/igt@xe_pat@pat-index-xelp.html [18]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/bat-ptl-1/igt@xe_pat@pat-index-xelp.html - bat-ptl-vm: [SKIP][19] ([Intel XE#5771]) -> [SKIP][20] [19]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/bat-ptl-vm/igt@xe_pat@pat-index-xelp.html [20]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/bat-ptl-vm/igt@xe_pat@pat-index-xelp.html - bat-lnl-1: [SKIP][21] ([Intel XE#977]) -> [SKIP][22] [21]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/bat-lnl-1/igt@xe_pat@pat-index-xelp.html [22]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/bat-lnl-1/igt@xe_pat@pat-index-xelp.html [Intel XE#2245]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2245 [Intel XE#5771]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5771 [Intel XE#977]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/977 Build changes ------------- * IGT: IGT_8711 -> IGTPW_14394 * Linux: xe-4432-ad2a046603cba140214aed34015ed5027441e85a -> xe-4433-40800011414446888105f6beae6dd3fac56516aa IGTPW_14394: d48f7c6331b0ce4d380643b336f9c81e49aa8b7a @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git IGT_8711: 38428617bae65b39b306f79217ac922ebee3b477 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git xe-4432-ad2a046603cba140214aed34015ed5027441e85a: ad2a046603cba140214aed34015ed5027441e85a xe-4433-40800011414446888105f6beae6dd3fac56516aa: 40800011414446888105f6beae6dd3fac56516aa == Logs == For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/index.html [-- Attachment #2: Type: text/html, Size: 6144 bytes --] ^ permalink raw reply [flat|nested] 12+ messages in thread
* ✗ Xe.CI.Full: failure for lib/intel: switch graphics/IP version queries to fd-based APIs (rev2) 2026-01-22 7:15 [PATCH v2 0/3] lib/intel: switch graphics/IP version queries to fd-based APIs Xin Wang ` (4 preceding siblings ...) 2026-01-22 8:04 ` ✓ Xe.CI.BAT: " Patchwork @ 2026-01-22 18:13 ` Patchwork 5 siblings, 0 replies; 12+ messages in thread From: Patchwork @ 2026-01-22 18:13 UTC (permalink / raw) To: Xin Wang; +Cc: igt-dev [-- Attachment #1: Type: text/plain, Size: 45875 bytes --] == Series Details == Series: lib/intel: switch graphics/IP version queries to fd-based APIs (rev2) URL : https://patchwork.freedesktop.org/series/160271/ State : failure == Summary == CI Bug Log - changes from XEIGT_8711_FULL -> XEIGTPW_14394_FULL ==================================================== Summary ------- **FAILURE** Serious unknown changes coming with XEIGTPW_14394_FULL absolutely need to be verified manually. If you think the reported changes have nothing to do with the changes introduced in XEIGTPW_14394_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them to document this new failure mode, which will reduce false positives in CI. Participating hosts (2 -> 2) ------------------------------ No changes in participating hosts Possible new issues ------------------- Here are the unknown changes that may have been introduced in XEIGTPW_14394_FULL: ### IGT changes ### #### Possible regressions #### * igt@kms_ccs@bad-pixel-format-4-tiled-dg2-mc-ccs: - shard-bmg: NOTRUN -> [SKIP][1] +21 other tests skip [1]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-3/igt@kms_ccs@bad-pixel-format-4-tiled-dg2-mc-ccs.html * igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ab-dp2-hdmi-a3: - shard-bmg: [PASS][2] -> [FAIL][3] [2]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-10/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ab-dp2-hdmi-a3.html [3]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-1/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ab-dp2-hdmi-a3.html * igt@xe_exec_multi_queue@max-queues-preempt-mode-dyn-priority-smem: - shard-lnl: NOTRUN -> [SKIP][4] +29 other tests skip [4]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-4/igt@xe_exec_multi_queue@max-queues-preempt-mode-dyn-priority-smem.html #### Warnings #### * igt@kms_ccs@bad-pixel-format-4-tiled-mtl-rc-ccs-cc: - shard-lnl: [SKIP][5] ([Intel XE#2887]) -> [SKIP][6] +77 other tests skip [5]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-lnl-8/igt@kms_ccs@bad-pixel-format-4-tiled-mtl-rc-ccs-cc.html [6]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-3/igt@kms_ccs@bad-pixel-format-4-tiled-mtl-rc-ccs-cc.html * igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs: - shard-bmg: [SKIP][7] ([Intel XE#3432]) -> [SKIP][8] +9 other tests skip [7]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-3/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs.html [8]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-10/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs.html * igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs: - shard-lnl: [SKIP][9] ([Intel XE#3432]) -> [SKIP][10] +8 other tests skip [9]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-lnl-3/igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs.html [10]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-8/igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs.html * igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs: - shard-bmg: [SKIP][11] ([Intel XE#2887]) -> [SKIP][12] +81 other tests skip [11]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-4/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html [12]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-1/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html * igt@xe_exec_multi_queue@many-queues-preempt-mode-userptr: - shard-lnl: [SKIP][13] ([Intel XE#6874]) -> [SKIP][14] +156 other tests skip [13]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-lnl-5/igt@xe_exec_multi_queue@many-queues-preempt-mode-userptr.html [14]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-8/igt@xe_exec_multi_queue@many-queues-preempt-mode-userptr.html * igt@xe_exec_multi_queue@two-queues-priority: - shard-bmg: [SKIP][15] ([Intel XE#6874]) -> [SKIP][16] +157 other tests skip [15]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-8/igt@xe_exec_multi_queue@two-queues-priority.html [16]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-2/igt@xe_exec_multi_queue@two-queues-priority.html * igt@xe_oa@oa-tlb-invalidate: - shard-lnl: [SKIP][17] ([Intel XE#2248]) -> [SKIP][18] [17]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-lnl-3/igt@xe_oa@oa-tlb-invalidate.html [18]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-8/igt@xe_oa@oa-tlb-invalidate.html - shard-bmg: [SKIP][19] ([Intel XE#2248]) -> [SKIP][20] [19]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-1/igt@xe_oa@oa-tlb-invalidate.html [20]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-8/igt@xe_oa@oa-tlb-invalidate.html * igt@xe_pat@pat-index-xelp: - shard-lnl: [SKIP][21] ([Intel XE#977]) -> [SKIP][22] [21]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-lnl-3/igt@xe_pat@pat-index-xelp.html [22]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-3/igt@xe_pat@pat-index-xelp.html - shard-bmg: [SKIP][23] ([Intel XE#2245]) -> [SKIP][24] [23]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-8/igt@xe_pat@pat-index-xelp.html [24]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-3/igt@xe_pat@pat-index-xelp.html Known issues ------------ Here are the changes found in XEIGTPW_14394_FULL that come from known issues: ### IGT changes ### #### Issues hit #### * igt@kms_big_fb@4-tiled-16bpp-rotate-270: - shard-lnl: NOTRUN -> [SKIP][25] ([Intel XE#1407]) +1 other test skip [25]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-4/igt@kms_big_fb@4-tiled-16bpp-rotate-270.html * igt@kms_big_fb@4-tiled-64bpp-rotate-270: - shard-bmg: NOTRUN -> [SKIP][26] ([Intel XE#2327]) [26]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-7/igt@kms_big_fb@4-tiled-64bpp-rotate-270.html * igt@kms_big_fb@yf-tiled-8bpp-rotate-0: - shard-lnl: NOTRUN -> [SKIP][27] ([Intel XE#1124]) +3 other tests skip [27]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-4/igt@kms_big_fb@yf-tiled-8bpp-rotate-0.html * igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-hflip: - shard-bmg: NOTRUN -> [SKIP][28] ([Intel XE#1124]) +2 other tests skip [28]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-7/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-hflip.html * igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p: - shard-bmg: NOTRUN -> [SKIP][29] ([Intel XE#2314] / [Intel XE#2894]) [29]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-2/igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p.html * igt@kms_bw@linear-tiling-2-displays-2560x1440p: - shard-lnl: NOTRUN -> [SKIP][30] ([Intel XE#367]) [30]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-4/igt@kms_bw@linear-tiling-2-displays-2560x1440p.html * igt@kms_bw@linear-tiling-4-displays-2560x1440p: - shard-bmg: NOTRUN -> [SKIP][31] ([Intel XE#367]) [31]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-4/igt@kms_bw@linear-tiling-4-displays-2560x1440p.html * igt@kms_ccs@crc-primary-rotation-180-4-tiled-lnl-ccs@pipe-c-dp-2: - shard-bmg: NOTRUN -> [SKIP][32] ([Intel XE#2652] / [Intel XE#787]) +8 other tests skip [32]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-9/igt@kms_ccs@crc-primary-rotation-180-4-tiled-lnl-ccs@pipe-c-dp-2.html * igt@kms_chamelium_color@ctm-red-to-blue: - shard-bmg: NOTRUN -> [SKIP][33] ([Intel XE#2325]) [33]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-10/igt@kms_chamelium_color@ctm-red-to-blue.html * igt@kms_chamelium_color@degamma: - shard-lnl: NOTRUN -> [SKIP][34] ([Intel XE#306]) [34]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-8/igt@kms_chamelium_color@degamma.html * igt@kms_chamelium_edid@dp-edid-resolution-list: - shard-bmg: NOTRUN -> [SKIP][35] ([Intel XE#2252]) +4 other tests skip [35]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-9/igt@kms_chamelium_edid@dp-edid-resolution-list.html * igt@kms_chamelium_hpd@dp-hpd-enable-disable-mode: - shard-lnl: NOTRUN -> [SKIP][36] ([Intel XE#373]) +8 other tests skip [36]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-4/igt@kms_chamelium_hpd@dp-hpd-enable-disable-mode.html * igt@kms_content_protection@atomic: - shard-bmg: NOTRUN -> [FAIL][37] ([Intel XE#1178] / [Intel XE#3304]) +1 other test fail [37]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-7/igt@kms_content_protection@atomic.html - shard-lnl: NOTRUN -> [SKIP][38] ([Intel XE#3278]) [38]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-5/igt@kms_content_protection@atomic.html * igt@kms_content_protection@atomic-hdcp14: - shard-bmg: NOTRUN -> [FAIL][39] ([Intel XE#3304]) +1 other test fail [39]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-3/igt@kms_content_protection@atomic-hdcp14.html * igt@kms_content_protection@dp-mst-lic-type-0: - shard-lnl: NOTRUN -> [SKIP][40] ([Intel XE#307] / [Intel XE#6974]) [40]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-1/igt@kms_content_protection@dp-mst-lic-type-0.html * igt@kms_cursor_crc@cursor-offscreen-512x512: - shard-bmg: NOTRUN -> [SKIP][41] ([Intel XE#2321]) [41]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-9/igt@kms_cursor_crc@cursor-offscreen-512x512.html * igt@kms_cursor_crc@cursor-rapid-movement-128x42: - shard-bmg: NOTRUN -> [SKIP][42] ([Intel XE#2320]) [42]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-10/igt@kms_cursor_crc@cursor-rapid-movement-128x42.html * igt@kms_cursor_crc@cursor-sliding-64x21: - shard-lnl: NOTRUN -> [SKIP][43] ([Intel XE#1424]) +2 other tests skip [43]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-8/igt@kms_cursor_crc@cursor-sliding-64x21.html * igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size: - shard-lnl: NOTRUN -> [SKIP][44] ([Intel XE#309]) +2 other tests skip [44]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-5/igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size.html * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions: - shard-bmg: [PASS][45] -> [FAIL][46] ([Intel XE#5299]) [45]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-2/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html [46]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-4/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html * igt@kms_cursor_legacy@flip-vs-cursor-legacy: - shard-bmg: NOTRUN -> [FAIL][47] ([Intel XE#5299]) [47]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-1/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html * igt@kms_dp_link_training@non-uhbr-sst: - shard-lnl: NOTRUN -> [SKIP][48] ([Intel XE#4354]) [48]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-3/igt@kms_dp_link_training@non-uhbr-sst.html * igt@kms_feature_discovery@chamelium: - shard-bmg: NOTRUN -> [SKIP][49] ([Intel XE#2372]) [49]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-7/igt@kms_feature_discovery@chamelium.html - shard-lnl: NOTRUN -> [SKIP][50] ([Intel XE#701]) [50]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-7/igt@kms_feature_discovery@chamelium.html * igt@kms_flip@2x-flip-vs-expired-vblank-interruptible: - shard-bmg: [PASS][51] -> [FAIL][52] ([Intel XE#7030]) [51]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-10/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible.html [52]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-1/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible.html * igt@kms_flip@2x-wf_vblank-ts-check-interruptible: - shard-lnl: NOTRUN -> [SKIP][53] ([Intel XE#1421]) +3 other tests skip [53]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-1/igt@kms_flip@2x-wf_vblank-ts-check-interruptible.html * igt@kms_flip_scaled_crc@flip-32bpp-4tile-to-64bpp-4tile-downscaling: - shard-lnl: NOTRUN -> [SKIP][54] ([Intel XE#1397] / [Intel XE#1745]) [54]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-8/igt@kms_flip_scaled_crc@flip-32bpp-4tile-to-64bpp-4tile-downscaling.html * igt@kms_flip_scaled_crc@flip-32bpp-4tile-to-64bpp-4tile-downscaling@pipe-a-default-mode: - shard-lnl: NOTRUN -> [SKIP][55] ([Intel XE#1397]) [55]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-8/igt@kms_flip_scaled_crc@flip-32bpp-4tile-to-64bpp-4tile-downscaling@pipe-a-default-mode.html * igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-upscaling: - shard-bmg: NOTRUN -> [SKIP][56] ([Intel XE#2293] / [Intel XE#2380]) +1 other test skip [56]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-2/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-upscaling.html * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling: - shard-lnl: NOTRUN -> [SKIP][57] ([Intel XE#1401] / [Intel XE#1745]) +2 other tests skip [57]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-5/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling.html * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-default-mode: - shard-lnl: NOTRUN -> [SKIP][58] ([Intel XE#1401]) +2 other tests skip [58]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-5/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-default-mode.html * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-valid-mode: - shard-bmg: NOTRUN -> [SKIP][59] ([Intel XE#2293]) +1 other test skip [59]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-3/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-valid-mode.html * igt@kms_frontbuffer_tracking@drrs-1p-offscreen-pri-indfb-draw-mmap-wc: - shard-lnl: NOTRUN -> [SKIP][60] ([Intel XE#6312]) +1 other test skip [60]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-2/igt@kms_frontbuffer_tracking@drrs-1p-offscreen-pri-indfb-draw-mmap-wc.html * igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-indfb-draw-mmap-wc: - shard-bmg: NOTRUN -> [SKIP][61] ([Intel XE#2311]) +10 other tests skip [61]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-4/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-indfb-draw-mmap-wc.html * igt@kms_frontbuffer_tracking@drrs-argb161616f-draw-render: - shard-lnl: NOTRUN -> [SKIP][62] ([Intel XE#7061]) +1 other test skip [62]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-1/igt@kms_frontbuffer_tracking@drrs-argb161616f-draw-render.html * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-mmap-wc: - shard-bmg: NOTRUN -> [SKIP][63] ([Intel XE#4141]) +7 other tests skip [63]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-mmap-wc.html * igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-pri-indfb-draw-blt: - shard-lnl: NOTRUN -> [SKIP][64] ([Intel XE#651]) +4 other tests skip [64]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-7/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-pri-indfb-draw-blt.html * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-indfb-msflip-blt: - shard-lnl: NOTRUN -> [SKIP][65] ([Intel XE#656]) +29 other tests skip [65]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-5/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-indfb-msflip-blt.html * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-spr-indfb-draw-mmap-wc: - shard-bmg: NOTRUN -> [SKIP][66] ([Intel XE#2313]) +17 other tests skip [66]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-1/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-spr-indfb-draw-mmap-wc.html * igt@kms_frontbuffer_tracking@psr-argb161616f-draw-mmap-wc: - shard-bmg: NOTRUN -> [SKIP][67] ([Intel XE#7061]) +1 other test skip [67]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-2/igt@kms_frontbuffer_tracking@psr-argb161616f-draw-mmap-wc.html * igt@kms_hdmi_inject@inject-audio: - shard-lnl: NOTRUN -> [SKIP][68] ([Intel XE#1470] / [Intel XE#2853]) [68]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-1/igt@kms_hdmi_inject@inject-audio.html * igt@kms_joiner@invalid-modeset-ultra-joiner: - shard-lnl: NOTRUN -> [SKIP][69] ([Intel XE#6900]) [69]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-1/igt@kms_joiner@invalid-modeset-ultra-joiner.html - shard-bmg: NOTRUN -> [SKIP][70] ([Intel XE#6911]) [70]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-10/igt@kms_joiner@invalid-modeset-ultra-joiner.html * igt@kms_pipe_stress@stress-xrgb8888-yftiled: - shard-bmg: NOTRUN -> [SKIP][71] ([Intel XE#6912]) [71]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-3/igt@kms_pipe_stress@stress-xrgb8888-yftiled.html * igt@kms_pipe_stress@stress-xrgb8888-ytiled: - shard-bmg: NOTRUN -> [SKIP][72] ([Intel XE#4329] / [Intel XE#6912]) [72]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-10/igt@kms_pipe_stress@stress-xrgb8888-ytiled.html - shard-lnl: NOTRUN -> [SKIP][73] ([Intel XE#4329] / [Intel XE#6912]) [73]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-1/igt@kms_pipe_stress@stress-xrgb8888-ytiled.html * igt@kms_plane_scaling@planes-downscale-factor-0-5-upscale-factor-0-25: - shard-lnl: NOTRUN -> [SKIP][74] ([Intel XE#6886]) +3 other tests skip [74]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-2/igt@kms_plane_scaling@planes-downscale-factor-0-5-upscale-factor-0-25.html * igt@kms_plane_scaling@planes-downscale-factor-0-75@pipe-a: - shard-bmg: NOTRUN -> [SKIP][75] ([Intel XE#6886]) +4 other tests skip [75]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-3/igt@kms_plane_scaling@planes-downscale-factor-0-75@pipe-a.html * igt@kms_pm_dc@dc3co-vpb-simulation: - shard-bmg: NOTRUN -> [SKIP][76] ([Intel XE#2391]) [76]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-4/igt@kms_pm_dc@dc3co-vpb-simulation.html * igt@kms_pm_dc@dc6-psr: - shard-lnl: [PASS][77] -> [FAIL][78] ([Intel XE#718]) +1 other test fail [77]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-lnl-7/igt@kms_pm_dc@dc6-psr.html [78]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-4/igt@kms_pm_dc@dc6-psr.html * igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-exceed-fully-sf: - shard-lnl: NOTRUN -> [SKIP][79] ([Intel XE#1406] / [Intel XE#2893] / [Intel XE#4608]) [79]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-8/igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-exceed-fully-sf.html * igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-exceed-fully-sf@pipe-b-edp-1: - shard-lnl: NOTRUN -> [SKIP][80] ([Intel XE#1406] / [Intel XE#4608]) +1 other test skip [80]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-8/igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-exceed-fully-sf@pipe-b-edp-1.html * igt@kms_psr2_sf@pr-overlay-plane-update-continuous-sf: - shard-lnl: NOTRUN -> [SKIP][81] ([Intel XE#1406] / [Intel XE#2893]) +2 other tests skip [81]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-1/igt@kms_psr2_sf@pr-overlay-plane-update-continuous-sf.html - shard-bmg: NOTRUN -> [SKIP][82] ([Intel XE#1406] / [Intel XE#1489]) +4 other tests skip [82]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-10/igt@kms_psr2_sf@pr-overlay-plane-update-continuous-sf.html * igt@kms_psr2_su@page_flip-nv12: - shard-lnl: NOTRUN -> [SKIP][83] ([Intel XE#1128] / [Intel XE#1406]) [83]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-8/igt@kms_psr2_su@page_flip-nv12.html * igt@kms_psr@fbc-psr2-no-drrs@edp-1: - shard-lnl: NOTRUN -> [SKIP][84] ([Intel XE#1406] / [Intel XE#4609]) [84]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-4/igt@kms_psr@fbc-psr2-no-drrs@edp-1.html * igt@kms_psr@pr-cursor-plane-move: - shard-lnl: NOTRUN -> [SKIP][85] ([Intel XE#1406]) +5 other tests skip [85]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-8/igt@kms_psr@pr-cursor-plane-move.html * igt@kms_psr@pr-primary-render: - shard-bmg: NOTRUN -> [SKIP][86] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850]) +8 other tests skip [86]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-9/igt@kms_psr@pr-primary-render.html * igt@kms_psr_stress_test@flip-primary-invalidate-overlay: - shard-bmg: NOTRUN -> [SKIP][87] ([Intel XE#1406] / [Intel XE#2414]) [87]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-10/igt@kms_psr_stress_test@flip-primary-invalidate-overlay.html * igt@kms_psr_stress_test@invalidate-primary-flip-overlay: - shard-lnl: [PASS][88] -> [SKIP][89] ([Intel XE#1406] / [Intel XE#4692]) [88]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-lnl-4/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html [89]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-7/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html * igt@kms_rotation_crc@primary-rotation-270: - shard-lnl: NOTRUN -> [SKIP][90] ([Intel XE#3414] / [Intel XE#3904]) +1 other test skip [90]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-1/igt@kms_rotation_crc@primary-rotation-270.html * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90: - shard-bmg: NOTRUN -> [SKIP][91] ([Intel XE#3414] / [Intel XE#3904]) +3 other tests skip [91]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-2/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90.html * igt@kms_sharpness_filter@filter-basic: - shard-bmg: NOTRUN -> [SKIP][92] ([Intel XE#6503]) +1 other test skip [92]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-7/igt@kms_sharpness_filter@filter-basic.html * igt@kms_tiled_display@basic-test-pattern: - shard-bmg: NOTRUN -> [FAIL][93] ([Intel XE#1729]) [93]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-10/igt@kms_tiled_display@basic-test-pattern.html * igt@kms_vrr@cmrr@pipe-a-edp-1: - shard-lnl: NOTRUN -> [FAIL][94] ([Intel XE#4459]) +1 other test fail [94]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-3/igt@kms_vrr@cmrr@pipe-a-edp-1.html * igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1: - shard-lnl: [PASS][95] -> [FAIL][96] ([Intel XE#2142]) +1 other test fail [95]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-lnl-3/igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1.html [96]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-5/igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1.html * igt@sriov_basic@enable-vfs-bind-unbind-each-numvfs-all: - shard-lnl: NOTRUN -> [SKIP][97] ([Intel XE#1091] / [Intel XE#2849]) [97]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-4/igt@sriov_basic@enable-vfs-bind-unbind-each-numvfs-all.html * igt@xe_compute@ccs-mode-basic: - shard-bmg: NOTRUN -> [SKIP][98] ([Intel XE#6599]) +1 other test skip [98]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-8/igt@xe_compute@ccs-mode-basic.html * igt@xe_compute@ccs-mode-compute-kernel: - shard-lnl: NOTRUN -> [SKIP][99] ([Intel XE#1447]) +1 other test skip [99]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-5/igt@xe_compute@ccs-mode-compute-kernel.html * igt@xe_eudebug@basic-connect: - shard-lnl: NOTRUN -> [SKIP][100] ([Intel XE#4837]) +4 other tests skip [100]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-4/igt@xe_eudebug@basic-connect.html * igt@xe_eudebug@basic-read-event: - shard-bmg: NOTRUN -> [SKIP][101] ([Intel XE#4837]) +1 other test skip [101]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-3/igt@xe_eudebug@basic-read-event.html * igt@xe_eudebug_online@pagefault-read-stress: - shard-bmg: NOTRUN -> [SKIP][102] ([Intel XE#6665] / [Intel XE#6681]) [102]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-8/igt@xe_eudebug_online@pagefault-read-stress.html * igt@xe_eudebug_online@pagefault-write-stress: - shard-lnl: NOTRUN -> [SKIP][103] ([Intel XE#6665]) [103]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-2/igt@xe_eudebug_online@pagefault-write-stress.html * igt@xe_eudebug_online@preempt-breakpoint: - shard-lnl: NOTRUN -> [SKIP][104] ([Intel XE#4837] / [Intel XE#6665]) +2 other tests skip [104]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-1/igt@xe_eudebug_online@preempt-breakpoint.html * igt@xe_eudebug_online@tdctl-parameters: - shard-bmg: NOTRUN -> [SKIP][105] ([Intel XE#4837] / [Intel XE#6665]) +4 other tests skip [105]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-2/igt@xe_eudebug_online@tdctl-parameters.html * igt@xe_evict@evict-beng-large-external-cm: - shard-lnl: NOTRUN -> [SKIP][106] ([Intel XE#688]) +7 other tests skip [106]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-4/igt@xe_evict@evict-beng-large-external-cm.html * igt@xe_exec_basic@multigpu-many-execqueues-many-vm-null-rebind: - shard-bmg: NOTRUN -> [SKIP][107] ([Intel XE#2322]) +1 other test skip [107]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-2/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-null-rebind.html * igt@xe_exec_basic@multigpu-once-null-defer-bind: - shard-lnl: NOTRUN -> [SKIP][108] ([Intel XE#1392]) +3 other tests skip [108]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-8/igt@xe_exec_basic@multigpu-once-null-defer-bind.html * igt@xe_exec_system_allocator@many-large-mmap-race: - shard-lnl: [PASS][109] -> [DMESG-WARN][110] ([Intel XE#7063]) +10 other tests dmesg-warn [109]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-lnl-4/igt@xe_exec_system_allocator@many-large-mmap-race.html [110]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-2/igt@xe_exec_system_allocator@many-large-mmap-race.html * igt@xe_exec_system_allocator@many-stride-mmap-free-madvise: - shard-lnl: NOTRUN -> [DMESG-WARN][111] ([Intel XE#7063]) [111]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-2/igt@xe_exec_system_allocator@many-stride-mmap-free-madvise.html * igt@xe_exec_system_allocator@process-many-mmap-huge-nomemset: - shard-lnl: NOTRUN -> [SKIP][112] ([Intel XE#4943]) +9 other tests skip [112]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-4/igt@xe_exec_system_allocator@process-many-mmap-huge-nomemset.html * igt@xe_exec_system_allocator@threads-many-stride-mmap-new-huge: - shard-bmg: NOTRUN -> [SKIP][113] ([Intel XE#4943]) +9 other tests skip [113]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-10/igt@xe_exec_system_allocator@threads-many-stride-mmap-new-huge.html * igt@xe_exec_threads@threads-bal-mixed-fd-userptr: - shard-bmg: [PASS][114] -> [FAIL][115] ([Intel XE#5625]) [114]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-1/igt@xe_exec_threads@threads-bal-mixed-fd-userptr.html [115]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-7/igt@xe_exec_threads@threads-bal-mixed-fd-userptr.html * igt@xe_intel_bb@bb-with-allocator: - shard-lnl: [PASS][116] -> [DMESG-WARN][117] ([Intel XE#4537] / [Intel XE#7063]) [116]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-lnl-3/igt@xe_intel_bb@bb-with-allocator.html [117]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-2/igt@xe_intel_bb@bb-with-allocator.html * igt@xe_live_ktest@xe_eudebug: - shard-lnl: NOTRUN -> [SKIP][118] ([Intel XE#2833]) [118]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-4/igt@xe_live_ktest@xe_eudebug.html * igt@xe_mmap@pci-membarrier: - shard-lnl: NOTRUN -> [SKIP][119] ([Intel XE#5100]) [119]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-5/igt@xe_mmap@pci-membarrier.html * igt@xe_multigpu_svm@mgpu-atomic-op-prefetch: - shard-bmg: NOTRUN -> [SKIP][120] ([Intel XE#6964]) +2 other tests skip [120]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-3/igt@xe_multigpu_svm@mgpu-atomic-op-prefetch.html * igt@xe_multigpu_svm@mgpu-pagefault-conflict: - shard-lnl: NOTRUN -> [SKIP][121] ([Intel XE#6964]) +3 other tests skip [121]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-2/igt@xe_multigpu_svm@mgpu-pagefault-conflict.html * igt@xe_pat@pat-index-xehpc: - shard-lnl: NOTRUN -> [SKIP][122] ([Intel XE#1420] / [Intel XE#2838]) [122]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-7/igt@xe_pat@pat-index-xehpc.html - shard-bmg: NOTRUN -> [SKIP][123] ([Intel XE#1420]) [123]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-7/igt@xe_pat@pat-index-xehpc.html * igt@xe_pm@d3hot-i2c: - shard-lnl: NOTRUN -> [SKIP][124] ([Intel XE#5742]) [124]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-5/igt@xe_pm@d3hot-i2c.html - shard-bmg: NOTRUN -> [SKIP][125] ([Intel XE#5742]) [125]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-3/igt@xe_pm@d3hot-i2c.html * igt@xe_pm@s3-exec-after: - shard-lnl: NOTRUN -> [SKIP][126] ([Intel XE#584]) [126]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-5/igt@xe_pm@s3-exec-after.html * igt@xe_pmu@all-fn-engine-activity-load: - shard-lnl: NOTRUN -> [SKIP][127] ([Intel XE#4650]) [127]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-7/igt@xe_pmu@all-fn-engine-activity-load.html * igt@xe_query@multigpu-query-topology-l3-bank-mask: - shard-lnl: NOTRUN -> [SKIP][128] ([Intel XE#944]) +2 other tests skip [128]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-5/igt@xe_query@multigpu-query-topology-l3-bank-mask.html - shard-bmg: NOTRUN -> [SKIP][129] ([Intel XE#944]) +1 other test skip [129]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-7/igt@xe_query@multigpu-query-topology-l3-bank-mask.html * igt@xe_sriov_auto_provisioning@selfconfig-basic: - shard-lnl: NOTRUN -> [SKIP][130] ([Intel XE#4130]) [130]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-2/igt@xe_sriov_auto_provisioning@selfconfig-basic.html * igt@xe_sriov_auto_provisioning@selfconfig-reprovision-increase-numvfs: - shard-bmg: [PASS][131] -> [FAIL][132] ([Intel XE#5937]) +1 other test fail [131]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-1/igt@xe_sriov_auto_provisioning@selfconfig-reprovision-increase-numvfs.html [132]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-10/igt@xe_sriov_auto_provisioning@selfconfig-reprovision-increase-numvfs.html * igt@xe_sriov_flr@flr-vfs-parallel: - shard-bmg: [PASS][133] -> [FAIL][134] ([Intel XE#6569]) +1 other test fail [133]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-1/igt@xe_sriov_flr@flr-vfs-parallel.html [134]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-9/igt@xe_sriov_flr@flr-vfs-parallel.html #### Possible fixes #### * igt@core_hotunplug@hotreplug: - shard-bmg: [INCOMPLETE][135] -> [PASS][136] [135]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-3/igt@core_hotunplug@hotreplug.html [136]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-8/igt@core_hotunplug@hotreplug.html * igt@kms_async_flips@async-flip-with-page-flip-events-linear-atomic@pipe-c-edp-1: - shard-lnl: [FAIL][137] ([Intel XE#6054]) -> [PASS][138] +3 other tests pass [137]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-lnl-7/igt@kms_async_flips@async-flip-with-page-flip-events-linear-atomic@pipe-c-edp-1.html [138]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-8/igt@kms_async_flips@async-flip-with-page-flip-events-linear-atomic@pipe-c-edp-1.html * igt@kms_atomic@plane-invalid-params@pipe-a-edp-1: - shard-lnl: [DMESG-WARN][139] ([Intel XE#7063]) -> [PASS][140] +6 other tests pass [139]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-lnl-1/igt@kms_atomic@plane-invalid-params@pipe-a-edp-1.html [140]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-4/igt@kms_atomic@plane-invalid-params@pipe-a-edp-1.html * igt@kms_flip@2x-flip-vs-suspend: - shard-bmg: [INCOMPLETE][141] ([Intel XE#2049] / [Intel XE#2597]) -> [PASS][142] +1 other test pass [141]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-8/igt@kms_flip@2x-flip-vs-suspend.html [142]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-3/igt@kms_flip@2x-flip-vs-suspend.html * igt@kms_plane_lowres@tiling-x: - shard-bmg: [INCOMPLETE][143] ([Intel XE#5681]) -> [PASS][144] [143]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-2/igt@kms_plane_lowres@tiling-x.html [144]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-7/igt@kms_plane_lowres@tiling-x.html * igt@kms_plane_lowres@tiling-x@pipe-c-hdmi-a-3: - shard-bmg: [INCOMPLETE][145] ([Intel XE#6652]) -> [PASS][146] [145]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-2/igt@kms_plane_lowres@tiling-x@pipe-c-hdmi-a-3.html [146]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-7/igt@kms_plane_lowres@tiling-x@pipe-c-hdmi-a-3.html * igt@kms_pm_dc@dc5-dpms: - shard-lnl: [FAIL][147] ([Intel XE#718]) -> [PASS][148] [147]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-lnl-5/igt@kms_pm_dc@dc5-dpms.html [148]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-1/igt@kms_pm_dc@dc5-dpms.html * igt@xe_evict@evict-mixed-many-threads-small: - shard-bmg: [INCOMPLETE][149] ([Intel XE#6321]) -> [PASS][150] [149]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-4/igt@xe_evict@evict-mixed-many-threads-small.html [150]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-10/igt@xe_evict@evict-mixed-many-threads-small.html * igt@xe_exec_reset@gt-reset-stress: - shard-lnl: [DMESG-WARN][151] ([Intel XE#7023]) -> [PASS][152] [151]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-lnl-3/igt@xe_exec_reset@gt-reset-stress.html [152]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-lnl-3/igt@xe_exec_reset@gt-reset-stress.html * igt@xe_sriov_auto_provisioning@selfconfig-reprovision-reduce-numvfs: - shard-bmg: [FAIL][153] ([Intel XE#5937]) -> [PASS][154] +1 other test pass [153]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-3/igt@xe_sriov_auto_provisioning@selfconfig-reprovision-reduce-numvfs.html [154]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-9/igt@xe_sriov_auto_provisioning@selfconfig-reprovision-reduce-numvfs.html #### Warnings #### * igt@kms_cursor_legacy@flip-vs-cursor-atomic: - shard-bmg: [FAIL][155] ([Intel XE#6715]) -> [FAIL][156] ([Intel XE#4633]) [155]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8711/shard-bmg-8/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html [156]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/shard-bmg-4/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html [Intel XE#1091]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1091 [Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124 [Intel XE#1128]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1128 [Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178 [Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392 [Intel XE#1397]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1397 [Intel XE#1401]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1401 [Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406 [Intel XE#1407]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1407 [Intel XE#1420]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1420 [Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421 [Intel XE#1424]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1424 [Intel XE#1447]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1447 [Intel XE#1470]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1470 [Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489 [Intel XE#1729]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1729 [Intel XE#1745]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1745 [Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049 [Intel XE#2142]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2142 [Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234 [Intel XE#2245]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2245 [Intel XE#2248]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2248 [Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252 [Intel XE#2293]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2293 [Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311 [Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313 [Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314 [Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320 [Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321 [Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322 [Intel XE#2325]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2325 [Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327 [Intel XE#2372]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2372 [Intel XE#2380]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2380 [Intel XE#2391]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2391 [Intel XE#2414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2414 [Intel XE#2597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2597 [Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652 [Intel XE#2833]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2833 [Intel XE#2838]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2838 [Intel XE#2849]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2849 [Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850 [Intel XE#2853]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2853 [Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887 [Intel XE#2893]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2893 [Intel XE#2894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2894 [Intel XE#306]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/306 [Intel XE#307]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/307 [Intel XE#309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/309 [Intel XE#3278]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3278 [Intel XE#3304]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3304 [Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414 [Intel XE#3432]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3432 [Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367 [Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373 [Intel XE#3904]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3904 [Intel XE#4130]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4130 [Intel XE#4141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4141 [Intel XE#4329]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4329 [Intel XE#4354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4354 [Intel XE#4459]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4459 [Intel XE#4537]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4537 [Intel XE#4608]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4608 [Intel XE#4609]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4609 [Intel XE#4633]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4633 [Intel XE#4650]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4650 [Intel XE#4692]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4692 [Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837 [Intel XE#4943]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4943 [Intel XE#5100]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5100 [Intel XE#5299]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5299 [Intel XE#5625]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5625 [Intel XE#5681]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5681 [Intel XE#5742]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5742 [Intel XE#584]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/584 [Intel XE#5937]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5937 [Intel XE#6054]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6054 [Intel XE#6312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6312 [Intel XE#6321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6321 [Intel XE#6503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6503 [Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651 [Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656 [Intel XE#6569]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6569 [Intel XE#6599]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6599 [Intel XE#6652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6652 [Intel XE#6665]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6665 [Intel XE#6681]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6681 [Intel XE#6715]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6715 [Intel XE#6874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6874 [Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688 [Intel XE#6886]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6886 [Intel XE#6900]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6900 [Intel XE#6911]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6911 [Intel XE#6912]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6912 [Intel XE#6964]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6964 [Intel XE#6974]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6974 [Intel XE#701]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/701 [Intel XE#7023]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7023 [Intel XE#7030]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7030 [Intel XE#7061]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7061 [Intel XE#7063]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7063 [Intel XE#718]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/718 [Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787 [Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944 [Intel XE#977]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/977 Build changes ------------- * IGT: IGT_8711 -> IGTPW_14394 * Linux: xe-4432-ad2a046603cba140214aed34015ed5027441e85a -> xe-4433-40800011414446888105f6beae6dd3fac56516aa IGTPW_14394: d48f7c6331b0ce4d380643b336f9c81e49aa8b7a @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git IGT_8711: 38428617bae65b39b306f79217ac922ebee3b477 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git xe-4432-ad2a046603cba140214aed34015ed5027441e85a: ad2a046603cba140214aed34015ed5027441e85a xe-4433-40800011414446888105f6beae6dd3fac56516aa: 40800011414446888105f6beae6dd3fac56516aa == Logs == For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_14394/index.html [-- Attachment #2: Type: text/html, Size: 52140 bytes --] ^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2026-02-25 23:18 UTC | newest] Thread overview: 12+ messages (download: mbox.gz follow: Atom feed -- links below jump to the message on this page -- 2026-01-22 7:15 [PATCH v2 0/3] lib/intel: switch graphics/IP version queries to fd-based APIs Xin Wang 2026-01-22 7:15 ` [PATCH v2 1/3] lib/intel: suffix PCI ID based gen/graphics_ver with _legacy Xin Wang 2026-02-04 18:30 ` Matt Roper 2026-01-22 7:15 ` [PATCH v2 2/3] lib/intel: add fd-based intel_gen/intel_graphics_ver via Xe query Xin Wang 2026-02-05 9:09 ` Jani Nikula 2026-01-22 7:15 ` [PATCH v2 3/3] intel/xe: use fd-based graphics/IP version helpers Xin Wang 2026-02-04 18:56 ` Matt Roper 2026-02-25 8:51 ` Wang, X 2026-02-25 23:18 ` Matt Roper 2026-01-22 8:01 ` ✓ i915.CI.BAT: success for lib/intel: switch graphics/IP version queries to fd-based APIs (rev2) Patchwork 2026-01-22 8:04 ` ✓ Xe.CI.BAT: " Patchwork 2026-01-22 18:13 ` ✗ Xe.CI.Full: failure " Patchwork
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox