messages from 2025-09-18 14:18:13 to 2025-09-20 17:18:47 UTC
[PATCH 0/5] drm/i915/dp: Work around a DSC pixel throughput issue
2025-09-20 17:18 UTC (10+ messages)
` [PATCH 1/5] drm/dp: Add quirk for Synaptics DSC throughput link-bpp limit
` [PATCH 2/5] drm/i915/dp: Calculate DSC slice count based on per-slice peak throughput
` [PATCH 3/5] drm/i915/dp: Pass DPCD device descriptor to intel_dp_get_dsc_sink_cap()
` [PATCH 4/5] drm/i915/dp: Verify branch devices' overall pixel throughput/line width
` [PATCH 5/5] drm/i915/dp: Handle Synaptics DSC throughput link-bpp quirk
` ✗ CI.checkpatch: warning for drm/i915/dp: Work around a DSC pixel throughput issue
` ✓ CI.KUnit: success "
` ✓ Xe.CI.BAT: "
[drm-xe:drm-xe-next] BUILD SUCCESS d9b2623319fa20c2206754284291817488329648
2025-09-20 13:34 UTC
[PATCH] drm/i915: rename vlv_get_cck_clock() to vlv_clock_get_cck()
2025-09-20 6:13 UTC (5+ messages)
` ✗ CI.checkpatch: warning for "
` ✓ CI.KUnit: success "
` ✓ Xe.CI.BAT: "
` ✗ Xe.CI.Full: failure "
[PATCH v2 00/13] drm/i915: skl+ watermark/latency stuff
2025-09-20 4:31 UTC (18+ messages)
` [PATCH v2 01/13] drm/i915/dram: Also apply the 16Gb DIMM w/a for larger DRAM chips
` [PATCH v2 02/13] drm/i915: Apply the 16Gb DIMM w/a only for the platforms that need it
` [PATCH v2 03/13] drm/i915: Tweak the read latency fixup code
` [PATCH v2 04/13] drm/i915: Don't pass the latency array to {skl, mtl}_read_wm_latency()
` [PATCH v2 05/13] drm/i915: Move adjust_wm_latency() out from {mtl, skl}_read_wm_latency()
` [PATCH v2 06/13] drm/i915: Extract multiply_wm_latency() from skl_read_wm_latency()
` [PATCH v2 07/13] drm/i915: Extract increase_wm_latency()
` [PATCH v2 08/13] drm/i915: Use increase_wm_latency() for the 16Gb DIMM w/a
` [PATCH v2 09/13] drm/i915: Extract sanitize_wm_latency()
` [PATCH v2 10/13] drm/i915: Flatten sanitize_wm_latency() a bit
` [PATCH v2 11/13] drm/i915: Make wm latencies monotonic
` [PATCH v2 12/13] drm/i915: Print both the original and adjusted wm latencies
` [PATCH v2 13/13] drm/i915: Make sure wm block/lines are non-decreasing
` ✗ CI.checkpatch: warning for drm/i915: skl+ watermark/latency stuff (rev4)
` ✓ CI.KUnit: success "
` ✓ Xe.CI.BAT: "
` ✗ Xe.CI.Full: failure "
[PATCH 0/3] drm/i915: Fix skl+ watermark linetime stuff
2025-09-20 3:08 UTC (7+ messages)
` [PATCH 1/3] drm/i915: Use the correct pixel rate to compute wm line time
` [PATCH 2/3] drm/i915: Deobfuscate wm linetime calculation
` [PATCH 3/3] drm/i915: s/intel_get_linetime_us()/skl_wm_linetime_us()/
` ✓ CI.KUnit: success for drm/i915: Fix skl+ watermark linetime stuff
` ✓ Xe.CI.BAT: "
` ✗ Xe.CI.Full: failure "
[PATCH v2 0/3] drm/i915/xe3: Restrict PTL intel_encoder_is_c10phy() to only PHY A
2025-09-20 1:41 UTC (12+ messages)
` [PATCH v2 1/3] drm/pcids: Split PTL pciids group to make wcl subplatform
` [PATCH v2 2/3] drm/i915/display: Add definition for wcl as subplatform
` [PATCH v2 3/3] drm/i915/xe3: Restrict PTL intel_encoder_is_c10phy() to only PHY A
` ✗ CI.checkpatch: warning for drm/i915/xe3: Restrict PTL intel_encoder_is_c10phy() to only PHY A (rev2)
` ✓ CI.KUnit: success "
` ✗ CI.checksparse: warning "
` ✓ Xe.CI.BAT: success "
` ✗ Xe.CI.Full: failure "
[PATCH 0/2] drm/xe/debugfs: Small improvements
2025-09-20 0:52 UTC (8+ messages)
` [PATCH 1/2] drm/xe/debugfs: Make ggtt file per-tile
` [PATCH 2/2] drm/xe/debugfs: Improve .show() helper for GT-based attributes
` ✓ CI.KUnit: success for drm/xe/debugfs: Small improvements
` ✓ Xe.CI.BAT: "
` ✗ Xe.CI.Full: failure "
[PATCH 00/10] Introduce drm sharpness property
2025-09-19 23:32 UTC (16+ messages)
` [RESEND 01/10] drm/drm_crtc: Introduce sharpness strength property
` [RESEND 02/10] drm/i915/display: Introduce HAS_CASF for sharpness support
` [RESEND 03/10] drm/i915/display: Add strength and winsize register
` [RESEND 04/10] drm/i915/display: Add filter lut values
` [RESEND 05/10] drm/i915/display: Compute the scaler coefficients
` [RESEND 06/10] drm/i915/display: Add and compute scaler parameter
` [RESEND 07/10] drm/i915/display: Configure the second scaler
` [RESEND 08/10] drm/i915/display: Set and get the casf config
` [RESEND 09/10] drm/i915/display: Enable/disable casf
` [RESEND 10/10] drm/i915/display: Expose sharpness strength property
` ✗ CI.checkpatch: warning for Introduce drm sharpness property
` ✓ CI.KUnit: success "
` ✗ CI.checksparse: warning "
` ✓ Xe.CI.BAT: success "
` ✓ Xe.CI.Full: "
[PATCH] drm/xe/uapi: loosen used tracking restriction
2025-09-19 23:05 UTC (5+ messages)
` ✓ CI.KUnit: success for "
` ✓ Xe.CI.Full: "
[PATCH v2 00/15] drm/i915: vlv clock cleanups
2025-09-19 22:00 UTC (4+ messages)
[PATCH v3 0/8] [ANDROID]: Add GPU work period support for Xe driver
2025-09-19 18:38 UTC (9+ messages)
` [PATCH v3 1/8] Add a new xe_user structure
` [PATCH v3 2/8] Add xe_gt_clock_interval_to_ns function
` [PATCH v3 3/8] drm/xe: Add a trace point for GPU work period
` [PATCH v3 4/8] drm/xe: Modify xe_exec_queue_update_run_ticks
` [PATCH v3 5/8] Handle xe_user creation and removal
` [PATCH v3 6/8] drm/xe: Implement xe_work_period_worker
` [PATCH v3 7/8] drm/xe: Add a Kconfig option for GPU work period
` [PATCH v3 8/8] Handle xe_work_period destruction
[PATCH 0/6] drm/i915/irq: display irq refactoring
2025-09-19 18:42 UTC (12+ messages)
` [PATCH 1/6] drm/i915/irq: drop intel_psr_regs.h include
` [PATCH 2/6] drm/i915/irq: initialize gen2_imr_mask in terms of enable_mask
` [PATCH 3/6] drm/i915/irq: abstract i9xx_display_irq_enable_mask()
` [PATCH 4/6] drm/i915/irq: move check for HAS_HOTPLUG() inside i9xx_hpd_irq_ack()
` [PATCH 5/6] drm/i915/irq: change ILK irq handling order
` [PATCH 6/6] drm/i915/irq: split ILK display irq handling
` ✗ CI.checkpatch: warning for drm/i915/irq: display irq refactoring
` ✓ CI.KUnit: success "
` ✗ Xe.CI.Full: failure "
[PATCH] drm/xe/pf: Keep VF LMEM BAR size low if no VFs enabled
2025-09-19 16:22 UTC (6+ messages)
` ✓ CI.KUnit: success for "
` ✓ Xe.CI.BAT: "
` ✗ Xe.CI.Full: failure "
[PULL] drm-xe-next
2025-09-19 14:53 UTC
[PATCH 00/10] Introduce drm sharpness property
2025-09-19 13:01 UTC (3+ messages)
linux-next: manual merge of the drm-xe tree with the drm-fixes tree
2025-09-19 12:49 UTC
linux-next: manual merge of the drm-xe tree with the drm-fixes tree
2025-09-19 12:45 UTC
[PATCH v4 0/5] drm/xe/sriov: Don't migrate dmabuf BO to System RAM if P2P check succeeds
2025-09-19 12:29 UTC (7+ messages)
` [PATCH v4 1/5] PCI/P2PDMA: Don't enforce ACS check for device functions of Intel GPUs
[PATCH 0/2] Suspend improvements
2025-09-19 11:04 UTC (9+ messages)
` [PATCH 1/2] drm/xe/pm: Hold the validation lock around evicting user-space bos for suspend
` [PATCH 2/2] drm/xe/pm: Add lockdep annotation for the pm_block completion
` ✓ CI.KUnit: success for Suspend improvements
` ✓ Xe.CI.BAT: "
` ✗ Xe.CI.Full: failure "
[PATCH 0/2] drm/i915/vrr: Hide even more ICL/TGL weirdness
2025-09-19 10:49 UTC (7+ messages)
` [PATCH 1/2] drm/i915/vrr: Hide the ICL/TGL intel_vrr_flipline_offset() mangling better
` [PATCH 2/2] drm/i915/vrr: s/intel_vrr_flipline_offset/intel_vrr_vmin_flipline_offset/
` ✓ CI.KUnit: success for drm/i915/vrr: Hide even more ICL/TGL weirdness
` ✓ Xe.CI.BAT: "
[PATCH v2] drm/i915/ddi: Guard reg_val against a INVALID_TRANSCODER
2025-09-19 10:27 UTC (2+ messages)
[PATCH] drm/i915/display: Set SPREAD_AMP bit to enable SSC
2025-09-19 10:26 UTC (4+ messages)
` ✓ CI.KUnit: success for "
` ✓ Xe.CI.BAT: "
[PATCH 0/2] Allow configfs to disable specific GT type(s)
2025-09-19 9:32 UTC (4+ messages)
` [PATCH 1/2] drm/xe/huc: Adjust HuC check on primary GT
` [PATCH 2/2] drm/xe/configfs: Add attribute to disable GT types
[PATCH] drm/i915/vrr: Refactor VRR live status wait into common helper
2025-09-19 9:04 UTC (2+ messages)
[PATCH 5/5] drm/i915/irq: add ilk_display_irq_reset()
2025-09-19 7:17 UTC (4+ messages)
` [PATCH v3] "
[RFC PATCH] drm/xe/dma-buf: Allow pinning of p2p dma-buf
2025-09-19 7:11 UTC (7+ messages)
[PATCH] drm/xe: Fix build with CONFIG_MODULES=n
2025-09-19 7:00 UTC (2+ messages)
[PATCH v2 00/10] drm/{i915,xe}/fbdev: refactor
2025-09-19 7:00 UTC (23+ messages)
` [PATCH v2 01/10] drm/xe/fbdev: use the same 64-byte stride alignment as i915
` [PATCH v2 02/10] drm/i915/fbdev: make intel_framebuffer_create() error return handling explicit
` [PATCH v2 03/10] drm/{i915, xe}/fbdev: pass struct drm_device to intel_fbdev_fb_alloc()
` [PATCH v2 03/10] drm/{i915,xe}/fbdev: "
` [PATCH v2 04/10] drm/{i915, xe}/fbdev: deduplicate struct drm_mode_fb_cmd2 init
` [PATCH v2 04/10] drm/{i915,xe}/fbdev: "
` [PATCH v2 05/10] drm/i915/fbdev: abstract bo creation
` [PATCH v2 06/10] drm/xe/fbdev: "
` [PATCH v2 07/10] drm/{i915, xe}/fbdev: add intel_fbdev_fb_bo_destroy()
` [PATCH v2 07/10] drm/{i915,xe}/fbdev: "
` [PATCH v2 08/10] drm/{i915,xe}/fbdev: deduplicate fbdev creation
` [PATCH v2 09/10] drm/{i915, xe}/fbdev: pass struct drm_device to intel_fbdev_fb_fill_info()
` [PATCH v2 09/10] drm/{i915,xe}/fbdev: "
` [PATCH v2 10/10] drm/i915/fbdev: drop dependency on display in i915 specific code
` ✓ Xe.CI.Full: success for drm/{i915,xe}/fbdev: refactor (rev2)
[PATCH 0/2] drm/xe: Fix some rebar issues
2025-09-19 6:32 UTC (7+ messages)
` [PATCH 1/2] PCI: Release BAR0 of an integrated bridge to allow GPU BAR resize
` [PATCH 2/2] drm/xe: Move rebar to be done earlier
` ✗ CI.checkpatch: warning for drm/xe: Fix some rebar issues (rev2)
` ✓ CI.KUnit: success "
` ✓ Xe.CI.BAT: "
` ✗ Xe.CI.Full: failure "
[drm-xe:drm-xe-fixes] BUILD SUCCESS 26caeae9fb482ec443753b4e3307e5122b60b850
2025-09-19 1:09 UTC
[PATCH v1 0/5] drm/xe: add VM_BIND DECOMPRESS support and on‑demand decompression
2025-09-19 0:37 UTC (6+ messages)
` [PATCH v1 4/5] drm/xe: implement VM_BIND decompression in vm_bind_ioctl
` ✓ CI.KUnit: success for drm/xe: add VM_BIND DECOMPRESS support and on‑demand decompression
` ✓ Xe.CI.BAT: "
` ✓ Xe.CI.Full: "
[PATCH] drm/xe/guc_pc: Add ignore efficient frequency support
2025-09-19 0:24 UTC (4+ messages)
[PATCH 0/5] drm/i915/irq: clarify and refactor ->irq_mask
2025-09-18 23:11 UTC (17+ messages)
` [PATCH 1/5] drm/i915/irq: use a dedicated IMR cache for VLV/CHV
` [PATCH 2/5] drm/i915/irq: use a dedicated IMR cache for gen 5-7
` [PATCH 3/5] drm/i915/irq: rename irq_mask to gen2_imr_mask
` [PATCH 4/5] drm/i915/irq: rename de_irq_mask[] to de_pipe_imr_mask[]
` ✓ Xe.CI.BAT: success for drm/i915/irq: clarify and refactor ->irq_mask (rev2)
` ✗ CI.checkpatch: warning for drm/i915/irq: clarify and refactor ->irq_mask (rev3)
` ✓ CI.KUnit: success "
` ✗ CI.checksparse: warning "
` ✓ Xe.CI.BAT: success "
` ✗ Xe.CI.Full: failure for drm/i915/irq: clarify and refactor ->irq_mask
` ✓ Xe.CI.Full: success for drm/i915/irq: clarify and refactor ->irq_mask (rev2)
` ✗ Xe.CI.Full: failure for drm/i915/irq: clarify and refactor ->irq_mask (rev3)
[PATCH v5 0/7] drm/xe: Add user commands to WA BB via configfs
2025-09-18 21:47 UTC (2+ messages)
[PATCH v1 0/2] Drop redundant runtime PM usage
2025-09-18 20:20 UTC (5+ messages)
` [PATCH v1 2/2] drm/xe/sysfs: "
` ✓ Xe.CI.Full: success for "
[PULL] drm-intel-next v2 of drm-intel-next-2025-09-12
2025-09-18 19:05 UTC
[PATCH v2] drm/xe/i2c: Don't rely on d3cold.allowed flag in system PM path
2025-09-18 18:52 UTC (3+ messages)
` ✗ Xe.CI.Full: failure for "
[PATCH 0/5] drm/i915/vrr: Hide icl/tgl idiosyncrasies better
2025-09-18 18:00 UTC (9+ messages)
` [PATCH 3/5] drm/i915/vrr: Store guardband in crtc state even for icl/tgl
` [PATCH 5/5] drm/i915/vrr: Move the TGL SCL mangling of vmin/vmax/flipline deeper
[PATCH v2 0/3] drm/xe: Allow pinning of VRAM dma-bufs
2025-09-18 17:28 UTC (4+ messages)
` [PATCH v2 3/3] drm/xe/dma-buf: Allow pinning of p2p dma-buf
` ✗ Xe.CI.Full: failure for drm/xe: Allow pinning of VRAM dma-bufs
[PATCH v9 0/9] Introducing firmware late binding
2025-09-18 16:27 UTC (8+ messages)
` [PATCH v9 2/9] mei: late_bind: add late binding component driver
[PATCH 0/3] drm/xe: Fix some rebar issues
2025-09-18 16:15 UTC (3+ messages)
` [PATCH 3/3] drm/xe: Move rebar to its own file
[PATCH 0/2] MADVISE to support multiple PAT indeces
2025-09-18 14:52 UTC (2+ messages)
` ✗ Xe.CI.Full: failure for "