* linux-next: manual merge of the drm tree with the origin tree
@ 2024-06-27 15:06 Mark Brown
0 siblings, 0 replies; 16+ messages in thread
From: Mark Brown @ 2024-06-27 15:06 UTC (permalink / raw)
To: Dave Airlie, DRI
Cc: Alex Deucher, Dillon Varone, Linux Kernel Mailing List,
Linux Next Mailing List, Michael Strauss
[-- Attachment #1: Type: text/plain, Size: 1002 bytes --]
Hi all,
Today's linux-next merge of the drm tree got a conflict in:
drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
between commit:
c03d770c0b014 ("drm/amd/display: Attempt to avoid empty TUs when endpoint is DPIA")
from the origin tree and commit:
0127f0445f7c1 ("drm/amd/display: Refactor input mode programming for DIG FIFO")
from the drm tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
diff --cc drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
index 199781233fd5f,428912f371291..0000000000000
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]
^ permalink raw reply [flat|nested] 16+ messages in thread
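For readers unfamiliar with the "diff --cc" output quoted in these reports: it is git's combined-diff format, produced for merge commits, with one column of +/- markers per parent. A minimal sketch of generating one follows; the repository layout, branch names, and file contents are illustrative stand-ins, not the actual kernel trees:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main .                       # -b needs git >= 2.28
git config user.email you@example.com && git config user.name you
echo base > dcn35_init.c && git add . && git commit -qm base
git checkout -qb drm
echo drm-side > dcn35_init.c && git commit -qam drm
git checkout -q main
echo origin-side > dcn35_init.c && git commit -qam origin
git merge drm >/dev/null 2>&1 || true       # conflict, as in the report
echo merged-by-hand > dcn35_init.c          # manual resolution
git add . && git commit -qm "Merge branch 'drm'"
git log -1 -p --cc                          # shows "diff --cc dcn35_init.c"
```

In the combined diff, a line marked "++" was introduced by the resolution itself and belongs to neither parent, which is why these reports quote the hunks: they show exactly what the fix-up did.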
* linux-next: manual merge of the drm tree with the origin tree
@ 2024-06-28 16:51 Mark Brown
From: Mark Brown @ 2024-06-28 16:51 UTC (permalink / raw)
To: Dave Airlie, DRI
Cc: Linux Kernel Mailing List, Linux Next Mailing List, Riana Tauro,
Rodrigo Vivi, Thomas Hellström
Hi all,
Today's linux-next merge of the drm tree got a conflict in:
drivers/gpu/drm/xe/xe_gt_idle.c
between commit:
2470b141bfae2 ("drm/xe: move disable_c6 call")
from the origin tree and commits:
6800e63cf97ba ("drm/xe: move disable_c6 call")
38e8c4184ea0e ("drm/xe: Enable Coarse Power Gating")
ecab82af27873 ("drm/xe/vf: Don't support gtidle if VF")
from the drm tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
diff --cc drivers/gpu/drm/xe/xe_gt_idle.c
index 944770fb2daff,67aba41405100..0000000000000
--- a/drivers/gpu/drm/xe/xe_gt_idle.c
+++ b/drivers/gpu/drm/xe/xe_gt_idle.c
* linux-next: manual merge of the drm tree with the origin tree
@ 2025-09-26 12:38 Mark Brown
From: Mark Brown @ 2025-09-26 12:38 UTC (permalink / raw)
To: Dave Airlie, DRI
Cc: Linux Kernel Mailing List, Linux Next Mailing List,
Lucas De Marchi, Michal Wajdeczko, Rodrigo Vivi, Zongyao Bai
Hi all,
Today's linux-next merge of the drm tree got a conflict in:
drivers/gpu/drm/xe/xe_device_sysfs.c
between commits:
ff89a4d285c82 ("drm/xe/sysfs: Add cleanup action in xe_device_sysfs_init")
500dad428e5b0 ("drm/xe/vf: Don't expose sysfs attributes not applicable for VFs")
from the origin tree and commits:
1a869168d91f1 ("drm/xe/sysfs: Add cleanup action in xe_device_sysfs_init")
a2d6223d224f3 ("drm/xe/vf: Don't expose sysfs attributes not applicable for VFs")
fb3c27a69c473 ("drm/xe/sysfs: Simplify sysfs registration")
from the drm tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
diff --cc drivers/gpu/drm/xe/xe_device_sysfs.c
index 927ee7991696b,c5151c86a98ae..0000000000000
--- a/drivers/gpu/drm/xe/xe_device_sysfs.c
+++ b/drivers/gpu/drm/xe/xe_device_sysfs.c
* linux-next: manual merge of the drm tree with the origin tree
@ 2025-10-02 12:05 Mark Brown
2025-10-02 12:30 ` Danilo Krummrich
From: Mark Brown @ 2025-10-02 12:05 UTC (permalink / raw)
To: Dave Airlie, DRI
Cc: Alice Ryhl, Andrew Morton, Danilo Krummrich,
Linux Kernel Mailing List, Linux Next Mailing List, Vitaly Wool
Hi all,
Today's linux-next merge of the drm tree got a conflict in:
rust/kernel/alloc/allocator.rs
between commit:
1b1a946dc2b53 ("rust: alloc: specify the minimum alignment of each allocator")
from the origin tree and commits:
1738796994a43 ("rust: support large alignments in allocations")
8e92c9902ff11 ("rust: alloc: vmalloc: implement Vmalloc::to_page()")
from the drm tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
diff --cc rust/kernel/alloc/allocator.rs
index 6426ba54cf98d,84ee7e9d7b0eb..0000000000000
--- a/rust/kernel/alloc/allocator.rs
+++ b/rust/kernel/alloc/allocator.rs
@@@ -13,11 -13,14 +13,15 @@@ use core::alloc::Layout
use core::ptr;
use core::ptr::NonNull;
-use crate::alloc::{AllocError, Allocator};
+use crate::alloc::{AllocError, Allocator, NumaNode};
use crate::bindings;
+ use crate::page;
-use crate::pr_warn;
+
+const ARCH_KMALLOC_MINALIGN: usize = bindings::ARCH_KMALLOC_MINALIGN;
+ mod iter;
+ pub use self::iter::VmallocPageIter;
+
/// The contiguous kernel allocator.
///
/// `Kmalloc` is typically used for physically contiguous allocations up to page size, but also
* linux-next: manual merge of the drm tree with the origin tree
@ 2025-10-02 12:07 Mark Brown
From: Mark Brown @ 2025-10-02 12:07 UTC (permalink / raw)
To: Dave Airlie, DRI
Cc: Linux Kernel Mailing List, Linux Next Mailing List, Rob Herring,
Thomas Zimmermann, Wig Cheng
Hi all,
Today's linux-next merge of the drm tree got a conflict in:
Documentation/devicetree/bindings/vendor-prefixes.yaml
between commit:
4ed46073274a5 ("dt-bindings: vendor-prefixes: Add undocumented vendor prefixes")
from the origin tree and commit:
09b26dce32f0d ("dt-bindings: vendor-prefixes: Add Mayqueen name")
from the drm tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
diff --cc Documentation/devicetree/bindings/vendor-prefixes.yaml
index 7aa17199ea434,49a5117d2bbb0..0000000000000
--- a/Documentation/devicetree/bindings/vendor-prefixes.yaml
+++ b/Documentation/devicetree/bindings/vendor-prefixes.yaml
@@@ -965,8 -935,8 +967,10 @@@ patternProperties
description: Maxim Integrated Products
"^maxlinear,.*":
description: MaxLinear Inc.
+ "^maxtor,.*":
+ description: Maxtor Corporation
+ "^mayqueen,.*":
+ description: Mayqueen Technologies Ltd.
"^mbvl,.*":
description: Mobiveil Inc.
"^mcube,.*":
* Re: linux-next: manual merge of the drm tree with the origin tree
2025-10-02 12:05 Mark Brown
@ 2025-10-02 12:30 ` Danilo Krummrich
From: Danilo Krummrich @ 2025-10-02 12:30 UTC (permalink / raw)
To: Mark Brown
Cc: Dave Airlie, DRI, Alice Ryhl, Andrew Morton,
Linux Kernel Mailing List, Linux Next Mailing List, Vitaly Wool
On 10/2/25 2:05 PM, Mark Brown wrote:
> Hi all,
>
> Today's linux-next merge of the drm tree got a conflict in:
(I think this was already a conflict between the DRM tree and the MM tree before this.)
The resolution looks good to me, thanks!
- Danilo
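When the same resolution has to be carried forward day after day, as the "can carry the fix as necessary" boilerplate above implies, git's rerere ("reuse recorded resolution") machinery can replay it automatically. A minimal sketch, with illustrative file names and contents:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main .                       # -b needs git >= 2.28
git config user.email you@example.com && git config user.name you
git config rerere.enabled true
echo base > allocator.rs && git add . && git commit -qm base
git checkout -qb drm
echo drm-side > allocator.rs && git commit -qam drm
git checkout -q main
echo origin-side > allocator.rs && git commit -qam origin
git merge drm >/dev/null 2>&1 || true       # conflict; rerere records the preimage
echo resolved > allocator.rs                # resolve by hand once
git add . && git commit -qm merge           # rerere records the postimage
git reset -q --hard HEAD~1                  # drop the merge, e.g. for tomorrow's tree
git merge drm >/dev/null 2>&1 || true       # rerere replays the recorded resolution
cat allocator.rs                            # prints "resolved"
```

After the replay the resolution sits unstaged in the working tree for review; setting rerere.autoUpdate would also stage it.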
* linux-next: manual merge of the drm tree with the origin tree
@ 2026-01-19 16:53 Mark Brown
From: Mark Brown @ 2026-01-19 16:53 UTC (permalink / raw)
To: Dave Airlie, DRI
Cc: Linux Kernel Mailing List, Linux Next Mailing List, Matthew Brost,
Thomas Hellström
Hi all,
Today's linux-next merge of the drm tree got a conflict in:
drivers/gpu/drm/drm_pagemap.c
between commit:
754c232384386 ("drm/pagemap, drm/xe: Ensure that the devmem allocation is idle before use")
from the origin tree and commits:
75af93b3f5d0a ("drm/pagemap, drm/xe: Support destination migration over interconnect")
ec265e1f1cfcc ("drm/pagemap: Support source migration over interconnect")
3902846af36be ("drm/pagemap Fix error paths in drm_pagemap_migrate_to_devmem")
from the drm tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
diff --cc drivers/gpu/drm/drm_pagemap.c
index 06c1bd8fc4d17,03ee39a761a41..0000000000000
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
* linux-next: manual merge of the drm tree with the origin tree
@ 2026-01-19 17:03 Mark Brown
From: Mark Brown @ 2026-01-19 17:03 UTC (permalink / raw)
To: Dave Airlie, DRI
Cc: Alex Deucher, Felix Kuehling, Haoxiang Li,
Linux Kernel Mailing List, Linux Next Mailing List, Philip Yang
Hi all,
Today's linux-next merge of the drm tree got a conflict in:
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
between commit:
80614c509810f ("drm/amdkfd: fix a memory leak in device_queue_manager_init()")
from the origin tree and commit:
0cba5b27f1924 ("drm/amdkfd: Add domain parameter to alloc kernel BO")
from the drm tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
diff --cc drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index 625ea8ab7a749,b542de9d50d11..0000000000000
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
* linux-next: manual merge of the drm tree with the origin tree
@ 2026-02-02 14:29 Mark Brown
From: Mark Brown @ 2026-02-02 14:29 UTC (permalink / raw)
To: Dave Airlie, DRI
Cc: Alex Deucher, Linux Kernel Mailing List, Linux Next Mailing List
Hi all,
Today's linux-next merge of the drm tree got conflicts in:
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
between commits:
3eb46fbb601f9 ("drm/amdgpu/gfx11: adjust KGQ reset sequence")
dfd64f6e8cd7b ("drm/amdgpu/gfx12: adjust KGQ reset sequence")
from the origin tree and commits:
b340ff216fdab ("drm/amdgpu/gfx11: adjust KGQ reset sequence")
0a6d6ed694d72 ("drm/amdgpu/gfx12: adjust KGQ reset sequence")
from the drm tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
diff --combined drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index e642236ea2c51,427975b5a1d97..0000000000000
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@@ -120,6 -120,10 +120,10 @@@ MODULE_FIRMWARE("amdgpu/gc_11_5_3_pfp.b
MODULE_FIRMWARE("amdgpu/gc_11_5_3_me.bin");
MODULE_FIRMWARE("amdgpu/gc_11_5_3_mec.bin");
MODULE_FIRMWARE("amdgpu/gc_11_5_3_rlc.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_5_4_pfp.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_5_4_me.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_5_4_mec.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_5_4_rlc.bin");
static const struct amdgpu_hwip_reg_entry gc_reg_list_11_0[] = {
SOC15_REG_ENTRY_STR(GC, 0, regGRBM_STATUS),
@@@ -416,7 -420,8 +420,8 @@@ static void gfx11_kiq_unmap_queues(stru
uint32_t eng_sel = ring->funcs->type == AMDGPU_RING_TYPE_GFX ? 4 : 0;
if (adev->enable_mes && !adev->gfx.kiq[0].ring.sched.ready) {
- amdgpu_mes_unmap_legacy_queue(adev, ring, action, gpu_addr, seq);
+ amdgpu_mes_unmap_legacy_queue(adev, ring, action,
+ gpu_addr, seq, 0);
return;
}
@@@ -566,8 -571,8 +571,8 @@@ static int gfx_v11_0_ring_test_ring(str
WREG32(scratch, 0xCAFEDEAD);
r = amdgpu_ring_alloc(ring, 5);
if (r) {
- DRM_ERROR("amdgpu: cp failed to lock ring %d (%d).\n",
- ring->idx, r);
+ drm_err(adev_to_drm(adev), "cp failed to lock ring %d (%d).\n",
+ ring->idx, r);
return r;
}
@@@ -623,7 -628,7 +628,7 @@@ static int gfx_v11_0_ring_test_ib(struc
r = amdgpu_ib_get(adev, NULL, 20, AMDGPU_IB_POOL_DIRECT, &ib);
if (r) {
- DRM_ERROR("amdgpu: failed to get ib (%ld).\n", r);
+ drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r);
goto err1;
}
@@@ -917,7 -922,7 +922,7 @@@ static int gfx_v11_0_rlc_init(struct am
/* init spm vmid with 0xf */
if (adev->gfx.rlc.funcs->update_spm_vmid)
- adev->gfx.rlc.funcs->update_spm_vmid(adev, NULL, 0xf);
+ adev->gfx.rlc.funcs->update_spm_vmid(adev, 0, NULL, 0xf);
return 0;
}
@@@ -1052,10 -1057,14 +1057,14 @@@ static void gfx_v11_0_select_me_pipe_q(
static void gfx_v11_0_get_gfx_shadow_info_nocheck(struct amdgpu_device *adev,
struct amdgpu_gfx_shadow_info *shadow_info)
{
+ /* for gfx */
shadow_info->shadow_size = MQD_SHADOW_BASE_SIZE;
shadow_info->shadow_alignment = MQD_SHADOW_BASE_ALIGNMENT;
shadow_info->csa_size = MQD_FWWORKAREA_SIZE;
shadow_info->csa_alignment = MQD_FWWORKAREA_ALIGNMENT;
+ /* for compute */
+ shadow_info->eop_size = GFX11_MEC_HPD_SIZE;
+ shadow_info->eop_alignment = 256;
}
static int gfx_v11_0_get_gfx_shadow_info(struct amdgpu_device *adev,
@@@ -1080,6 -1089,7 +1089,7 @@@ static const struct amdgpu_gfx_funcs gf
.select_me_pipe_q = &gfx_v11_0_select_me_pipe_q,
.update_perfmon_mgcg = &gfx_v11_0_update_perf_clk,
.get_gfx_shadow_info = &gfx_v11_0_get_gfx_shadow_info,
+ .get_hdp_flush_mask = &amdgpu_gfx_get_hdp_flush_mask,
};
static int gfx_v11_0_gpu_early_init(struct amdgpu_device *adev)
@@@ -1107,6 -1117,7 +1117,7 @@@
case IP_VERSION(11, 5, 1):
case IP_VERSION(11, 5, 2):
case IP_VERSION(11, 5, 3):
+ case IP_VERSION(11, 5, 4):
adev->gfx.config.max_hw_contexts = 8;
adev->gfx.config.sc_prim_fifo_size_frontend = 0x20;
adev->gfx.config.sc_prim_fifo_size_backend = 0x100;
@@@ -1589,6 -1600,7 +1600,7 @@@ static int gfx_v11_0_sw_init(struct amd
case IP_VERSION(11, 5, 1):
case IP_VERSION(11, 5, 2):
case IP_VERSION(11, 5, 3):
+ case IP_VERSION(11, 5, 4):
adev->gfx.me.num_me = 1;
adev->gfx.me.num_pipe_per_me = 1;
adev->gfx.me.num_queue_per_pipe = 2;
@@@ -3046,7 -3058,8 +3058,8 @@@ static int gfx_v11_0_wait_for_rlc_autol
amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(11, 5, 0) ||
amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(11, 5, 1) ||
amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(11, 5, 2) ||
- amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(11, 5, 3))
+ amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(11, 5, 3) ||
+ amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(11, 5, 4))
bootload_status = RREG32_SOC15(GC, 0,
regRLC_RLCS_BOOTLOAD_STATUS_gc_11_0_1);
else
@@@ -3617,7 -3630,7 +3630,7 @@@ static int gfx_v11_0_cp_gfx_start(struc
ring = &adev->gfx.gfx_ring[0];
r = amdgpu_ring_alloc(ring, gfx_v11_0_get_csb_size(adev));
if (r) {
- DRM_ERROR("amdgpu: cp failed to lock ring (%d).\n", r);
+ drm_err(&adev->ddev, "cp failed to lock ring (%d).\n", r);
return r;
}
@@@ -3662,7 -3675,7 +3675,7 @@@
ring = &adev->gfx.gfx_ring[1];
r = amdgpu_ring_alloc(ring, 2);
if (r) {
- DRM_ERROR("amdgpu: cp failed to lock ring (%d).\n", r);
+ drm_err(adev_to_drm(adev), "cp failed to lock ring (%d).\n", r);
return r;
}
@@@ -4593,7 -4606,7 +4606,7 @@@ static int gfx_v11_0_cp_resume(struct a
}
if (adev->enable_mes_kiq && adev->mes.kiq_hw_init)
- r = amdgpu_mes_kiq_hw_init(adev);
+ r = amdgpu_mes_kiq_hw_init(adev, 0);
else
r = gfx_v11_0_kiq_resume(adev);
if (r)
@@@ -4783,7 -4796,7 +4796,7 @@@ static int gfx_v11_0_hw_init(struct amd
adev->gfx.is_poweron = true;
if(get_gb_addr_config(adev))
- DRM_WARN("Invalid gb_addr_config !\n");
+ drm_warn(adev_to_drm(adev), "Invalid gb_addr_config !\n");
if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP &&
adev->gfx.rs64_enable)
@@@ -4901,7 -4914,7 +4914,7 @@@ static int gfx_v11_0_hw_fini(struct amd
if (amdgpu_gfx_disable_kcq(adev, 0))
DRM_ERROR("KCQ disable failed\n");
- amdgpu_mes_kiq_hw_fini(adev);
+ amdgpu_mes_kiq_hw_fini(adev, 0);
}
if (amdgpu_sriov_vf(adev))
@@@ -5568,7 -5581,8 +5581,8 @@@ static int gfx_v11_0_update_gfx_clock_g
return 0;
}
- static void gfx_v11_0_update_spm_vmid(struct amdgpu_device *adev, struct amdgpu_ring *ring, unsigned vmid)
+ static void gfx_v11_0_update_spm_vmid(struct amdgpu_device *adev, int xcc_id,
+ struct amdgpu_ring *ring, unsigned vmid)
{
u32 reg, pre_data, data;
@@@ -5633,6 -5647,7 +5647,7 @@@ static void gfx_v11_cntl_power_gating(s
case IP_VERSION(11, 5, 1):
case IP_VERSION(11, 5, 2):
case IP_VERSION(11, 5, 3):
+ case IP_VERSION(11, 5, 4):
WREG32_SOC15(GC, 0, regRLC_PG_DELAY_3, RLC_PG_DELAY_3_DEFAULT_GC_11_0_1);
break;
default:
@@@ -5671,6 -5686,7 +5686,7 @@@ static int gfx_v11_0_set_powergating_st
case IP_VERSION(11, 5, 1):
case IP_VERSION(11, 5, 2):
case IP_VERSION(11, 5, 3):
+ case IP_VERSION(11, 5, 4):
if (!enable)
amdgpu_gfx_off_ctrl(adev, false);
@@@ -5705,6 -5721,7 +5721,7 @@@ static int gfx_v11_0_set_clockgating_st
case IP_VERSION(11, 5, 1):
case IP_VERSION(11, 5, 2):
case IP_VERSION(11, 5, 3):
+ case IP_VERSION(11, 5, 4):
gfx_v11_0_update_gfx_clock_gating(adev,
state == AMD_CG_STATE_GATE);
break;
@@@ -5831,25 -5848,13 +5848,13 @@@ static void gfx_v11_0_ring_emit_hdp_flu
{
struct amdgpu_device *adev = ring->adev;
u32 ref_and_mask, reg_mem_engine;
- const struct nbio_hdp_flush_reg *nbio_hf_reg = adev->nbio.hdp_flush_reg;
- if (ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE) {
- switch (ring->me) {
- case 1:
- ref_and_mask = nbio_hf_reg->ref_and_mask_cp2 << ring->pipe;
- break;
- case 2:
- ref_and_mask = nbio_hf_reg->ref_and_mask_cp6 << ring->pipe;
- break;
- default:
- return;
- }
- reg_mem_engine = 0;
- } else {
- ref_and_mask = nbio_hf_reg->ref_and_mask_cp0 << ring->pipe;
- reg_mem_engine = 1; /* pfp */
+ if (!adev->gfx.funcs->get_hdp_flush_mask) {
+ dev_err(adev->dev, "%s: gfx hdp flush is not supported.\n", __func__);
+ return;
}
+ adev->gfx.funcs->get_hdp_flush_mask(ring, &ref_and_mask, &reg_mem_engine);
gfx_v11_0_wait_reg_mem(ring, reg_mem_engine, 0, 1,
adev->nbio.funcs->get_hdp_flush_req_offset(adev),
adev->nbio.funcs->get_hdp_flush_done_offset(adev),
@@@ -6664,7 -6669,7 +6669,7 @@@ static int gfx_v11_0_bad_op_irq(struct
struct amdgpu_irq_src *source,
struct amdgpu_iv_entry *entry)
{
- DRM_ERROR("Illegal opcode in command stream \n");
+ DRM_ERROR("Illegal opcode in command stream\n");
gfx_v11_0_handle_priv_fault(adev, entry);
return 0;
}
@@@ -6828,7 -6833,7 +6833,7 @@@ static int gfx_v11_0_reset_kgq(struct a
amdgpu_ring_reset_helper_begin(ring, timedout_fence);
- r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, use_mmio);
+ r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, use_mmio, 0);
if (r) {
dev_warn(adev->dev, "reset via MES failed and try pipe reset %d\n", r);
@@@ -6844,7 -6849,7 +6849,7 @@@
return r;
}
- r = amdgpu_mes_map_legacy_queue(adev, ring);
+ r = amdgpu_mes_map_legacy_queue(adev, ring, 0);
if (r) {
dev_err(adev->dev, "failed to remap kgq\n");
return r;
@@@ -6993,7 -6998,7 +6998,7 @@@ static int gfx_v11_0_reset_kcq(struct a
amdgpu_ring_reset_helper_begin(ring, timedout_fence);
- r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, true);
+ r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, true, 0);
if (r) {
dev_warn(adev->dev, "fail(%d) to reset kcq and try pipe reset\n", r);
r = gfx_v11_0_reset_compute_pipe(ring);
@@@ -7006,7 -7011,7 +7011,7 @@@
dev_err(adev->dev, "fail to init kcq\n");
return r;
}
- r = amdgpu_mes_map_legacy_queue(adev, ring);
+ r = amdgpu_mes_map_legacy_queue(adev, ring, 0);
if (r) {
dev_err(adev->dev, "failed to remap kcq\n");
return r;
@@@ -7480,7 -7485,7 +7485,7 @@@ static int gfx_v11_0_get_cu_info(struc
if (!adev || !cu_info)
return -EINVAL;
- amdgpu_gfx_parse_disable_cu(disable_masks, 8, 2);
+ amdgpu_gfx_parse_disable_cu(adev, disable_masks, 8, 2);
mutex_lock(&adev->grbm_idx_mutex);
for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
diff --combined drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
index 4aab89a9ab401,79ea1af363a53..0000000000000
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
@@@ -355,7 -355,8 +355,8 @@@ static void gfx_v12_0_kiq_unmap_queues(
uint32_t eng_sel = ring->funcs->type == AMDGPU_RING_TYPE_GFX ? 4 : 0;
if (adev->enable_mes && !adev->gfx.kiq[0].ring.sched.ready) {
- amdgpu_mes_unmap_legacy_queue(adev, ring, action, gpu_addr, seq);
+ amdgpu_mes_unmap_legacy_queue(adev, ring, action,
+ gpu_addr, seq, 0);
return;
}
@@@ -458,8 -459,8 +459,8 @@@ static int gfx_v12_0_ring_test_ring(str
WREG32(scratch, 0xCAFEDEAD);
r = amdgpu_ring_alloc(ring, 5);
if (r) {
- dev_err(adev->dev,
- "amdgpu: cp failed to lock ring %d (%d).\n",
+ drm_err(adev_to_drm(adev),
+ "cp failed to lock ring %d (%d).\n",
ring->idx, r);
return r;
}
@@@ -516,7 -517,7 +517,7 @@@ static int gfx_v12_0_ring_test_ib(struc
r = amdgpu_ib_get(adev, NULL, 16, AMDGPU_IB_POOL_DIRECT, &ib);
if (r) {
- dev_err(adev->dev, "amdgpu: failed to get ib (%ld).\n", r);
+ drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r);
goto err1;
}
@@@ -760,7 -761,7 +761,7 @@@ static int gfx_v12_0_rlc_init(struct am
/* init spm vmid with 0xf */
if (adev->gfx.rlc.funcs->update_spm_vmid)
- adev->gfx.rlc.funcs->update_spm_vmid(adev, NULL, 0xf);
+ adev->gfx.rlc.funcs->update_spm_vmid(adev, 0, NULL, 0xf);
return 0;
}
@@@ -908,10 -909,14 +909,14 @@@ static void gfx_v12_0_select_me_pipe_q(
static void gfx_v12_0_get_gfx_shadow_info_nocheck(struct amdgpu_device *adev,
struct amdgpu_gfx_shadow_info *shadow_info)
{
+ /* for gfx */
shadow_info->shadow_size = MQD_SHADOW_BASE_SIZE;
shadow_info->shadow_alignment = MQD_SHADOW_BASE_ALIGNMENT;
shadow_info->csa_size = MQD_FWWORKAREA_SIZE;
shadow_info->csa_alignment = MQD_FWWORKAREA_ALIGNMENT;
+ /* for compute */
+ shadow_info->eop_size = GFX12_MEC_HPD_SIZE;
+ shadow_info->eop_alignment = 256;
}
static int gfx_v12_0_get_gfx_shadow_info(struct amdgpu_device *adev,
@@@ -936,6 -941,7 +941,7 @@@ static const struct amdgpu_gfx_funcs gf
.select_me_pipe_q = &gfx_v12_0_select_me_pipe_q,
.update_perfmon_mgcg = &gfx_v12_0_update_perf_clk,
.get_gfx_shadow_info = &gfx_v12_0_get_gfx_shadow_info,
+ .get_hdp_flush_mask = &amdgpu_gfx_get_hdp_flush_mask,
};
static int gfx_v12_0_gpu_early_init(struct amdgpu_device *adev)
@@@ -3469,7 -3475,7 +3475,7 @@@ static int gfx_v12_0_cp_resume(struct a
}
if (adev->enable_mes_kiq && adev->mes.kiq_hw_init)
- r = amdgpu_mes_kiq_hw_init(adev);
+ r = amdgpu_mes_kiq_hw_init(adev, 0);
else
r = gfx_v12_0_kiq_resume(adev);
if (r)
@@@ -3650,7 -3656,7 +3656,7 @@@ static int gfx_v12_0_hw_init(struct amd
adev->gfx.is_poweron = true;
if (get_gb_addr_config(adev))
- DRM_WARN("Invalid gb_addr_config !\n");
+ drm_warn(adev_to_drm(adev), "Invalid gb_addr_config !\n");
if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP)
gfx_v12_0_config_gfx_rs64(adev);
@@@ -3758,7 -3764,7 +3764,7 @@@ static int gfx_v12_0_hw_fini(struct amd
if (amdgpu_gfx_disable_kcq(adev, 0))
DRM_ERROR("KCQ disable failed\n");
- amdgpu_mes_kiq_hw_fini(adev);
+ amdgpu_mes_kiq_hw_fini(adev, 0);
}
if (amdgpu_sriov_vf(adev)) {
@@@ -3955,6 -3961,7 +3961,7 @@@ static void gfx_v12_0_update_perf_clk(s
}
static void gfx_v12_0_update_spm_vmid(struct amdgpu_device *adev,
+ int xcc_id,
struct amdgpu_ring *ring,
unsigned vmid)
{
@@@ -4386,25 -4393,13 +4393,13 @@@ static void gfx_v12_0_ring_emit_hdp_flu
{
struct amdgpu_device *adev = ring->adev;
u32 ref_and_mask, reg_mem_engine;
- const struct nbio_hdp_flush_reg *nbio_hf_reg = adev->nbio.hdp_flush_reg;
- if (ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE) {
- switch (ring->me) {
- case 1:
- ref_and_mask = nbio_hf_reg->ref_and_mask_cp2 << ring->pipe;
- break;
- case 2:
- ref_and_mask = nbio_hf_reg->ref_and_mask_cp6 << ring->pipe;
- break;
- default:
- return;
- }
- reg_mem_engine = 0;
- } else {
- ref_and_mask = nbio_hf_reg->ref_and_mask_cp0;
- reg_mem_engine = 1; /* pfp */
+ if (!adev->gfx.funcs->get_hdp_flush_mask) {
+ dev_err(adev->dev, "%s: gfx hdp flush is not supported.\n", __func__);
+ return;
}
+ adev->gfx.funcs->get_hdp_flush_mask(ring, &ref_and_mask, &reg_mem_engine);
gfx_v12_0_wait_reg_mem(ring, reg_mem_engine, 0, 1,
adev->nbio.funcs->get_hdp_flush_req_offset(adev),
adev->nbio.funcs->get_hdp_flush_done_offset(adev),
@@@ -5040,7 -5035,7 +5035,7 @@@ static int gfx_v12_0_bad_op_irq(struct
struct amdgpu_irq_src *source,
struct amdgpu_iv_entry *entry)
{
- DRM_ERROR("Illegal opcode in command stream \n");
+ DRM_ERROR("Illegal opcode in command stream\n");
gfx_v12_0_handle_priv_fault(adev, entry);
return 0;
}
@@@ -5302,7 -5297,7 +5297,7 @@@ static int gfx_v12_0_reset_kgq(struct a
amdgpu_ring_reset_helper_begin(ring, timedout_fence);
- r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, use_mmio);
+ r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, use_mmio, 0);
if (r) {
dev_warn(adev->dev, "reset via MES failed and try pipe reset %d\n", r);
r = gfx_v12_reset_gfx_pipe(ring);
@@@ -5317,7 -5312,7 +5312,7 @@@
return r;
}
- r = amdgpu_mes_map_legacy_queue(adev, ring);
+ r = amdgpu_mes_map_legacy_queue(adev, ring, 0);
if (r) {
dev_err(adev->dev, "failed to remap kgq\n");
return r;
@@@ -5419,7 -5414,7 +5414,7 @@@ static int gfx_v12_0_reset_kcq(struct a
amdgpu_ring_reset_helper_begin(ring, timedout_fence);
- r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, true);
+ r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, true, 0);
if (r) {
dev_warn(adev->dev, "fail(%d) to reset kcq and try pipe reset\n", r);
r = gfx_v12_0_reset_compute_pipe(ring);
@@@ -5432,7 -5427,7 +5427,7 @@@
dev_err(adev->dev, "failed to init kcq\n");
return r;
}
- r = amdgpu_mes_map_legacy_queue(adev, ring);
+ r = amdgpu_mes_map_legacy_queue(adev, ring, 0);
if (r) {
dev_err(adev->dev, "failed to remap kcq\n");
return r;
@@@ -5724,7 -5719,7 +5719,7 @@@ static int gfx_v12_0_get_cu_info(struc
if (!adev || !cu_info)
return -EINVAL;
- amdgpu_gfx_parse_disable_cu(disable_masks, 8, 2);
+ amdgpu_gfx_parse_disable_cu(adev, disable_masks, 8, 2);
mutex_lock(&adev->grbm_idx_mutex);
for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
* linux-next: manual merge of the drm tree with the origin tree
@ 2026-02-08 22:46 Mark Brown
From: Mark Brown @ 2026-02-08 22:46 UTC (permalink / raw)
To: Dave Airlie, DRI
Cc: Linux Kernel Mailing List, Linux Next Mailing List, Matthew Brost,
Michal Wajdeczko, Rodrigo Vivi, Satyanarayana K V P,
Shuicheng Lin, Thomas Hellström
Hi all,
Today's linux-next merge of the drm tree got a conflict in:
drivers/gpu/drm/xe/xe_migrate.c
between commit:
e022c16965b834 ("drm/xe: Fix kerneldoc for xe_migrate_exec_queue")
from the origin tree and commits:
fa18290bf0723b ("drm/xe/vf: Shadow buffer management for CCS read/write operations")
5d5ef695497950 ("drm/xe: Fix kerneldoc for xe_migrate_exec_queue")
from the drm tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
diff --cc drivers/gpu/drm/xe/xe_migrate.c
index 9d7329cef910af,078a9bc2821dd6..00000000000000
--- a/drivers/gpu/drm/xe/xe_migrate.c
+++ b/drivers/gpu/drm/xe/xe_migrate.c
* linux-next: manual merge of the drm tree with the origin tree
@ 2026-02-08 22:47 Mark Brown
From: Mark Brown @ 2026-02-08 22:47 UTC (permalink / raw)
To: Dave Airlie, DRI
Cc: Ashutosh Dixit, Daniele Ceraolo Spurio, Linux Kernel Mailing List,
Linux Next Mailing List, Matt Roper, Raag Jadav, Riana Tauro,
Rodrigo Vivi, Thomas Hellström
Hi all,
Today's linux-next merge of the drm tree got a conflict in:
drivers/gpu/drm/xe/xe_guc.c
between commit:
4cb1b327135ddd ("drm/xe/guc: Fix CFI violation in debugfs access.")
from the origin tree and commits:
43fb9e113bf11d ("drm/xe/gt: Introduce runtime suspend/resume")
6e035abf98b05f ("drm/xe/guc: Fix CFI violation in debugfs access.")
3947e482b5ebb9 ("drm/xe/guc: Use scope-based cleanup")
from the drm tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
diff --cc drivers/gpu/drm/xe/xe_guc.c
index edb939f2626851,6df7c3f260e5bd..00000000000000
--- a/drivers/gpu/drm/xe/xe_guc.c
+++ b/drivers/gpu/drm/xe/xe_guc.c
* linux-next: manual merge of the drm tree with the origin tree
@ 2026-03-23 15:01 Mark Brown
From: Mark Brown @ 2026-03-23 15:01 UTC (permalink / raw)
To: Dave Airlie, DRI
Cc: Alex Deucher, Lijo Lazar, Linux Kernel Mailing List,
Linux Next Mailing List
Hi all,
Today's linux-next merge of the drm tree got conflicts in:
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c
drivers/gpu/drm/amd/amdgpu/mmhub_v2_3.c
drivers/gpu/drm/amd/amdgpu/mmhub_v3_0.c
drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_1.c
drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_2.c
drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
drivers/gpu/drm/amd/amdgpu/mmhub_v4_2_0.c
between commits:
f39e1270277f4 ("drm/amdgpu/gmc9.0: add bounds checking for cid")
9c52f49545478 ("drm/amdgpu/mmhub4.2.0: add bounds checking for cid")
3cdd405831d8c ("drm/amdgpu/mmhub4.1.0: add bounds checking for cid")
cdb82ecbeccb5 ("drm/amdgpu/mmhub3.0: add bounds checking for cid")
e5e6d67b1ce97 ("drm/amdgpu/mmhub3.0.2: add bounds checking for cid")
5d4e88bcfef29 ("drm/amdgpu/mmhub3.0.1: add bounds checking for cid")
a54403a534972 ("drm/amdgpu/mmhub2.3: add bounds checking for cid")
0b26edac4ac55 ("drm/amdgpu/mmhub2.0: add bounds checking for cid")
from the origin tree and commits:
35362833df056 ("drm/amdgpu: Add client ids for mmhub v2.x")
642fb9e14c63a ("drm/amdgpu: Add client ids for mmhub v3.x")
f2eceeef689c8 ("drm/amdgpu: Add client ids for mmhub v4.x")
e14d468304832 ("drm/amdgpu/gmc9.0: add bounds checking for cid")
dea5f235baf37 ("drm/amdgpu/mmhub4.2.0: add bounds checking for cid")
04f063d85090f ("drm/amdgpu/mmhub4.1.0: add bounds checking for cid")
f14f27bbe2a3e ("drm/amdgpu/mmhub3.0: add bounds checking for cid")
1441f52c7f6ae ("drm/amdgpu/mmhub3.0.2: add bounds checking for cid")
5f76083183363 ("drm/amdgpu/mmhub3.0.1: add bounds checking for cid")
89cd90375c19f ("drm/amdgpu/mmhub2.3: add bounds checking for cid")
e064cef4b5355 ("drm/amdgpu/mmhub2.0: add bounds checking for cid")
from the drm tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
diff --combined drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 8eba99aa0f8fa,1ca0202cfdea8..0000000000000
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@@ -660,42 -660,7 +660,7 @@@ static int gmc_v9_0_process_interrupt(s
gfxhub_client_ids[cid],
cid);
} else {
- switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
- case IP_VERSION(9, 0, 0):
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vega10) ?
- mmhub_client_ids_vega10[cid][rw] : NULL;
- break;
- case IP_VERSION(9, 3, 0):
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vega12) ?
- mmhub_client_ids_vega12[cid][rw] : NULL;
- break;
- case IP_VERSION(9, 4, 0):
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vega20) ?
- mmhub_client_ids_vega20[cid][rw] : NULL;
- break;
- case IP_VERSION(9, 4, 1):
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_arcturus) ?
- mmhub_client_ids_arcturus[cid][rw] : NULL;
- break;
- case IP_VERSION(9, 1, 0):
- case IP_VERSION(9, 2, 0):
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_raven) ?
- mmhub_client_ids_raven[cid][rw] : NULL;
- break;
- case IP_VERSION(1, 5, 0):
- case IP_VERSION(2, 4, 0):
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_renoir) ?
- mmhub_client_ids_renoir[cid][rw] : NULL;
- break;
- case IP_VERSION(1, 8, 0):
- case IP_VERSION(9, 4, 2):
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_aldebaran) ?
- mmhub_client_ids_aldebaran[cid][rw] : NULL;
- break;
- default:
- mmhub_cid = NULL;
- break;
- }
+ mmhub_cid = amdgpu_mmhub_client_name(&adev->mmhub, cid, rw);
dev_err(adev->dev, "\t Faulty UTCL2 client ID: %s (0x%x)\n",
mmhub_cid ? mmhub_cid : "unknown", cid);
}
@@@ -1435,6 -1400,52 +1400,52 @@@ static void gmc_v9_0_set_umc_funcs(stru
}
}
+ static void gmc_v9_0_init_mmhub_client_info(struct amdgpu_device *adev)
+ {
+ switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
+ case IP_VERSION(9, 0, 0):
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_vega10,
+ ARRAY_SIZE(mmhub_client_ids_vega10));
+ break;
+ case IP_VERSION(9, 3, 0):
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_vega12,
+ ARRAY_SIZE(mmhub_client_ids_vega12));
+ break;
+ case IP_VERSION(9, 4, 0):
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_vega20,
+ ARRAY_SIZE(mmhub_client_ids_vega20));
+ break;
+ case IP_VERSION(9, 4, 1):
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_arcturus,
+ ARRAY_SIZE(mmhub_client_ids_arcturus));
+ break;
+ case IP_VERSION(9, 1, 0):
+ case IP_VERSION(9, 2, 0):
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_raven,
+ ARRAY_SIZE(mmhub_client_ids_raven));
+ break;
+ case IP_VERSION(1, 5, 0):
+ case IP_VERSION(2, 4, 0):
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_renoir,
+ ARRAY_SIZE(mmhub_client_ids_renoir));
+ break;
+ case IP_VERSION(1, 8, 0):
+ case IP_VERSION(9, 4, 2):
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_aldebaran,
+ ARRAY_SIZE(mmhub_client_ids_aldebaran));
+ break;
+ default:
+ break;
+ }
+ }
+
static void gmc_v9_0_set_mmhub_funcs(struct amdgpu_device *adev)
{
switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
@@@ -1452,6 -1463,8 +1463,8 @@@
adev->mmhub.funcs = &mmhub_v1_0_funcs;
break;
}
+
+ gmc_v9_0_init_mmhub_client_info(adev);
}
static void gmc_v9_0_set_mmhub_ras_funcs(struct amdgpu_device *adev)
diff --combined drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c
index 534cb4c544dc4,42a09a277ec3e..0000000000000
--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c
@@@ -141,7 -141,7 +141,7 @@@ mmhub_v2_0_print_l2_protection_fault_st
uint32_t status)
{
uint32_t cid, rw;
- const char *mmhub_cid = NULL;
+ const char *mmhub_cid;
cid = REG_GET_FIELD(status,
MMVM_L2_PROTECTION_FAULT_STATUS, CID);
@@@ -151,25 -151,7 +151,7 @@@
dev_err(adev->dev,
"MMVM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
status);
- switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
- case IP_VERSION(2, 0, 0):
- case IP_VERSION(2, 0, 2):
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_navi1x) ?
- mmhub_client_ids_navi1x[cid][rw] : NULL;
- break;
- case IP_VERSION(2, 1, 0):
- case IP_VERSION(2, 1, 1):
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_sienna_cichlid) ?
- mmhub_client_ids_sienna_cichlid[cid][rw] : NULL;
- break;
- case IP_VERSION(2, 1, 2):
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_beige_goby) ?
- mmhub_client_ids_beige_goby[cid][rw] : NULL;
- break;
- default:
- mmhub_cid = NULL;
- break;
- }
+ mmhub_cid = amdgpu_mmhub_client_name(&adev->mmhub, cid, rw);
dev_err(adev->dev, "\t Faulty UTCL2 client ID: %s (0x%x)\n",
mmhub_cid ? mmhub_cid : "unknown", cid);
dev_err(adev->dev, "\t MORE_FAULTS: 0x%lx\n",
@@@ -521,6 -503,31 +503,31 @@@ static const struct amdgpu_vmhub_funcs
.get_invalidate_req = mmhub_v2_0_get_invalidate_req,
};
+ static void mmhub_v2_0_init_client_info(struct amdgpu_device *adev)
+ {
+ switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
+ case IP_VERSION(2, 0, 0):
+ case IP_VERSION(2, 0, 2):
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_navi1x,
+ ARRAY_SIZE(mmhub_client_ids_navi1x));
+ break;
+ case IP_VERSION(2, 1, 0):
+ case IP_VERSION(2, 1, 1):
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_sienna_cichlid,
+ ARRAY_SIZE(mmhub_client_ids_sienna_cichlid));
+ break;
+ case IP_VERSION(2, 1, 2):
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_beige_goby,
+ ARRAY_SIZE(mmhub_client_ids_beige_goby));
+ break;
+ default:
+ break;
+ }
+ }
+
static void mmhub_v2_0_init(struct amdgpu_device *adev)
{
struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_MMHUB0(0)];
@@@ -561,6 -568,8 +568,8 @@@
MMVM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK;
hub->vmhub_funcs = &mmhub_v2_0_vmhub_funcs;
+
+ mmhub_v2_0_init_client_info(adev);
}
static void mmhub_v2_0_update_medium_grain_clock_gating(struct amdgpu_device *adev,
diff --combined drivers/gpu/drm/amd/amdgpu/mmhub_v2_3.c
index ceb2f6b46de52,31c479d76c421..0000000000000
--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v2_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v2_3.c
@@@ -80,7 -80,7 +80,7 @@@ mmhub_v2_3_print_l2_protection_fault_st
uint32_t status)
{
uint32_t cid, rw;
- const char *mmhub_cid = NULL;
+ const char *mmhub_cid;
cid = REG_GET_FIELD(status,
MMVM_L2_PROTECTION_FAULT_STATUS, CID);
@@@ -90,17 -90,7 +90,7 @@@
dev_err(adev->dev,
"MMVM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
status);
- switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
- case IP_VERSION(2, 3, 0):
- case IP_VERSION(2, 4, 0):
- case IP_VERSION(2, 4, 1):
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vangogh) ?
- mmhub_client_ids_vangogh[cid][rw] : NULL;
- break;
- default:
- mmhub_cid = NULL;
- break;
- }
+ mmhub_cid = amdgpu_mmhub_client_name(&adev->mmhub, cid, rw);
dev_err(adev->dev, "\t Faulty UTCL2 client ID: %s (0x%x)\n",
mmhub_cid ? mmhub_cid : "unknown", cid);
dev_err(adev->dev, "\t MORE_FAULTS: 0x%lx\n",
@@@ -487,6 -477,10 +477,10 @@@ static void mmhub_v2_3_init(struct amdg
MMVM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK;
hub->vmhub_funcs = &mmhub_v2_3_vmhub_funcs;
+
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_vangogh,
+ ARRAY_SIZE(mmhub_client_ids_vangogh));
}
static void
diff --combined drivers/gpu/drm/amd/amdgpu/mmhub_v3_0.c
index ab966e69a342a,3d82cfa0f1b51..0000000000000
--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0.c
@@@ -97,7 -97,7 +97,7 @@@ mmhub_v3_0_print_l2_protection_fault_st
uint32_t status)
{
uint32_t cid, rw;
- const char *mmhub_cid = NULL;
+ const char *mmhub_cid;
cid = REG_GET_FIELD(status,
MMVM_L2_PROTECTION_FAULT_STATUS, CID);
@@@ -107,16 -107,7 +107,7 @@@
dev_err(adev->dev,
"MMVM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
status);
- switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
- case IP_VERSION(3, 0, 0):
- case IP_VERSION(3, 0, 1):
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_0_0) ?
- mmhub_client_ids_v3_0_0[cid][rw] : NULL;
- break;
- default:
- mmhub_cid = NULL;
- break;
- }
+ mmhub_cid = amdgpu_mmhub_client_name(&adev->mmhub, cid, rw);
dev_err(adev->dev, "\t Faulty UTCL2 client ID: %s (0x%x)\n",
mmhub_cid ? mmhub_cid : "unknown", cid);
dev_err(adev->dev, "\t MORE_FAULTS: 0x%lx\n",
@@@ -521,6 -512,10 +512,10 @@@ static void mmhub_v3_0_init(struct amdg
SOC15_REG_OFFSET(MMHUB, 0, regMMVM_CONTEXTS_DISABLE);
hub->vmhub_funcs = &mmhub_v3_0_vmhub_funcs;
+
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_v3_0_0,
+ ARRAY_SIZE(mmhub_client_ids_v3_0_0));
}
static u64 mmhub_v3_0_get_fb_location(struct amdgpu_device *adev)
diff --combined drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_1.c
index 14a742d3a99d7,a1b0b7b39a42a..0000000000000
--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_1.c
@@@ -104,7 -104,7 +104,7 @@@ mmhub_v3_0_1_print_l2_protection_fault_
uint32_t status)
{
uint32_t cid, rw;
- const char *mmhub_cid = NULL;
+ const char *mmhub_cid;
cid = REG_GET_FIELD(status,
MMVM_L2_PROTECTION_FAULT_STATUS, CID);
@@@ -114,17 -114,7 +114,7 @@@
dev_err(adev->dev,
"MMVM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
status);
-
- switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
- case IP_VERSION(3, 0, 1):
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_0_1) ?
- mmhub_client_ids_v3_0_1[cid][rw] : NULL;
- break;
- default:
- mmhub_cid = NULL;
- break;
- }
-
+ mmhub_cid = amdgpu_mmhub_client_name(&adev->mmhub, cid, rw);
dev_err(adev->dev, "\t Faulty UTCL2 client ID: %s (0x%x)\n",
mmhub_cid ? mmhub_cid : "unknown", cid);
dev_err(adev->dev, "\t MORE_FAULTS: 0x%lx\n",
@@@ -504,6 -494,10 +494,10 @@@ static void mmhub_v3_0_1_init(struct am
MMVM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK;
hub->vmhub_funcs = &mmhub_v3_0_1_vmhub_funcs;
+
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_v3_0_1,
+ ARRAY_SIZE(mmhub_client_ids_v3_0_1));
}
static u64 mmhub_v3_0_1_get_fb_location(struct amdgpu_device *adev)
diff --combined drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_2.c
index e1f07f2a18527,34e8dbd47c0f8..0000000000000
--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_2.c
@@@ -97,7 -97,7 +97,7 @@@ mmhub_v3_0_2_print_l2_protection_fault_
uint32_t status)
{
uint32_t cid, rw;
- const char *mmhub_cid = NULL;
+ const char *mmhub_cid;
cid = REG_GET_FIELD(status,
MMVM_L2_PROTECTION_FAULT_STATUS, CID);
@@@ -107,9 -107,7 +107,7 @@@
dev_err(adev->dev,
"MMVM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
status);
-
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_0_2) ?
- mmhub_client_ids_v3_0_2[cid][rw] : NULL;
+ mmhub_cid = amdgpu_mmhub_client_name(&adev->mmhub, cid, rw);
dev_err(adev->dev, "\t Faulty UTCL2 client ID: %s (0x%x)\n",
mmhub_cid ? mmhub_cid : "unknown", cid);
dev_err(adev->dev, "\t MORE_FAULTS: 0x%lx\n",
@@@ -510,6 -508,10 +508,10 @@@ static void mmhub_v3_0_2_init(struct am
SOC15_REG_OFFSET(MMHUB, 0, regMMVM_L2_BANK_SELECT_RESERVED_CID2);
hub->vmhub_funcs = &mmhub_v3_0_2_vmhub_funcs;
+
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_v3_0_2,
+ ARRAY_SIZE(mmhub_client_ids_v3_0_2));
}
static u64 mmhub_v3_0_2_get_fb_location(struct amdgpu_device *adev)
diff --combined drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
index 88bfe321f83aa,bef75c4c48d3e..0000000000000
--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
@@@ -90,7 -90,7 +90,7 @@@ mmhub_v4_1_0_print_l2_protection_fault_
uint32_t status)
{
uint32_t cid, rw;
- const char *mmhub_cid = NULL;
+ const char *mmhub_cid;
cid = REG_GET_FIELD(status,
MMVM_L2_PROTECTION_FAULT_STATUS_LO32, CID);
@@@ -100,15 -100,7 +100,7 @@@
dev_err(adev->dev,
"MMVM_L2_PROTECTION_FAULT_STATUS_LO32:0x%08X\n",
status);
- switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
- case IP_VERSION(4, 1, 0):
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v4_1_0) ?
- mmhub_client_ids_v4_1_0[cid][rw] : NULL;
- break;
- default:
- mmhub_cid = NULL;
- break;
- }
+ mmhub_cid = amdgpu_mmhub_client_name(&adev->mmhub, cid, rw);
dev_err(adev->dev, "\t Faulty UTCL2 client ID: %s (0x%x)\n",
mmhub_cid ? mmhub_cid : "unknown", cid);
dev_err(adev->dev, "\t MORE_FAULTS: 0x%lx\n",
@@@ -515,6 -507,10 +507,10 @@@ static void mmhub_v4_1_0_init(struct am
SOC15_REG_OFFSET(MMHUB, 0, regMMVM_CONTEXTS_DISABLE);
hub->vmhub_funcs = &mmhub_v4_1_0_vmhub_funcs;
+
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_v4_1_0,
+ ARRAY_SIZE(mmhub_client_ids_v4_1_0));
}
static u64 mmhub_v4_1_0_get_fb_location(struct amdgpu_device *adev)
diff --combined drivers/gpu/drm/amd/amdgpu/mmhub_v4_2_0.c
index 2532ca80f7356,29f7ed4668587..0000000000000
--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v4_2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v4_2_0.c
@@@ -72,6 -72,45 +72,45 @@@ static const char *mmhub_client_ids_v4_
[23][1] = "VCN1",
};
+ static int mmhub_v4_2_0_get_xgmi_info(struct amdgpu_device *adev)
+ {
+ u32 max_num_physical_nodes;
+ u32 max_physical_node_id;
+ u32 xgmi_lfb_cntl;
+ u32 max_region;
+ u64 seg_size;
+
+ /* limit this callback to A + A configuration only */
+ if (!adev->gmc.xgmi.connected_to_cpu)
+ return 0;
+
+ xgmi_lfb_cntl = RREG32_SOC15(MMHUB, GET_INST(MMHUB, 0),
+ regMMMC_VM_XGMI_LFB_CNTL);
+ seg_size = REG_GET_FIELD(
+ RREG32_SOC15(MMHUB, GET_INST(MMHUB, 0), regMMMC_VM_XGMI_LFB_SIZE),
+ MMMC_VM_XGMI_LFB_SIZE, PF_LFB_SIZE) << 24;
+ max_region =
+ REG_GET_FIELD(xgmi_lfb_cntl, MMMC_VM_XGMI_LFB_CNTL, PF_MAX_REGION);
+
+ max_num_physical_nodes = 4;
+ max_physical_node_id = 3;
+
+ adev->gmc.xgmi.num_physical_nodes = max_region + 1;
+
+ if (adev->gmc.xgmi.num_physical_nodes > max_num_physical_nodes)
+ return -EINVAL;
+
+ adev->gmc.xgmi.physical_node_id =
+ REG_GET_FIELD(xgmi_lfb_cntl, MMMC_VM_XGMI_LFB_CNTL, PF_LFB_REGION);
+
+ if (adev->gmc.xgmi.physical_node_id > max_physical_node_id)
+ return -EINVAL;
+
+ adev->gmc.xgmi.node_segment_size = seg_size;
+
+ return 0;
+ }
+
static u64 mmhub_v4_2_0_get_fb_location(struct amdgpu_device *adev)
{
u64 base;
@@@ -131,7 -170,7 +170,7 @@@ static void mmhub_v4_2_0_setup_vm_pt_re
static void mmhub_v4_2_0_mid_init_gart_aperture_regs(struct amdgpu_device *adev,
uint32_t mid_mask)
{
- uint64_t pt_base = amdgpu_gmc_pd_addr(adev->gart.bo);
+ uint64_t pt_base;
int i;
if (adev->gmc.pdb0_bo)
@@@ -152,10 -191,10 +191,10 @@@
WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
regMMVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32,
- (u32)(adev->gmc.fb_end >> 12));
+ (u32)(adev->gmc.gart_end >> 12));
WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
regMMVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32,
- (u32)(adev->gmc.fb_end >> 44));
+ (u32)(adev->gmc.gart_end >> 44));
} else {
WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
regMMVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32,
@@@ -190,41 -229,74 +229,74 @@@ static void mmhub_v4_2_0_mid_init_syste
return;
for_each_inst(i, mid_mask) {
- /* Program the AGP BAR */
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_AGP_BASE_LO32, 0);
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_AGP_BASE_HI32, 0);
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_AGP_BOT_LO32,
- lower_32_bits(adev->gmc.agp_start >> 24));
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_AGP_BOT_HI32,
- upper_32_bits(adev->gmc.agp_start >> 24));
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_AGP_TOP_LO32,
- lower_32_bits(adev->gmc.agp_end >> 24));
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_AGP_TOP_HI32,
- upper_32_bits(adev->gmc.agp_end >> 24));
+ if (adev->gmc.pdb0_bo) {
+ /* Disable agp and system aperture
+ * when vmid0 page table is enabled */
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_FB_LOCATION_TOP_LO32, 0);
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_FB_LOCATION_TOP_HI32, 0);
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_FB_LOCATION_BASE_LO32,
+ 0xFFFFFFFF);
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_FB_LOCATION_BASE_HI32, 1);
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_AGP_TOP_LO32, 0);
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_AGP_TOP_HI32, 0);
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_AGP_BOT_LO32,
+ 0xFFFFFFFF);
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_AGP_BOT_HI32, 1);
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_SYSTEM_APERTURE_LOW_ADDR_LO32,
+ 0xFFFFFFFF);
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_SYSTEM_APERTURE_LOW_ADDR_HI32,
+ 0x7F);
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_SYSTEM_APERTURE_HIGH_ADDR_LO32, 0);
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_SYSTEM_APERTURE_HIGH_ADDR_HI32, 0);
+ } else {
+ /* Program the AGP BAR */
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_AGP_BASE_LO32, 0);
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_AGP_BASE_HI32, 0);
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_AGP_BOT_LO32,
+ lower_32_bits(adev->gmc.agp_start >> 24));
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_AGP_BOT_HI32,
+ upper_32_bits(adev->gmc.agp_start >> 24));
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_AGP_TOP_LO32,
+ lower_32_bits(adev->gmc.agp_end >> 24));
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_AGP_TOP_HI32,
+ upper_32_bits(adev->gmc.agp_end >> 24));
- /* Program the system aperture low logical page number. */
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_SYSTEM_APERTURE_LOW_ADDR_LO32,
- lower_32_bits(min(adev->gmc.fb_start,
- adev->gmc.agp_start) >> 18));
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_SYSTEM_APERTURE_LOW_ADDR_HI32,
- upper_32_bits(min(adev->gmc.fb_start,
- adev->gmc.agp_start) >> 18));
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_SYSTEM_APERTURE_HIGH_ADDR_LO32,
- lower_32_bits(max(adev->gmc.fb_end,
- adev->gmc.agp_end) >> 18));
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_SYSTEM_APERTURE_HIGH_ADDR_HI32,
- upper_32_bits(max(adev->gmc.fb_end,
- adev->gmc.agp_end) >> 18));
+ /* Program the system aperture low logical page number. */
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_SYSTEM_APERTURE_LOW_ADDR_LO32,
+ lower_32_bits(min(adev->gmc.fb_start,
+ adev->gmc.agp_start) >> 18));
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_SYSTEM_APERTURE_LOW_ADDR_HI32,
+ upper_32_bits(min(adev->gmc.fb_start,
+ adev->gmc.agp_start) >> 18));
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_SYSTEM_APERTURE_HIGH_ADDR_LO32,
+ lower_32_bits(max(adev->gmc.fb_end,
+ adev->gmc.agp_end) >> 18));
+ WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
+ regMMMC_VM_SYSTEM_APERTURE_HIGH_ADDR_HI32,
+ upper_32_bits(max(adev->gmc.fb_end,
+ adev->gmc.agp_end) >> 18));
+ }
/* Set default page address. */
value = amdgpu_gmc_vram_mc2pa(adev, adev->mem_scratch.gpu_addr);
@@@ -252,38 -324,6 +324,6 @@@
WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
regMMVM_L2_PROTECTION_FAULT_CNTL2, tmp);
}
-
- /* In the case squeezing vram into GART aperture, we don't use
- * FB aperture and AGP aperture. Disable them.
- */
- if (adev->gmc.pdb0_bo) {
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_FB_LOCATION_TOP_LO32, 0);
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_FB_LOCATION_TOP_HI32, 0);
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_FB_LOCATION_BASE_LO32, 0xFFFFFFFF);
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_FB_LOCATION_BASE_HI32, 1);
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_AGP_TOP_LO32, 0);
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_AGP_TOP_HI32, 0);
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_AGP_BOT_LO32, 0xFFFFFFFF);
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_AGP_BOT_HI32, 1);
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_SYSTEM_APERTURE_LOW_ADDR_LO32,
- 0xFFFFFFFF);
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_SYSTEM_APERTURE_LOW_ADDR_HI32,
- 0x7F);
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_SYSTEM_APERTURE_HIGH_ADDR_LO32, 0);
- WREG32_SOC15(MMHUB, GET_INST(MMHUB, i),
- regMMMC_VM_SYSTEM_APERTURE_HIGH_ADDR_HI32, 0);
- }
}
static void mmhub_v4_2_0_mid_init_tlb_regs(struct amdgpu_device *adev,
@@@ -676,7 -716,7 +716,7 @@@ mmhub_v4_2_0_print_l2_protection_fault_
uint32_t status)
{
uint32_t cid, rw;
- const char *mmhub_cid = NULL;
+ const char *mmhub_cid;
cid = REG_GET_FIELD(status,
MMVM_L2_PROTECTION_FAULT_STATUS_LO32, CID);
@@@ -686,15 -726,7 +726,7 @@@
dev_err(adev->dev,
"MMVM_L2_PROTECTION_FAULT_STATUS_LO32:0x%08X\n",
status);
- switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
- case IP_VERSION(4, 2, 0):
- mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v4_2_0) ?
- mmhub_client_ids_v4_2_0[cid][rw] : NULL;
- break;
- default:
- mmhub_cid = NULL;
- break;
- }
+ mmhub_cid = amdgpu_mmhub_client_name(&adev->mmhub, cid, rw);
dev_err(adev->dev, "\t Faulty UTCL2 client ID: %s (0x%x)\n",
mmhub_cid ? mmhub_cid : "unknown", cid);
dev_err(adev->dev, "\t MORE_FAULTS: 0x%lx\n",
@@@ -785,6 -817,10 +817,10 @@@ static void mmhub_v4_2_0_init(struct am
mid_mask = adev->aid_mask;
mmhub_v4_2_0_mid_init(adev, mid_mask);
+
+ amdgpu_mmhub_init_client_info(&adev->mmhub,
+ mmhub_client_ids_v4_2_0,
+ ARRAY_SIZE(mmhub_client_ids_v4_2_0));
}
static void
@@@ -884,6 -920,7 +920,7 @@@ const struct amdgpu_mmhub_funcs mmhub_v
.set_fault_enable_default = mmhub_v4_2_0_set_fault_enable_default,
.set_clockgating = mmhub_v4_2_0_set_clockgating,
.get_clockgating = mmhub_v4_2_0_get_clockgating,
+ .get_xgmi_info = mmhub_v4_2_0_get_xgmi_info,
};
static int mmhub_v4_2_0_xcp_resume(void *handle, uint32_t inst_mask)
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]
^ permalink raw reply [flat|nested] 16+ messages in thread
* linux-next: manual merge of the drm tree with the origin tree
@ 2026-03-23 15:49 Mark Brown
0 siblings, 0 replies; 16+ messages in thread
From: Mark Brown @ 2026-03-23 15:49 UTC (permalink / raw)
To: Dave Airlie, DRI
Cc: Linux Kernel Mailing List, Linux Next Mailing List,
Maarten Lankhorst, Matthew Brost, Thomas Hellström
[-- Attachment #1: Type: text/plain, Size: 2798 bytes --]
Hi all,
Today's linux-next merge of the drm tree got a conflict in:
drivers/gpu/drm/xe/xe_ggtt_types.h
between commit:
01f2557aa684e ("drm/xe: Open-code GGTT MMIO access protection")
from the origin tree and commit:
95f5f9a96dcfb ("drm/xe: Move struct xe_ggtt to xe_ggtt.c")
from the drm tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
diff --cc drivers/gpu/drm/xe/xe_ggtt_types.h
index c002857bb7611,cf754e4d502ad..0000000000000
--- a/drivers/gpu/drm/xe/xe_ggtt_types.h
+++ b/drivers/gpu/drm/xe/xe_ggtt_types.h
@@@ -6,57 -6,12 +6,13 @@@
#ifndef _XE_GGTT_TYPES_H_
#define _XE_GGTT_TYPES_H_
+ #include <linux/types.h>
#include <drm/drm_mm.h>
- #include "xe_pt_types.h"
-
- struct xe_bo;
+ struct xe_ggtt;
struct xe_ggtt_node;
+struct xe_gt;
- /**
- * struct xe_ggtt - Main GGTT struct
- *
- * In general, each tile can contains its own Global Graphics Translation Table
- * (GGTT) instance.
- */
- struct xe_ggtt {
- /** @tile: Back pointer to tile where this GGTT belongs */
- struct xe_tile *tile;
- /** @start: Start offset of GGTT */
- u64 start;
- /** @size: Total usable size of this GGTT */
- u64 size;
-
- #define XE_GGTT_FLAGS_64K BIT(0)
- #define XE_GGTT_FLAGS_ONLINE BIT(1)
- /**
- * @flags: Flags for this GGTT
- * Acceptable flags:
- * - %XE_GGTT_FLAGS_64K - if PTE size is 64K. Otherwise, regular is 4K.
- * - %XE_GGTT_FLAGS_ONLINE - is GGTT online, protected by ggtt->lock
- * after init
- */
- unsigned int flags;
- /** @scratch: Internal object allocation used as a scratch page */
- struct xe_bo *scratch;
- /** @lock: Mutex lock to protect GGTT data */
- struct mutex lock;
- /**
- * @gsm: The iomem pointer to the actual location of the translation
- * table located in the GSM for easy PTE manipulation
- */
- u64 __iomem *gsm;
- /** @pt_ops: Page Table operations per platform */
- const struct xe_ggtt_pt_ops *pt_ops;
- /** @mm: The memory manager used to manage individual GGTT allocations */
- struct drm_mm mm;
- /** @access_count: counts GGTT writes */
- unsigned int access_count;
- /** @wq: Dedicated unordered work queue to process node removals */
- struct workqueue_struct *wq;
- };
-
typedef void (*xe_ggtt_set_pte_fn)(struct xe_ggtt *ggtt, u64 addr, u64 pte);
typedef void (*xe_ggtt_transform_cb)(struct xe_ggtt *ggtt,
struct xe_ggtt_node *node,
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]
^ permalink raw reply [flat|nested] 16+ messages in thread
* linux-next: manual merge of the drm tree with the origin tree
@ 2026-03-23 15:55 Mark Brown
0 siblings, 0 replies; 16+ messages in thread
From: Mark Brown @ 2026-03-23 15:55 UTC (permalink / raw)
To: Dave Airlie, DRI
Cc: Linux Kernel Mailing List, Linux Next Mailing List,
Maarten Lankhorst, Matthew Brost, Thomas Hellström
[-- Attachment #1: Type: text/plain, Size: 2304 bytes --]
Hi all,
Today's linux-next merge of the drm tree got a conflict in:
drivers/gpu/drm/xe/xe_ggtt.c
between commit:
01f2557aa684e ("drm/xe: Open-code GGTT MMIO access protection")
from the origin tree and commit:
e904c56ba6e0d ("drm/xe: Rewrite GGTT VF initialization")
from the drm tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
diff --cc drivers/gpu/drm/xe/xe_ggtt.c
index d1561ebe4e56c,0f2e3af499120..0000000000000
--- a/drivers/gpu/drm/xe/xe_ggtt.c
+++ b/drivers/gpu/drm/xe/xe_ggtt.c
@@@ -379,18 -437,7 +439,8 @@@ int xe_ggtt_init_early(struct xe_ggtt *
if (err)
return err;
+ ggtt->flags |= XE_GGTT_FLAGS_ONLINE;
- err = devm_add_action_or_reset(xe->drm.dev, dev_fini_ggtt, ggtt);
- if (err)
- return err;
-
- if (IS_SRIOV_VF(xe)) {
- err = xe_tile_sriov_vf_prepare_ggtt(ggtt->tile);
- if (err)
- return err;
- }
-
- return 0;
+ return devm_add_action_or_reset(xe->drm.dev, dev_fini_ggtt, ggtt);
}
ALLOW_ERROR_INJECTION(xe_ggtt_init_early, ERRNO); /* See xe_pci_probe() */
@@@ -413,12 -465,15 +468,12 @@@ static void ggtt_node_fini(struct xe_gg
static void ggtt_node_remove(struct xe_ggtt_node *node)
{
struct xe_ggtt *ggtt = node->ggtt;
- struct xe_device *xe = tile_to_xe(ggtt->tile);
bool bound;
- int idx;
-
- bound = drm_dev_enter(&xe->drm, &idx);
mutex_lock(&ggtt->lock);
+ bound = ggtt->flags & XE_GGTT_FLAGS_ONLINE;
if (bound)
- xe_ggtt_clear(ggtt, node->base.start, node->base.size);
+ xe_ggtt_clear(ggtt, xe_ggtt_node_addr(node), xe_ggtt_node_size(node));
drm_mm_remove_node(&node->base);
node->base.size = 0;
mutex_unlock(&ggtt->lock);
@@@ -429,8 -484,10 +484,8 @@@
if (node->invalidate_on_remove)
xe_ggtt_invalidate(ggtt);
- drm_dev_exit(idx);
-
free_node:
- xe_ggtt_node_fini(node);
+ ggtt_node_fini(node);
}
static void ggtt_node_remove_work_func(struct work_struct *work)
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]
^ permalink raw reply [flat|nested] 16+ messages in thread
* linux-next: manual merge of the drm tree with the origin tree
@ 2026-03-23 15:55 Mark Brown
0 siblings, 0 replies; 16+ messages in thread
From: Mark Brown @ 2026-03-23 15:55 UTC (permalink / raw)
To: Dave Airlie, DRI
Cc: Linux Kernel Mailing List, Linux Next Mailing List, Matthew Auld,
Michal Wajdeczko, Nareshkumar Gollakoti, Sanjay Yadav,
Thomas Hellström
[-- Attachment #1: Type: text/plain, Size: 2304 bytes --]
Hi all,
Today's linux-next merge of the drm tree got a conflict in:
drivers/gpu/drm/xe/xe_gt_ccs_mode.c
between commit:
65d046b2d8e0d ("drm/xe: Fix missing runtime PM reference in ccs_mode_store")
from the origin tree and commit:
9b5e995e61290 ("drm/xe: Mutual exclusivity between CCS-mode and PF")
from the drm tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
diff --cc drivers/gpu/drm/xe/xe_gt_ccs_mode.c
index 03c1862ba497a,b35be36b0eaa2..0000000000000
--- a/drivers/gpu/drm/xe/xe_gt_ccs_mode.c
+++ b/drivers/gpu/drm/xe/xe_gt_ccs_mode.c
@@@ -12,8 -12,8 +12,9 @@@
#include "xe_gt_printk.h"
#include "xe_gt_sysfs.h"
#include "xe_mmio.h"
+#include "xe_pm.h"
#include "xe_sriov.h"
+ #include "xe_sriov_pf.h"
static void __xe_gt_apply_ccs_mode(struct xe_gt *gt, u32 num_engines)
{
@@@ -147,15 -145,29 +146,30 @@@ ccs_mode_store(struct device *kdev, str
return -EBUSY;
}
- if (gt->ccs_mode != num_engines) {
- xe_gt_info(gt, "Setting compute mode to %d\n", num_engines);
- gt->ccs_mode = num_engines;
- xe_gt_record_user_engines(gt);
- guard(xe_pm_runtime)(xe);
- xe_gt_reset(gt);
+ if (gt->ccs_mode == num_engines)
+ return count;
+
+ /*
+ * Changing default CCS mode is only allowed when there
+ * are no VFs. Try to lockdown PF to find out.
+ */
+ if (gt_ccs_mode_default(gt) && IS_SRIOV_PF(xe)) {
+ ret = xe_sriov_pf_lockdown(xe);
+ if (ret) {
+ xe_gt_dbg(gt, "Can't change CCS Mode: VFs are enabled\n");
+ return ret;
+ }
}
- mutex_unlock(&xe->drm.filelist_mutex);
+ xe_gt_info(gt, "Setting compute mode to %d\n", num_engines);
+ gt->ccs_mode = num_engines;
+ xe_gt_record_user_engines(gt);
++ guard(xe_pm_runtime)(xe);
+ xe_gt_reset(gt);
+
+ /* We may end PF lockdown once CCS mode is default again */
+ if (gt_ccs_mode_default(gt) && IS_SRIOV_PF(xe))
+ xe_sriov_pf_end_lockdown(xe);
return count;
}
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]
^ permalink raw reply [flat|nested] 16+ messages in thread
* linux-next: manual merge of the drm tree with the origin tree
@ 2026-03-27 20:56 Mark Brown
0 siblings, 0 replies; 16+ messages in thread
From: Mark Brown @ 2026-03-27 20:56 UTC (permalink / raw)
To: Dave Airlie, DRI
Cc: Linux Kernel Mailing List, Linux Next Mailing List,
Maarten Lankhorst, Matthew Brost, Thomas Hellström
[-- Attachment #1: Type: text/plain, Size: 21550 bytes --]
Hi all,
Today's linux-next merge of the drm tree got a conflict in:
drivers/gpu/drm/xe/xe_ggtt.c
between commit:
01f2557aa684e5 ("drm/xe: Open-code GGTT MMIO access protection")
from the origin tree and commits:
4f3a998a173b43 ("drm/xe: Open-code GGTT MMIO access protection")
e904c56ba6e0d4 ("drm/xe: Rewrite GGTT VF initialization")
from the drm tree.
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
diff --combined drivers/gpu/drm/xe/xe_ggtt.c
index d1561ebe4e56ca,21071b64b09dfa..00000000000000
--- a/drivers/gpu/drm/xe/xe_ggtt.c
+++ b/drivers/gpu/drm/xe/xe_ggtt.c
@@@ -66,12 -66,14 +66,14 @@@
* give us the correct placement for free.
*/
+ #define XE_GGTT_FLAGS_64K BIT(0)
+ #define XE_GGTT_FLAGS_ONLINE BIT(1)
+
/**
* struct xe_ggtt_node - A node in GGTT.
*
- * This struct needs to be initialized (only-once) with xe_ggtt_node_init() before any node
- * insertion, reservation, or 'ballooning'.
- * It will, then, be finalized by either xe_ggtt_node_remove() or xe_ggtt_node_deballoon().
+ * This struct is allocated with xe_ggtt_insert_node(,_transform) or xe_ggtt_insert_bo(,_at).
+ * It will be deallocated using xe_ggtt_node_remove().
*/
struct xe_ggtt_node {
/** @ggtt: Back pointer to xe_ggtt where this region will be inserted at */
@@@ -84,6 -86,63 +86,63 @@@
bool invalidate_on_remove;
};
+ /**
+ * struct xe_ggtt_pt_ops - GGTT Page table operations
+ * Which can vary from platform to platform.
+ */
+ struct xe_ggtt_pt_ops {
+ /** @pte_encode_flags: Encode PTE flags for a given BO */
+ u64 (*pte_encode_flags)(struct xe_bo *bo, u16 pat_index);
+
+ /** @ggtt_set_pte: Directly write into GGTT's PTE */
+ xe_ggtt_set_pte_fn ggtt_set_pte;
+
+ /** @ggtt_get_pte: Directly read from GGTT's PTE */
+ u64 (*ggtt_get_pte)(struct xe_ggtt *ggtt, u64 addr);
+ };
+
+ /**
+ * struct xe_ggtt - Main GGTT struct
+ *
+ * In general, each tile can contains its own Global Graphics Translation Table
+ * (GGTT) instance.
+ */
+ struct xe_ggtt {
+ /** @tile: Back pointer to tile where this GGTT belongs */
+ struct xe_tile *tile;
+ /** @start: Start offset of GGTT */
+ u64 start;
+ /** @size: Total usable size of this GGTT */
+ u64 size;
+
+ #define XE_GGTT_FLAGS_64K BIT(0)
+ /**
+ * @flags: Flags for this GGTT
+ * Acceptable flags:
+ * - %XE_GGTT_FLAGS_64K - if PTE size is 64K. Otherwise, regular is 4K.
+ * - %XE_GGTT_FLAGS_ONLINE - is GGTT online, protected by ggtt->lock
+ * after init
+ */
+ unsigned int flags;
+ /** @scratch: Internal object allocation used as a scratch page */
+ struct xe_bo *scratch;
+ /** @lock: Mutex lock to protect GGTT data */
+ struct mutex lock;
+ /**
+ * @gsm: The iomem pointer to the actual location of the translation
+ * table located in the GSM for easy PTE manipulation
+ */
+ u64 __iomem *gsm;
+ /** @pt_ops: Page Table operations per platform */
+ const struct xe_ggtt_pt_ops *pt_ops;
+ /** @mm: The memory manager used to manage individual GGTT allocations */
+ struct drm_mm mm;
+ /** @access_count: counts GGTT writes */
+ unsigned int access_count;
+ /** @wq: Dedicated unordered work queue to process node removals */
+ struct workqueue_struct *wq;
+ };
+
static u64 xelp_ggtt_pte_flags(struct xe_bo *bo, u16 pat_index)
{
u64 pte = XE_PAGE_PRESENT;
@@@ -193,7 -252,7 +252,7 @@@ static void xe_ggtt_set_pte_and_flush(s
static u64 xe_ggtt_get_pte(struct xe_ggtt *ggtt, u64 addr)
{
xe_tile_assert(ggtt->tile, !(addr & XE_PTE_MASK));
- xe_tile_assert(ggtt->tile, addr < ggtt->size);
+ xe_tile_assert(ggtt->tile, addr < ggtt->start + ggtt->size);
return readq(&ggtt->gsm[addr >> XE_PTE_SHIFT]);
}
@@@ -299,7 -358,7 +358,7 @@@ static void __xe_ggtt_init_early(struc
{
ggtt->start = start;
ggtt->size = size;
- drm_mm_init(&ggtt->mm, start, size);
+ drm_mm_init(&ggtt->mm, 0, size);
}
int xe_ggtt_init_kunit(struct xe_ggtt *ggtt, u32 start, u32 size)
@@@ -349,9 -408,15 +408,15 @@@ int xe_ggtt_init_early(struct xe_ggtt *
ggtt_start = wopcm;
ggtt_size = (gsm_size / 8) * (u64)XE_PAGE_SIZE - ggtt_start;
} else {
- /* GGTT is expected to be 4GiB */
- ggtt_start = wopcm;
- ggtt_size = SZ_4G - ggtt_start;
+ ggtt_start = xe_tile_sriov_vf_ggtt_base(ggtt->tile);
+ ggtt_size = xe_tile_sriov_vf_ggtt(ggtt->tile);
+
+ if (ggtt_start < wopcm ||
+ ggtt_start + ggtt_size > GUC_GGTT_TOP) {
+ xe_tile_err(ggtt->tile, "Invalid GGTT configuration: %#llx-%#llx\n",
+ ggtt_start, ggtt_start + ggtt_size - 1);
+ return -ERANGE;
+ }
}
ggtt->gsm = ggtt->tile->mmio.regs + SZ_8M;
@@@ -369,7 -434,7 +434,7 @@@
else
ggtt->pt_ops = &xelp_pt_ops;
- ggtt->wq = alloc_workqueue("xe-ggtt-wq", WQ_MEM_RECLAIM, 0);
+ ggtt->wq = alloc_workqueue("xe-ggtt-wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
if (!ggtt->wq)
return -ENOMEM;
@@@ -380,17 -445,7 +445,7 @@@
return err;
ggtt->flags |= XE_GGTT_FLAGS_ONLINE;
- err = devm_add_action_or_reset(xe->drm.dev, dev_fini_ggtt, ggtt);
- if (err)
- return err;
-
- if (IS_SRIOV_VF(xe)) {
- err = xe_tile_sriov_vf_prepare_ggtt(ggtt->tile);
- if (err)
- return err;
- }
-
- return 0;
+ return devm_add_action_or_reset(xe->drm.dev, dev_fini_ggtt, ggtt);
}
ALLOW_ERROR_INJECTION(xe_ggtt_init_early, ERRNO); /* See xe_pci_probe() */
@@@ -404,12 -459,17 +459,17 @@@ static void xe_ggtt_initial_clear(struc
/* Display may have allocated inside ggtt, so be careful with clearing here */
mutex_lock(&ggtt->lock);
drm_mm_for_each_hole(hole, &ggtt->mm, start, end)
- xe_ggtt_clear(ggtt, start, end - start);
+ xe_ggtt_clear(ggtt, ggtt->start + start, end - start);
xe_ggtt_invalidate(ggtt);
mutex_unlock(&ggtt->lock);
}
+ static void ggtt_node_fini(struct xe_ggtt_node *node)
+ {
+ kfree(node);
+ }
+
static void ggtt_node_remove(struct xe_ggtt_node *node)
{
struct xe_ggtt *ggtt = node->ggtt;
@@@ -418,7 -478,7 +478,7 @@@
mutex_lock(&ggtt->lock);
bound = ggtt->flags & XE_GGTT_FLAGS_ONLINE;
if (bound)
- xe_ggtt_clear(ggtt, node->base.start, node->base.size);
+ xe_ggtt_clear(ggtt, xe_ggtt_node_addr(node), xe_ggtt_node_size(node));
drm_mm_remove_node(&node->base);
node->base.size = 0;
mutex_unlock(&ggtt->lock);
@@@ -430,7 -490,7 +490,7 @@@
xe_ggtt_invalidate(ggtt);
free_node:
- xe_ggtt_node_fini(node);
+ ggtt_node_fini(node);
}
static void ggtt_node_remove_work_func(struct work_struct *work)
@@@ -536,169 -596,38 +596,38 @@@ static void xe_ggtt_invalidate(struct x
ggtt_invalidate_gt_tlb(ggtt->tile->media_gt);
}
- static void xe_ggtt_dump_node(struct xe_ggtt *ggtt,
- const struct drm_mm_node *node, const char *description)
- {
- char buf[10];
-
- if (IS_ENABLED(CONFIG_DRM_XE_DEBUG)) {
- string_get_size(node->size, 1, STRING_UNITS_2, buf, sizeof(buf));
- xe_tile_dbg(ggtt->tile, "GGTT %#llx-%#llx (%s) %s\n",
- node->start, node->start + node->size, buf, description);
- }
- }
-
/**
- * xe_ggtt_node_insert_balloon_locked - prevent allocation of specified GGTT addresses
- * @node: the &xe_ggtt_node to hold reserved GGTT node
- * @start: the starting GGTT address of the reserved region
- * @end: then end GGTT address of the reserved region
- *
- * To be used in cases where ggtt->lock is already taken.
- * Use xe_ggtt_node_remove_balloon_locked() to release a reserved GGTT node.
- *
- * Return: 0 on success or a negative error code on failure.
- */
- int xe_ggtt_node_insert_balloon_locked(struct xe_ggtt_node *node, u64 start, u64 end)
- {
- struct xe_ggtt *ggtt = node->ggtt;
- int err;
-
- xe_tile_assert(ggtt->tile, start < end);
- xe_tile_assert(ggtt->tile, IS_ALIGNED(start, XE_PAGE_SIZE));
- xe_tile_assert(ggtt->tile, IS_ALIGNED(end, XE_PAGE_SIZE));
- xe_tile_assert(ggtt->tile, !drm_mm_node_allocated(&node->base));
- lockdep_assert_held(&ggtt->lock);
-
- node->base.color = 0;
- node->base.start = start;
- node->base.size = end - start;
-
- err = drm_mm_reserve_node(&ggtt->mm, &node->base);
-
- if (xe_tile_WARN(ggtt->tile, err, "Failed to balloon GGTT %#llx-%#llx (%pe)\n",
- node->base.start, node->base.start + node->base.size, ERR_PTR(err)))
- return err;
-
- xe_ggtt_dump_node(ggtt, &node->base, "balloon");
- return 0;
- }
-
- /**
- * xe_ggtt_node_remove_balloon_locked - release a reserved GGTT region
- * @node: the &xe_ggtt_node with reserved GGTT region
- *
- * To be used in cases where ggtt->lock is already taken.
- * See xe_ggtt_node_insert_balloon_locked() for details.
- */
- void xe_ggtt_node_remove_balloon_locked(struct xe_ggtt_node *node)
- {
- if (!xe_ggtt_node_allocated(node))
- return;
-
- lockdep_assert_held(&node->ggtt->lock);
-
- xe_ggtt_dump_node(node->ggtt, &node->base, "remove-balloon");
-
- drm_mm_remove_node(&node->base);
- }
-
- static void xe_ggtt_assert_fit(struct xe_ggtt *ggtt, u64 start, u64 size)
- {
- struct xe_tile *tile = ggtt->tile;
-
- xe_tile_assert(tile, start >= ggtt->start);
- xe_tile_assert(tile, start + size <= ggtt->start + ggtt->size);
- }
-
- /**
- * xe_ggtt_shift_nodes_locked - Shift GGTT nodes to adjust for a change in usable address range.
+ * xe_ggtt_shift_nodes() - Shift GGTT nodes to adjust for a change in usable address range.
* @ggtt: the &xe_ggtt struct instance
- * @shift: change to the location of area provisioned for current VF
+ * @new_start: new location of area provisioned for current VF
*
- * This function moves all nodes from the GGTT VM, to a temp list. These nodes are expected
- * to represent allocations in range formerly assigned to current VF, before the range changed.
- * When the GGTT VM is completely clear of any nodes, they are re-added with shifted offsets.
+ * Ensure that all struct &xe_ggtt_node are moved to the @new_start base address
+ * by changing the base offset of the GGTT.
*
- * The function has no ability of failing - because it shifts existing nodes, without
- * any additional processing. If the nodes were successfully existing at the old address,
- * they will do the same at the new one. A fail inside this function would indicate that
- * the list of nodes was either already damaged, or that the shift brings the address range
- * outside of valid bounds. Both cases justify an assert rather than error code.
+ * This function may be called multiple times during recovery, but if
+ * @new_start is unchanged from the current base, it's a noop.
+ *
+ * @new_start should be a value between xe_wopcm_size() and #GUC_GGTT_TOP.
*/
- void xe_ggtt_shift_nodes_locked(struct xe_ggtt *ggtt, s64 shift)
+ void xe_ggtt_shift_nodes(struct xe_ggtt *ggtt, u64 new_start)
{
- struct xe_tile *tile __maybe_unused = ggtt->tile;
- struct drm_mm_node *node, *tmpn;
- LIST_HEAD(temp_list_head);
+ guard(mutex)(&ggtt->lock);
- lockdep_assert_held(&ggtt->lock);
+ xe_tile_assert(ggtt->tile, new_start >= xe_wopcm_size(tile_to_xe(ggtt->tile)));
+ xe_tile_assert(ggtt->tile, new_start + ggtt->size <= GUC_GGTT_TOP);
- if (IS_ENABLED(CONFIG_DRM_XE_DEBUG))
- drm_mm_for_each_node_safe(node, tmpn, &ggtt->mm)
- xe_ggtt_assert_fit(ggtt, node->start + shift, node->size);
-
- drm_mm_for_each_node_safe(node, tmpn, &ggtt->mm) {
- drm_mm_remove_node(node);
- list_add(&node->node_list, &temp_list_head);
- }
-
- list_for_each_entry_safe(node, tmpn, &temp_list_head, node_list) {
- list_del(&node->node_list);
- node->start += shift;
- drm_mm_reserve_node(&ggtt->mm, node);
- xe_tile_assert(tile, drm_mm_node_allocated(node));
- }
+ /* pairs with READ_ONCE in xe_ggtt_node_addr() */
+ WRITE_ONCE(ggtt->start, new_start);
}
- static int xe_ggtt_node_insert_locked(struct xe_ggtt_node *node,
+ static int xe_ggtt_insert_node_locked(struct xe_ggtt_node *node,
u32 size, u32 align, u32 mm_flags)
{
return drm_mm_insert_node_generic(&node->ggtt->mm, &node->base, size, align, 0,
mm_flags);
}
- /**
- * xe_ggtt_node_insert - Insert a &xe_ggtt_node into the GGTT
- * @node: the &xe_ggtt_node to be inserted
- * @size: size of the node
- * @align: alignment constrain of the node
- *
- * It cannot be called without first having called xe_ggtt_init() once.
- *
- * Return: 0 on success or a negative error code on failure.
- */
- int xe_ggtt_node_insert(struct xe_ggtt_node *node, u32 size, u32 align)
- {
- int ret;
-
- if (!node || !node->ggtt)
- return -ENOENT;
-
- mutex_lock(&node->ggtt->lock);
- ret = xe_ggtt_node_insert_locked(node, size, align,
- DRM_MM_INSERT_HIGH);
- mutex_unlock(&node->ggtt->lock);
-
- return ret;
- }
-
- /**
- * xe_ggtt_node_init - Initialize %xe_ggtt_node struct
- * @ggtt: the &xe_ggtt where the new node will later be inserted/reserved.
- *
- * This function will allocate the struct %xe_ggtt_node and return its pointer.
- * This struct will then be freed after the node removal upon xe_ggtt_node_remove()
- * or xe_ggtt_node_remove_balloon_locked().
- *
- * Having %xe_ggtt_node struct allocated doesn't mean that the node is already
- * allocated in GGTT. Only xe_ggtt_node_insert(), allocation through
- * xe_ggtt_node_insert_transform(), or xe_ggtt_node_insert_balloon_locked() will ensure the node is inserted or reserved
- * in GGTT.
- *
- * Return: A pointer to %xe_ggtt_node struct on success. An ERR_PTR otherwise.
- **/
- struct xe_ggtt_node *xe_ggtt_node_init(struct xe_ggtt *ggtt)
+ static struct xe_ggtt_node *ggtt_node_init(struct xe_ggtt *ggtt)
{
struct xe_ggtt_node *node = kzalloc_obj(*node, GFP_NOFS);
@@@ -712,30 -641,31 +641,31 @@@
}
/**
- * xe_ggtt_node_fini - Forcebly finalize %xe_ggtt_node struct
- * @node: the &xe_ggtt_node to be freed
+ * xe_ggtt_insert_node - Insert a &xe_ggtt_node into the GGTT
+ * @ggtt: the &xe_ggtt into which the node should be inserted.
+ * @size: size of the node
+ * @align: alignment constrain of the node
*
- * If anything went wrong with either xe_ggtt_node_insert(), xe_ggtt_node_insert_locked(),
- * or xe_ggtt_node_insert_balloon_locked(); and this @node is not going to be reused, then,
- * this function needs to be called to free the %xe_ggtt_node struct
- **/
- void xe_ggtt_node_fini(struct xe_ggtt_node *node)
- {
- kfree(node);
- }
-
- /**
- * xe_ggtt_node_allocated - Check if node is allocated in GGTT
- * @node: the &xe_ggtt_node to be inspected
- *
- * Return: True if allocated, False otherwise.
+ * Return: &xe_ggtt_node on success or a ERR_PTR on failure.
*/
- bool xe_ggtt_node_allocated(const struct xe_ggtt_node *node)
+ struct xe_ggtt_node *xe_ggtt_insert_node(struct xe_ggtt *ggtt, u32 size, u32 align)
{
- if (!node || !node->ggtt)
- return false;
+ struct xe_ggtt_node *node;
+ int ret;
- return drm_mm_node_allocated(&node->base);
+ node = ggtt_node_init(ggtt);
+ if (IS_ERR(node))
+ return node;
+
+ guard(mutex)(&ggtt->lock);
+ ret = xe_ggtt_insert_node_locked(node, size, align,
+ DRM_MM_INSERT_HIGH);
+ if (ret) {
+ ggtt_node_fini(node);
+ return ERR_PTR(ret);
+ }
+
+ return node;
}
/**
@@@ -768,7 -698,7 +698,7 @@@ static void xe_ggtt_map_bo(struct xe_gg
if (XE_WARN_ON(!node))
return;
- start = node->base.start;
+ start = xe_ggtt_node_addr(node);
end = start + xe_bo_size(bo);
if (!xe_bo_is_vram(bo) && !xe_bo_is_stolen(bo)) {
@@@ -809,7 -739,7 +739,7 @@@ void xe_ggtt_map_bo_unlocked(struct xe_
}
/**
- * xe_ggtt_node_insert_transform - Insert a newly allocated &xe_ggtt_node into the GGTT
+ * xe_ggtt_insert_node_transform - Insert a newly allocated &xe_ggtt_node into the GGTT
* @ggtt: the &xe_ggtt where the node will inserted/reserved.
* @bo: The bo to be transformed
* @pte_flags: The extra GGTT flags to add to mapping.
@@@ -823,7 -753,7 +753,7 @@@
*
* Return: A pointer to %xe_ggtt_node struct on success. An ERR_PTR otherwise.
*/
- struct xe_ggtt_node *xe_ggtt_node_insert_transform(struct xe_ggtt *ggtt,
+ struct xe_ggtt_node *xe_ggtt_insert_node_transform(struct xe_ggtt *ggtt,
struct xe_bo *bo, u64 pte_flags,
u64 size, u32 align,
xe_ggtt_transform_cb transform, void *arg)
@@@ -831,7 -761,7 +761,7 @@@
struct xe_ggtt_node *node;
int ret;
- node = xe_ggtt_node_init(ggtt);
+ node = ggtt_node_init(ggtt);
if (IS_ERR(node))
return ERR_CAST(node);
@@@ -840,7 -770,7 +770,7 @@@
goto err;
}
- ret = xe_ggtt_node_insert_locked(node, size, align, 0);
+ ret = xe_ggtt_insert_node_locked(node, size, align, 0);
if (ret)
goto err_unlock;
@@@ -855,7 -785,7 +785,7 @@@
err_unlock:
mutex_unlock(&ggtt->lock);
err:
- xe_ggtt_node_fini(node);
+ ggtt_node_fini(node);
return ERR_PTR(ret);
}
@@@ -881,7 -811,7 +811,7 @@@ static int __xe_ggtt_insert_bo_at(struc
xe_pm_runtime_get_noresume(tile_to_xe(ggtt->tile));
- bo->ggtt_node[tile_id] = xe_ggtt_node_init(ggtt);
+ bo->ggtt_node[tile_id] = ggtt_node_init(ggtt);
if (IS_ERR(bo->ggtt_node[tile_id])) {
err = PTR_ERR(bo->ggtt_node[tile_id]);
bo->ggtt_node[tile_id] = NULL;
@@@ -889,10 -819,30 +819,30 @@@
}
mutex_lock(&ggtt->lock);
+ /*
+ * When inheriting the initial framebuffer, the framebuffer is
+ * physically located at VRAM address 0, and usually at GGTT address 0 too.
+ *
+ * The display code will ask for a GGTT allocation between end of BO and
+ * remainder of GGTT, unaware that the start is reserved by WOPCM.
+ */
+ if (start >= ggtt->start)
+ start -= ggtt->start;
+ else
+ start = 0;
+
+ /* Should never happen, but since we handle start, fail graciously for end */
+ if (end >= ggtt->start)
+ end -= ggtt->start;
+ else
+ end = 0;
+
+ xe_tile_assert(ggtt->tile, end >= start + xe_bo_size(bo));
+
err = drm_mm_insert_node_in_range(&ggtt->mm, &bo->ggtt_node[tile_id]->base,
xe_bo_size(bo), alignment, 0, start, end, 0);
if (err) {
- xe_ggtt_node_fini(bo->ggtt_node[tile_id]);
+ ggtt_node_fini(bo->ggtt_node[tile_id]);
bo->ggtt_node[tile_id] = NULL;
} else {
u16 cache_mode = bo->flags & XE_BO_FLAG_NEEDS_UC ? XE_CACHE_NONE : XE_CACHE_WB;
@@@ -1000,18 -950,16 +950,16 @@@ static u64 xe_encode_vfid_pte(u16 vfid
return FIELD_PREP(GGTT_PTE_VFID, vfid) | XE_PAGE_PRESENT;
}
- static void xe_ggtt_assign_locked(struct xe_ggtt *ggtt, const struct drm_mm_node *node, u16 vfid)
+ static void xe_ggtt_assign_locked(const struct xe_ggtt_node *node, u16 vfid)
{
- u64 start = node->start;
- u64 size = node->size;
+ struct xe_ggtt *ggtt = node->ggtt;
+ u64 start = xe_ggtt_node_addr(node);
+ u64 size = xe_ggtt_node_size(node);
u64 end = start + size - 1;
u64 pte = xe_encode_vfid_pte(vfid);
lockdep_assert_held(&ggtt->lock);
- if (!drm_mm_node_allocated(node))
- return;
-
while (start < end) {
ggtt->pt_ops->ggtt_set_pte(ggtt, start, pte);
start += XE_PAGE_SIZE;
@@@ -1031,9 -979,8 +979,8 @@@
*/
void xe_ggtt_assign(const struct xe_ggtt_node *node, u16 vfid)
{
- mutex_lock(&node->ggtt->lock);
- xe_ggtt_assign_locked(node->ggtt, &node->base, vfid);
- mutex_unlock(&node->ggtt->lock);
+ guard(mutex)(&node->ggtt->lock);
+ xe_ggtt_assign_locked(node, vfid);
}
/**
@@@ -1055,14 -1002,14 +1002,14 @@@ int xe_ggtt_node_save(struct xe_ggtt_no
if (!node)
return -ENOENT;
- guard(mutex)(&node->ggtt->lock);
+ ggtt = node->ggtt;
+ guard(mutex)(&ggtt->lock);
if (xe_ggtt_node_pt_size(node) != size)
return -EINVAL;
- ggtt = node->ggtt;
- start = node->base.start;
- end = start + node->base.size - 1;
+ start = xe_ggtt_node_addr(node);
+ end = start + xe_ggtt_node_size(node) - 1;
while (start < end) {
pte = ggtt->pt_ops->ggtt_get_pte(ggtt, start);
@@@ -1095,14 -1042,14 +1042,14 @@@ int xe_ggtt_node_load(struct xe_ggtt_no
if (!node)
return -ENOENT;
- guard(mutex)(&node->ggtt->lock);
+ ggtt = node->ggtt;
+ guard(mutex)(&ggtt->lock);
if (xe_ggtt_node_pt_size(node) != size)
return -EINVAL;
- ggtt = node->ggtt;
- start = node->base.start;
- end = start + node->base.size - 1;
+ start = xe_ggtt_node_addr(node);
+ end = start + xe_ggtt_node_size(node) - 1;
while (start < end) {
vfid_pte = u64_replace_bits(*buf++, vfid, GGTT_PTE_VFID);
@@@ -1209,7 -1156,8 +1156,8 @@@ u64 xe_ggtt_read_pte(struct xe_ggtt *gg
*/
u64 xe_ggtt_node_addr(const struct xe_ggtt_node *node)
{
- return node->base.start;
+ /* pairs with WRITE_ONCE in xe_ggtt_shift_nodes() */
+ return node->base.start + READ_ONCE(node->ggtt->start);
}
/**