* [PATCH v9 00/23] gpu: nova-core: Add memory management support
From: Joel Fernandes @ 2026-03-11 0:39 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
This series adds memory management support to nova-core, including VRAM
allocation, PRAMIN, VMM, page table walking, and BAR1 reads/writes.
These are critical for channel management, vGPU, and all other memory
management use cases of nova-core.
It is based on linux-next. All patches, along with the dependencies
(such as the buddy bindings and CList), can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (tag: nova-mm-v9-20260310)
Change log:
Changes from v8 to v9:
- Added fixes from Zhi Wang for bitfield position changes in virtual addresses
and larger BAR1 sizes on some platforms. Tested and working for the vGPU use case!
- Refactored Gsp::boot() to return only GspStaticInfo, removing FbLayout (Alex)
- bar1_pde_base and bar2_pde_base are now accessed via GspStaticInfo directly (Alex)
- Added new patch "gsp: Expose total physical VRAM end from FB region info"
introducing total_fb_end() to expose VRAM extent. (Alex)
- Consolidated usable VRAM and BarUser setup; removed dedicated
"fb: Add usable_vram field to FbLayout", "mm: Use usable VRAM region for
buddy allocator", and "mm: Add BarUser to struct Gpu and create at boot".
Changes from v7 to v8:
- Incorporated "Select GPU_BUDDY for VRAM allocation" patch from the
dependency series (Alex).
- Significant patch reordering for better logical flow (GSP/FB patches
moved earlier, page table patches, Vmm, Bar1, tests) (Alex).
- Replaced several 'as' usages with into_safe_cast() (Danilo, Alex).
- Updated BAR 1 test cases to include exercising the block size API (Eliot, Danilo).
Changes from v6 to v7:
- Addressed DMA fence signalling usecase per Danilo's feedback.
Changes prior to v6:
- Simplified PRAMIN code (John Hubbard, Alex Courbot).
- Handled different MMU versions: ver2 versus ver3 (John Hubbard).
- Added a BAR1 use case so we have a user of the DRM Buddy / VMM (John Hubbard).
- Iterated on the CList/buddy bindings.
Link to v8: https://lore.kernel.org/all/20260224225323.3312204-1-joelagnelf@nvidia.com/
Link to v7: https://lore.kernel.org/all/20260218212020.800836-1-joelagnelf@nvidia.com/
Joel Fernandes (22):
gpu: nova-core: Select GPU_BUDDY for VRAM allocation
gpu: nova-core: Kconfig: Sort select statements alphabetically
gpu: nova-core: gsp: Return GspStaticInfo from boot()
gpu: nova-core: gsp: Extract usable FB region from GSP
gpu: nova-core: gsp: Expose total physical VRAM end from FB region
info
gpu: nova-core: mm: Add support to use PRAMIN windows to write to VRAM
docs: gpu: nova-core: Document the PRAMIN aperture mechanism
gpu: nova-core: mm: Add common memory management types
gpu: nova-core: mm: Add TLB flush support
gpu: nova-core: mm: Add GpuMm centralized memory manager
gpu: nova-core: mm: Add common types for all page table formats
gpu: nova-core: mm: Add MMU v2 page table types
gpu: nova-core: mm: Add MMU v3 page table types
gpu: nova-core: mm: Add unified page table entry wrapper enums
gpu: nova-core: mm: Add page table walker for MMU v2/v3
gpu: nova-core: mm: Add Virtual Memory Manager
gpu: nova-core: mm: Add virtual address range tracking to VMM
gpu: nova-core: mm: Add multi-page mapping API to VMM
gpu: nova-core: Add BAR1 aperture type and size constant
gpu: nova-core: mm: Add BAR1 user interface
gpu: nova-core: mm: Add BAR1 memory management self-tests
gpu: nova-core: mm: Add PRAMIN aperture self-tests
Zhi Wang (1):
gpu: nova-core: Use runtime BAR1 size instead of hardcoded 256MB
Documentation/gpu/nova/core/pramin.rst | 125 +++++
Documentation/gpu/nova/index.rst | 1 +
MAINTAINERS | 14 +-
drivers/gpu/nova-core/Kconfig | 13 +-
drivers/gpu/nova-core/driver.rs | 3 +
drivers/gpu/nova-core/gpu.rs | 90 ++-
drivers/gpu/nova-core/gsp/boot.rs | 12 +-
drivers/gpu/nova-core/gsp/commands.rs | 18 +-
drivers/gpu/nova-core/gsp/fw/commands.rs | 59 ++
drivers/gpu/nova-core/mm.rs | 235 ++++++++
drivers/gpu/nova-core/mm/bar_user.rs | 412 ++++++++++++++
drivers/gpu/nova-core/mm/pagetable.rs | 481 ++++++++++++++++
drivers/gpu/nova-core/mm/pagetable/ver2.rs | 232 ++++++++
drivers/gpu/nova-core/mm/pagetable/ver3.rs | 337 ++++++++++++
drivers/gpu/nova-core/mm/pagetable/walk.rs | 218 ++++++++
drivers/gpu/nova-core/mm/pramin.rs | 502 +++++++++++++++++
drivers/gpu/nova-core/mm/tlb.rs | 90 +++
drivers/gpu/nova-core/mm/vmm.rs | 498 +++++++++++++++++
drivers/gpu/nova-core/nova_core.rs | 1 +
drivers/gpu/nova-core/regs.rs | 39 ++
20 files changed, 3370 insertions(+), 10 deletions(-)
create mode 100644 Documentation/gpu/nova/core/pramin.rst
create mode 100644 drivers/gpu/nova-core/mm.rs
create mode 100644 drivers/gpu/nova-core/mm/bar_user.rs
create mode 100644 drivers/gpu/nova-core/mm/pagetable.rs
create mode 100644 drivers/gpu/nova-core/mm/pagetable/ver2.rs
create mode 100644 drivers/gpu/nova-core/mm/pagetable/ver3.rs
create mode 100644 drivers/gpu/nova-core/mm/pagetable/walk.rs
create mode 100644 drivers/gpu/nova-core/mm/pramin.rs
create mode 100644 drivers/gpu/nova-core/mm/tlb.rs
create mode 100644 drivers/gpu/nova-core/mm/vmm.rs
--
2.34.1
* [PATCH v9 01/23] gpu: nova-core: Select GPU_BUDDY for VRAM allocation
From: Joel Fernandes @ 2026-03-11 0:39 UTC (permalink / raw)
To: linux-kernel
nova-core will use the GPU buddy allocator for physical VRAM management.
Enable it in Kconfig.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/nova-core/Kconfig b/drivers/gpu/nova-core/Kconfig
index 527920f9c4d3..809485167aff 100644
--- a/drivers/gpu/nova-core/Kconfig
+++ b/drivers/gpu/nova-core/Kconfig
@@ -5,6 +5,7 @@ config NOVA_CORE
depends on RUST
select RUST_FW_LOADER_ABSTRACTIONS
select AUXILIARY_BUS
+ select GPU_BUDDY
default n
help
Choose this if you want to build the Nova Core driver for Nvidia
--
2.34.1
* [PATCH v9 02/23] gpu: nova-core: Kconfig: Sort select statements alphabetically
From: Joel Fernandes @ 2026-03-11 0:39 UTC (permalink / raw)
To: linux-kernel
Reorder the select statements in NOVA_CORE Kconfig to be in
alphabetical order.
Suggested-by: Danilo Krummrich <dakr@kernel.org>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/nova-core/Kconfig b/drivers/gpu/nova-core/Kconfig
index 809485167aff..6513007bf66f 100644
--- a/drivers/gpu/nova-core/Kconfig
+++ b/drivers/gpu/nova-core/Kconfig
@@ -3,9 +3,9 @@ config NOVA_CORE
depends on 64BIT
depends on PCI
depends on RUST
- select RUST_FW_LOADER_ABSTRACTIONS
select AUXILIARY_BUS
select GPU_BUDDY
+ select RUST_FW_LOADER_ABSTRACTIONS
default n
help
Choose this if you want to build the Nova Core driver for Nvidia
--
2.34.1
* [PATCH v9 03/23] gpu: nova-core: gsp: Return GspStaticInfo from boot()
From: Joel Fernandes @ 2026-03-11 0:39 UTC (permalink / raw)
To: linux-kernel
Refactor the GSP boot function to return only the GspStaticInfo,
removing the FbLayout from the return tuple.
This gives memory management initialization access to:
- bar1_pde_base: BAR1 page directory base.
- bar2_pde_base: BAR2 page directory base.
- usable memory regions in vidmem.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/gpu.rs | 9 +++++++--
drivers/gpu/nova-core/gsp/boot.rs | 12 ++++++++----
2 files changed, 15 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index 60c85fffaeaf..c324d96bd0c6 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -18,7 +18,10 @@
},
fb::SysmemFlush,
gfw,
- gsp::Gsp,
+ gsp::{
+ commands::GetGspStaticInfoReply,
+ Gsp, //
+ },
regs,
};
@@ -252,6 +255,8 @@ pub(crate) struct Gpu {
/// GSP runtime data. Temporarily an empty placeholder.
#[pin]
gsp: Gsp,
+ /// Static GPU information from GSP.
+ gsp_static_info: GetGspStaticInfoReply,
}
impl Gpu {
@@ -283,7 +288,7 @@ pub(crate) fn new<'a>(
gsp <- Gsp::new(pdev),
- _: { gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)? },
+ gsp_static_info: { gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)? },
bar: devres_bar,
})
diff --git a/drivers/gpu/nova-core/gsp/boot.rs b/drivers/gpu/nova-core/gsp/boot.rs
index c56029f444cb..73a711f03044 100644
--- a/drivers/gpu/nova-core/gsp/boot.rs
+++ b/drivers/gpu/nova-core/gsp/boot.rs
@@ -32,7 +32,10 @@
},
gpu::Chipset,
gsp::{
- commands,
+ commands::{
+ self,
+ GetGspStaticInfoReply, //
+ },
sequencer::{
GspSequencer,
GspSequencerParams, //
@@ -126,7 +129,8 @@ fn run_fwsec_frts(
/// user-space, patching them with signatures, and building firmware-specific intricate data
/// structures that the GSP will use at runtime.
///
- /// Upon return, the GSP is up and running, and its runtime object given as return value.
+ /// Upon return, the GSP is up and running, and static GPU information is returned.
+ ///
pub(crate) fn boot(
mut self: Pin<&mut Self>,
pdev: &pci::Device<device::Bound>,
@@ -134,7 +138,7 @@ pub(crate) fn boot(
chipset: Chipset,
gsp_falcon: &Falcon<Gsp>,
sec2_falcon: &Falcon<Sec2>,
- ) -> Result {
+ ) -> Result<GetGspStaticInfoReply> {
let dev = pdev.as_ref();
let bios = Vbios::new(dev, bar)?;
@@ -225,6 +229,6 @@ pub(crate) fn boot(
Err(e) => dev_warn!(pdev, "GPU name unavailable: {:?}\n", e),
}
- Ok(())
+ Ok(info)
}
}
--
2.34.1
* [PATCH v9 04/23] gpu: nova-core: gsp: Extract usable FB region from GSP
From: Joel Fernandes @ 2026-03-11 0:39 UTC (permalink / raw)
To: linux-kernel
Add first_usable_fb_region() to GspStaticConfigInfo to extract the first
usable FB region from GSP's fbRegionInfoParams. Usable regions are those
that are not reserved or protected, and that support compression and ISO.
The extracted region is stored in GetGspStaticInfoReply and exposed via
usable_fb_region() API for use by the memory subsystem.
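The selection rules can be sketched standalone as follows. This is a plain-Rust
sketch: `FbRegion` and its field names are hypothetical stand-ins for the GSP
firmware bindings, which the real code reads via `fbRegionInfoParams.fbRegion`.

```rust
/// Simplified stand-in for one GSP FB region descriptor (hypothetical
/// fields mirroring the firmware bindings, not the real structures).
#[derive(Clone, Copy)]
pub struct FbRegion {
    pub base: u64,
    pub limit: u64, // inclusive upper bound
    pub reserved: u32,
    pub protected: bool,
    pub supports_compressed: bool,
    pub supports_iso: bool,
}

/// Return the first region usable for driver allocation as a
/// (base, size) tuple, applying the selection rules described above.
pub fn first_usable_fb_region(regions: &[FbRegion]) -> Option<(u64, u64)> {
    regions.iter().find_map(|r| {
        // Skip malformed regions where limit < base.
        if r.limit < r.base {
            return None;
        }
        // Not reserved, not protected, supports compression and ISO.
        if r.reserved == 0 && !r.protected && r.supports_compressed && r.supports_iso {
            // Inclusive limit, so size is limit - base + 1.
            Some((r.base, r.limit - r.base + 1))
        } else {
            None
        }
    })
}
```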
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/gsp/commands.rs | 11 ++++++--
drivers/gpu/nova-core/gsp/fw/commands.rs | 32 ++++++++++++++++++++++++
2 files changed, 41 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
index 8f270eca33be..8d5780d9cace 100644
--- a/drivers/gpu/nova-core/gsp/commands.rs
+++ b/drivers/gpu/nova-core/gsp/commands.rs
@@ -4,6 +4,7 @@
array,
convert::Infallible,
ffi::FromBytesUntilNulError,
+ ops::Range,
str::Utf8Error, //
};
@@ -186,22 +187,28 @@ fn init(&self) -> impl Init<Self::Command, Self::InitError> {
}
}
-/// The reply from the GSP to the [`GetGspInfo`] command.
+/// The reply from the GSP to the [`GetGspStaticInfo`] command.
pub(crate) struct GetGspStaticInfoReply {
gpu_name: [u8; 64],
+ /// Usable FB (VRAM) region for driver memory allocation.
+ #[expect(dead_code)]
+ pub(crate) usable_fb_region: Range<u64>,
}
impl MessageFromGsp for GetGspStaticInfoReply {
const FUNCTION: MsgFunction = MsgFunction::GetGspStaticInfo;
type Message = GspStaticConfigInfo;
- type InitError = Infallible;
+ type InitError = Error;
fn read(
msg: &Self::Message,
_sbuffer: &mut SBufferIter<array::IntoIter<&[u8], 2>>,
) -> Result<Self, Self::InitError> {
+ let (base, size) = msg.first_usable_fb_region().ok_or(ENODEV)?;
+
Ok(GetGspStaticInfoReply {
gpu_name: msg.gpu_name_str(),
+ usable_fb_region: base..base.saturating_add(size),
})
}
}
diff --git a/drivers/gpu/nova-core/gsp/fw/commands.rs b/drivers/gpu/nova-core/gsp/fw/commands.rs
index 67f44421fcc3..cef86cab8a12 100644
--- a/drivers/gpu/nova-core/gsp/fw/commands.rs
+++ b/drivers/gpu/nova-core/gsp/fw/commands.rs
@@ -5,6 +5,7 @@
use kernel::{device, pci};
use crate::gsp::GSP_PAGE_SIZE;
+use crate::num::IntoSafeCast;
use super::bindings;
@@ -115,6 +116,37 @@ impl GspStaticConfigInfo {
pub(crate) fn gpu_name_str(&self) -> [u8; 64] {
self.0.gpuNameString
}
+
+ /// Extract the first usable FB region from GSP firmware data.
+ ///
+ /// Returns the first region suitable for driver memory allocation as a `(base, size)` tuple.
+ /// Usable regions are those that:
+ /// - Are not reserved for firmware internal use.
+ /// - Are not protected (hardware-enforced access restrictions).
+ /// - Support compression (can use GPU memory compression for bandwidth).
+ /// - Support ISO (isochronous memory for display requiring guaranteed bandwidth).
+ pub(crate) fn first_usable_fb_region(&self) -> Option<(u64, u64)> {
+ let fb_info = &self.0.fbRegionInfoParams;
+ for i in 0..fb_info.numFBRegions.into_safe_cast() {
+ if let Some(reg) = fb_info.fbRegion.get(i) {
+ // Skip malformed regions where limit < base.
+ if reg.limit < reg.base {
+ continue;
+ }
+
+ // Filter: not reserved, not protected, supports compression and ISO.
+ if reg.reserved == 0
+ && reg.bProtected == 0
+ && reg.supportCompressed != 0
+ && reg.supportISO != 0
+ {
+ let size = reg.limit - reg.base + 1;
+ return Some((reg.base, size));
+ }
+ }
+ }
+ None
+ }
}
// SAFETY: Padding is explicit and will not contain uninitialized data.
--
2.34.1
* [PATCH v9 05/23] gpu: nova-core: gsp: Expose total physical VRAM end from FB region info
From: Joel Fernandes @ 2026-03-11 0:39 UTC (permalink / raw)
To: linux-kernel
Add `total_fb_end()` to `GspStaticConfigInfo` that computes the exclusive end
address of the highest valid FB region covering both usable and GSP-reserved
areas.
This allows callers to know the full physical VRAM extent, not just the
allocatable portion.
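The computation reduces to taking the maximum `limit + 1` over all well-formed
regions. A standalone sketch, with regions simplified to (base, inclusive limit)
pairs rather than the real firmware structures:

```rust
/// Compute the exclusive end of physical VRAM as the maximum
/// `limit + 1` over all well-formed regions. Returns None if no
/// valid region exists.
pub fn total_fb_end(regions: &[(u64, u64)]) -> Option<u64> {
    regions
        .iter()
        .copied()
        // Skip malformed entries where limit < base.
        .filter(|&(base, limit)| limit >= base)
        // Inclusive limit -> exclusive end, guarding against overflow.
        .map(|(_, limit)| limit.saturating_add(1))
        .max()
}
```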
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/gsp/commands.rs | 6 ++++++
drivers/gpu/nova-core/gsp/fw/commands.rs | 19 +++++++++++++++++++
2 files changed, 25 insertions(+)
diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
index 8d5780d9cace..389d215098c6 100644
--- a/drivers/gpu/nova-core/gsp/commands.rs
+++ b/drivers/gpu/nova-core/gsp/commands.rs
@@ -193,6 +193,9 @@ pub(crate) struct GetGspStaticInfoReply {
/// Usable FB (VRAM) region for driver memory allocation.
#[expect(dead_code)]
pub(crate) usable_fb_region: Range<u64>,
+ /// End of VRAM.
+ #[expect(dead_code)]
+ pub(crate) total_fb_end: u64,
}
impl MessageFromGsp for GetGspStaticInfoReply {
@@ -206,9 +209,12 @@ fn read(
) -> Result<Self, Self::InitError> {
let (base, size) = msg.first_usable_fb_region().ok_or(ENODEV)?;
+ let total_fb_end = msg.total_fb_end().ok_or(ENODEV)?;
+
Ok(GetGspStaticInfoReply {
gpu_name: msg.gpu_name_str(),
usable_fb_region: base..base.saturating_add(size),
+ total_fb_end,
})
}
}
diff --git a/drivers/gpu/nova-core/gsp/fw/commands.rs b/drivers/gpu/nova-core/gsp/fw/commands.rs
index cef86cab8a12..acaf92cd6735 100644
--- a/drivers/gpu/nova-core/gsp/fw/commands.rs
+++ b/drivers/gpu/nova-core/gsp/fw/commands.rs
@@ -147,6 +147,25 @@ pub(crate) fn first_usable_fb_region(&self) -> Option<(u64, u64)> {
}
None
}
+
+ /// Compute the end of physical VRAM from all FB regions.
+ pub(crate) fn total_fb_end(&self) -> Option<u64> {
+ let fb_info = &self.0.fbRegionInfoParams;
+ let mut max_end: Option<u64> = None;
+ for i in 0..fb_info.numFBRegions.into_safe_cast() {
+ if let Some(reg) = fb_info.fbRegion.get(i) {
+ if reg.limit < reg.base {
+ continue;
+ }
+ let end = reg.limit.saturating_add(1);
+ max_end = Some(match max_end {
+ None => end,
+ Some(prev) => prev.max(end),
+ });
+ }
+ }
+ max_end
+ }
}
// SAFETY: Padding is explicit and will not contain uninitialized data.
--
2.34.1
* [PATCH v9 06/23] gpu: nova-core: mm: Add support to use PRAMIN windows to write to VRAM
From: Joel Fernandes @ 2026-03-11 0:39 UTC (permalink / raw)
To: linux-kernel
The PRAMIN aperture is a crucial mechanism for direct CPU reads and writes
to VRAM. Add support for it.
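The sliding-window arithmetic used by the patch (a 1MB aperture at BAR0 +
0x700000 whose 64KB-aligned base selects which slice of VRAM is visible) can be
sketched standalone; `window_for()` is a hypothetical helper mirroring the
repositioning logic, not the driver's actual `compute_window()`:

```rust
const PRAMIN_BASE: usize = 0x700000; // PRAMIN aperture offset in BAR0
const PRAMIN_SIZE: u64 = 1 << 20;    // 1MB sliding window
const WINDOW_ALIGN: u64 = 1 << 16;   // window base must be 64KB aligned

/// Given a VRAM address and access size, compute the 64KB-aligned
/// window base and the BAR0 offset at which that address becomes
/// accessible. Returns None if the access cannot fit in one window.
pub fn window_for(vram_addr: u64, access_size: u64) -> Option<(u64, usize)> {
    // Align the window base down to the 64KB hardware granularity.
    let base = vram_addr & !(WINDOW_ALIGN - 1);
    let offset = vram_addr - base;
    // The whole access must fit inside the 1MB window.
    if offset + access_size > PRAMIN_SIZE {
        return None;
    }
    Some((base, PRAMIN_BASE + offset as usize))
}
```

With the window repositioned to `base`, the CPU then performs an ordinary BAR0
MMIO access at the returned offset, which is what the generated `try_read*` /
`try_write*` accessors in the patch do after calling `compute_window()`.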
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm.rs | 5 +
drivers/gpu/nova-core/mm/pramin.rs | 293 +++++++++++++++++++++++++++++
drivers/gpu/nova-core/nova_core.rs | 1 +
drivers/gpu/nova-core/regs.rs | 6 +
4 files changed, 305 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm.rs
create mode 100644 drivers/gpu/nova-core/mm/pramin.rs
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
new file mode 100644
index 000000000000..7a5dd4220c67
--- /dev/null
+++ b/drivers/gpu/nova-core/mm.rs
@@ -0,0 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory management subsystems for nova-core.
+
+pub(crate) mod pramin;
diff --git a/drivers/gpu/nova-core/mm/pramin.rs b/drivers/gpu/nova-core/mm/pramin.rs
new file mode 100644
index 000000000000..707794f49add
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pramin.rs
@@ -0,0 +1,293 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Direct VRAM access through the PRAMIN aperture.
+//!
+//! PRAMIN provides a 1MB sliding window into VRAM through BAR0, allowing the CPU to access
+//! video memory directly. Access is managed through a two-level API:
+//!
+//! - [`Pramin`]: The parent object that owns the BAR0 reference and synchronization lock.
+//! - [`PraminWindow`]: A guard object that holds exclusive PRAMIN access for its lifetime.
+//!
+//! The PRAMIN aperture is a 1MB region at BAR0 + 0x700000 for all GPUs. The window base is
+//! controlled by the `NV_PBUS_BAR0_WINDOW` register and is 64KB aligned.
+//!
+//! # Examples
+//!
+//! ## Basic read/write
+//!
+//! ```no_run
+//! use crate::driver::Bar0;
+//! use crate::mm::pramin;
+//! use kernel::devres::Devres;
+//! use kernel::prelude::*;
+//! use kernel::sync::Arc;
+//!
+//! fn example(devres_bar: Arc<Devres<Bar0>>, vram_region: core::ops::Range<u64>) -> Result<()> {
+//! let pramin = Arc::pin_init(pramin::Pramin::new(devres_bar, vram_region)?, GFP_KERNEL)?;
+//! let mut window = pramin.window()?;
+//!
+//! // Write and read back.
+//! window.try_write32(0x100, 0xDEADBEEF)?;
+//! let val = window.try_read32(0x100)?;
+//! assert_eq!(val, 0xDEADBEEF);
+//!
+//! Ok(())
+//! }
+//! ```
+//!
+//! ## Auto-repositioning across VRAM regions
+//!
+//! ```no_run
+//! use crate::driver::Bar0;
+//! use crate::mm::pramin;
+//! use kernel::devres::Devres;
+//! use kernel::prelude::*;
+//! use kernel::sync::Arc;
+//!
+//! fn example(devres_bar: Arc<Devres<Bar0>>, vram_region: core::ops::Range<u64>) -> Result<()> {
+//! let pramin = Arc::pin_init(pramin::Pramin::new(devres_bar, vram_region)?, GFP_KERNEL)?;
+//! let mut window = pramin.window()?;
+//!
+//! // Access first 1MB region.
+//! window.try_write32(0x100, 0x11111111)?;
+//!
+//! // Access at 2MB - window auto-repositions.
+//! window.try_write32(0x200000, 0x22222222)?;
+//!
+//! // Back to first region - window repositions again.
+//! let val = window.try_read32(0x100)?;
+//! assert_eq!(val, 0x11111111);
+//!
+//! Ok(())
+//! }
+//! ```
+
+#![expect(unused)]
+
+use core::ops::Range;
+
+use crate::{
+ driver::Bar0,
+ num::IntoSafeCast,
+ regs, //
+};
+
+use kernel::{
+ devres::Devres,
+ io::Io,
+ new_mutex,
+ prelude::*,
+ revocable::RevocableGuard,
+ sizes::{
+ SZ_1M,
+ SZ_64K, //
+ },
+ sync::{
+ lock::mutex::MutexGuard,
+ Arc,
+ Mutex, //
+ },
+};
+
+/// Target memory type for the BAR0 window register.
+///
+/// Only VRAM is supported; Hopper+ GPUs do not support other targets.
+#[repr(u8)]
+#[derive(Debug, Default)]
+pub(crate) enum Bar0WindowTarget {
+ /// Video RAM (GPU framebuffer memory).
+ #[default]
+ Vram = 0,
+}
+
+impl From<Bar0WindowTarget> for u8 {
+ fn from(value: Bar0WindowTarget) -> Self {
+ value as u8
+ }
+}
+
+impl TryFrom<u8> for Bar0WindowTarget {
+ type Error = Error;
+
+ fn try_from(value: u8) -> Result<Self> {
+ match value {
+ 0 => Ok(Self::Vram),
+ _ => Err(EINVAL),
+ }
+ }
+}
+
+/// PRAMIN aperture base offset in BAR0.
+const PRAMIN_BASE: usize = 0x700000;
+
+/// PRAMIN aperture size (1MB).
+const PRAMIN_SIZE: usize = SZ_1M;
+
+/// Generate a PRAMIN read accessor.
+macro_rules! define_pramin_read {
+ ($name:ident, $ty:ty) => {
+ #[doc = concat!("Read a `", stringify!($ty), "` from VRAM at the given offset.")]
+ pub(crate) fn $name(&mut self, vram_offset: usize) -> Result<$ty> {
+ let (bar_offset, new_base) =
+ self.compute_window(vram_offset, ::core::mem::size_of::<$ty>())?;
+
+ if let Some(base) = new_base {
+ Self::write_window_base(&self.bar, base);
+ *self.state = base;
+ }
+ self.bar.$name(bar_offset)
+ }
+ };
+}
+
+/// Generate a PRAMIN write accessor.
+macro_rules! define_pramin_write {
+ ($name:ident, $ty:ty) => {
+ #[doc = concat!("Write a `", stringify!($ty), "` to VRAM at the given offset.")]
+ pub(crate) fn $name(&mut self, vram_offset: usize, value: $ty) -> Result {
+ let (bar_offset, new_base) =
+ self.compute_window(vram_offset, ::core::mem::size_of::<$ty>())?;
+
+ if let Some(base) = new_base {
+ Self::write_window_base(&self.bar, base);
+ *self.state = base;
+ }
+ self.bar.$name(value, bar_offset)
+ }
+ };
+}
+
+/// PRAMIN aperture manager.
+///
+/// Call [`Pramin::window()`] to acquire exclusive PRAMIN access.
+#[pin_data]
+pub(crate) struct Pramin {
+ bar: Arc<Devres<Bar0>>,
+ /// Valid VRAM region. Accesses outside this range are rejected.
+ vram_region: Range<u64>,
+ /// PRAMIN aperture state, protected by a mutex.
+ ///
+ /// # Safety
+ ///
+ /// This lock is acquired during the DMA fence signaling critical path.
+ /// It must NEVER be held across any reclaimable CPU memory / allocations
+ /// (`GFP_KERNEL`), because the memory reclaim path can call
+ /// `dma_fence_wait()`, which would deadlock with this lock held.
+ #[pin]
+ state: Mutex<u64>,
+}
+
+impl Pramin {
+ /// Create a pin-initializer for PRAMIN.
+ ///
+ /// `vram_region` specifies the valid VRAM address range.
+ pub(crate) fn new(
+ bar: Arc<Devres<Bar0>>,
+ vram_region: Range<u64>,
+ ) -> Result<impl PinInit<Self>> {
+ let bar_access = bar.try_access().ok_or(ENODEV)?;
+ let current_base = Self::read_window_base(&bar_access);
+
+ Ok(pin_init!(Self {
+ bar,
+ vram_region,
+ state <- new_mutex!(current_base, "pramin_state"),
+ }))
+ }
+
+ /// Acquire exclusive PRAMIN access.
+ ///
+ /// Returns a [`PraminWindow`] guard that provides VRAM read/write accessors.
+ /// The [`PraminWindow`] is exclusive and only one can exist at a time.
+ pub(crate) fn window(&self) -> Result<PraminWindow<'_>> {
+ let bar = self.bar.try_access().ok_or(ENODEV)?;
+ let state = self.state.lock();
+ Ok(PraminWindow {
+ bar,
+ vram_region: self.vram_region.clone(),
+ state,
+ })
+ }
+
+ /// Read the current window base from the BAR0_WINDOW register.
+ fn read_window_base(bar: &Bar0) -> u64 {
+ let reg = regs::NV_PBUS_BAR0_WINDOW::read(bar);
+ // TODO: Convert to Bounded<u64, 40> when available.
+ u64::from(reg.window_base()) << 16
+ }
+}
+
+/// PRAMIN window guard for direct VRAM access.
+///
+/// This guard holds exclusive access to the PRAMIN aperture. The window auto-repositions
+/// when accessing VRAM offsets outside the current 1MB range.
+///
+/// Only one [`PraminWindow`] can exist at a time per [`Pramin`] instance (enforced by the
+/// internal `MutexGuard`).
+pub(crate) struct PraminWindow<'a> {
+ bar: RevocableGuard<'a, Bar0>,
+ vram_region: Range<u64>,
+ state: MutexGuard<'a, u64>,
+}
+
+impl PraminWindow<'_> {
+ /// Write a new window base to the BAR0_WINDOW register.
+ fn write_window_base(bar: &Bar0, base: u64) {
+ // CAST: The caller (compute_window) validates that base is within the
+ // VRAM region which is always <= 40 bits. After >> 16, a 40-bit base
+ // becomes 24 bits, which fits in u32.
+ regs::NV_PBUS_BAR0_WINDOW::default()
+ .set_target(Bar0WindowTarget::Vram)
+ .set_window_base((base >> 16) as u32)
+ .write(bar);
+ }
+
+ /// Compute window parameters for a VRAM access.
+ ///
+ /// Returns (`bar_offset`, `new_base`) where:
+ /// - `bar_offset`: The BAR0 offset to use for the access.
+ /// - `new_base`: `Some(base)` if window needs repositioning, `None` otherwise.
+ fn compute_window(
+ &self,
+ vram_offset: usize,
+ access_size: usize,
+ ) -> Result<(usize, Option<u64>)> {
+ // Validate VRAM offset is within the valid VRAM region.
+ let vram_addr = vram_offset as u64;
+ let end_addr = vram_addr.checked_add(access_size as u64).ok_or(EINVAL)?;
+ if vram_addr < self.vram_region.start || end_addr > self.vram_region.end {
+ return Err(EINVAL);
+ }
+
+ // Check if access fits within the current 1MB window.
+ let current_base = *self.state;
+ if vram_addr >= current_base {
+ let offset_in_window: usize = (vram_addr - current_base).into_safe_cast();
+ if offset_in_window + access_size <= PRAMIN_SIZE {
+ return Ok((PRAMIN_BASE + offset_in_window, None));
+ }
+ }
+
+ // Access doesn't fit in current window - reposition.
+ // Hardware requires 64KB alignment for the window base register.
+ let needed_base = vram_addr & !(SZ_64K as u64 - 1);
+ let offset_in_window: usize = (vram_addr - needed_base).into_safe_cast();
+
+ // Verify access fits in the 1MB window from the new base.
+ if offset_in_window + access_size > PRAMIN_SIZE {
+ return Err(EINVAL);
+ }
+
+ Ok((PRAMIN_BASE + offset_in_window, Some(needed_base)))
+ }
+
+ define_pramin_read!(try_read8, u8);
+ define_pramin_read!(try_read16, u16);
+ define_pramin_read!(try_read32, u32);
+ define_pramin_read!(try_read64, u64);
+
+ define_pramin_write!(try_write8, u8);
+ define_pramin_write!(try_write16, u16);
+ define_pramin_write!(try_write32, u32);
+ define_pramin_write!(try_write64, u64);
+}
diff --git a/drivers/gpu/nova-core/nova_core.rs b/drivers/gpu/nova-core/nova_core.rs
index b5caf1044697..c5a78d6388e5 100644
--- a/drivers/gpu/nova-core/nova_core.rs
+++ b/drivers/gpu/nova-core/nova_core.rs
@@ -13,6 +13,7 @@
mod gfw;
mod gpu;
mod gsp;
+mod mm;
mod num;
mod regs;
mod sbuffer;
diff --git a/drivers/gpu/nova-core/regs.rs b/drivers/gpu/nova-core/regs.rs
index ea0d32f5396c..8ec35b8c4b28 100644
--- a/drivers/gpu/nova-core/regs.rs
+++ b/drivers/gpu/nova-core/regs.rs
@@ -32,6 +32,7 @@
Architecture,
Chipset, //
},
+ mm::pramin::Bar0WindowTarget,
num::FromSafeCast,
};
@@ -102,6 +103,11 @@ fn fmt(&self, f: &mut kernel::fmt::Formatter<'_>) -> kernel::fmt::Result {
31:16 frts_err_code as u16;
});
+register!(NV_PBUS_BAR0_WINDOW @ 0x00001700, "BAR0 window control for PRAMIN access" {
+ 25:24 target as u8 ?=> Bar0WindowTarget;
+ 23:0 window_base as u32, "Window base address (bits 39:16 of FB addr)";
+});
+
// PFB
// The following two registers together hold the physical system memory address that is used by the
--
2.34.1
^ permalink raw reply related [flat|nested] 36+ messages in thread
* [PATCH v9 07/23] docs: gpu: nova-core: Document the PRAMIN aperture mechanism
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (5 preceding siblings ...)
2026-03-11 0:39 ` [PATCH v9 06/23] gpu: nova-core: mm: Add support to use PRAMIN windows to write to VRAM Joel Fernandes
@ 2026-03-11 0:39 ` Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 08/23] gpu: nova-core: mm: Add common memory management types Joel Fernandes
` (15 subsequent siblings)
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:39 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add documentation for the PRAMIN aperture mechanism used by nova-core
for direct VRAM access.
Nova only uses TARGET=VID_MEM for VRAM access. The SYS_MEM target values
are documented for completeness but not used by the driver.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
Documentation/gpu/nova/core/pramin.rst | 125 +++++++++++++++++++++++++
Documentation/gpu/nova/index.rst | 1 +
2 files changed, 126 insertions(+)
create mode 100644 Documentation/gpu/nova/core/pramin.rst
diff --git a/Documentation/gpu/nova/core/pramin.rst b/Documentation/gpu/nova/core/pramin.rst
new file mode 100644
index 000000000000..55ec9d920629
--- /dev/null
+++ b/Documentation/gpu/nova/core/pramin.rst
@@ -0,0 +1,125 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================
+PRAMIN aperture mechanism
+=========================
+
+.. note::
+ The following description is approximate and current as of the Ampere family.
+ It may change for future generations and is intended to assist in understanding
+ the driver code.
+
+Introduction
+============
+
+PRAMIN is a hardware aperture mechanism that provides CPU access to GPU Video RAM (VRAM) before
+the GPU's Memory Management Unit (MMU) and page tables are initialized. This 1MB sliding window,
+located at a fixed offset within BAR0, is essential for setting up page tables and other critical
+GPU data structures without relying on the GPU's MMU.
+
+Architecture Overview
+=====================
+
+The PRAMIN aperture mechanism is logically implemented by the GPU's PBUS (PCIe Bus Controller Unit)
+and provides a CPU-accessible window into VRAM through the PCIe interface::
+
+ +-----------------+ PCIe +------------------------------+
+ | CPU |<----------->| GPU |
+ +-----------------+ | |
+ | +----------------------+ |
+ | | PBUS | |
+ | | (Bus Controller) | |
+ | | | |
+ | | +--------------+<------------ (window starts at
+ | | | PRAMIN | | | BAR0 + 0x700000)
+ | | | Window | | |
+ | | | (1MB) | | |
+ | | +--------------+ | |
+ | | | | |
+ | +---------|------------+ |
+ | | |
+ | v |
+ | +----------------------+<------------ (Program PRAMIN to any
+ | | VRAM | | 64KB-aligned VRAM boundary)
+ | | (Several GBs) | |
+ | | | |
+ | | FB[0x000000000000] | |
+ | | ... | |
+ | | FB[0x7FFFFFFFFFF] | |
+ | +----------------------+ |
+ +------------------------------+
+
+PBUS (PCIe Bus Controller) is responsible for, among other things, handling MMIO
+accesses to the BAR registers.
+
+PRAMIN Window Operation
+=======================
+
+The PRAMIN window provides a 1MB sliding aperture that can be repositioned over
+the entire VRAM address space using the ``NV_PBUS_BAR0_WINDOW`` register.
+
+Window Control Mechanism
+-------------------------
+
+The window position is controlled via the PBUS ``BAR0_WINDOW`` register::
+
+ NV_PBUS_BAR0_WINDOW Register (0x1700):
+ +-------+--------+--------------------------------------+
+ | 31:26 | 25:24 | 23:0 |
+ | RSVD | TARGET | BASE_ADDR |
+ | | | (bits 39:16 of VRAM address) |
+ +-------+--------+--------------------------------------+
+
+ BASE_ADDR field (bits 23:0):
+ - Contains bits [39:16] of the target VRAM address
+ - Provides 40-bit (1TB) address space coverage
+ - Must be programmed with 64KB-aligned addresses
+
+ TARGET field (bits 25:24):
+ - 0x0: VRAM (Video Memory)
+ - 0x1: SYS_MEM_COH (Coherent System Memory)
+ - 0x2: SYS_MEM_NONCOH (Non-coherent System Memory)
+ - 0x3: Reserved
+
+ .. note::
+ Nova only uses TARGET=VRAM (0x0) for video memory access. The SYS_MEM
+ target values are documented here for hardware completeness but are
+ not used by the driver.
+
+64KB Alignment Requirement
+---------------------------
+
+The PRAMIN window must be aligned to 64KB boundaries in VRAM. This is enforced
+by the ``BASE_ADDR`` field representing bits [39:16] of the target address::
+
+ VRAM Address Calculation:
+ actual_vram_addr = (BASE_ADDR << 16) + pramin_offset
+ Where:
+ - BASE_ADDR: 24-bit value from NV_PBUS_BAR0_WINDOW[23:0]
+ - pramin_offset: 20-bit offset within the PRAMIN window [0x00000-0xFFFFF]
+
+ Example Window Positioning:
+ +---------------------------------------------------------+
+ | VRAM Space |
+ | |
+ | 0x000000000 +-----------------+ <-- 64KB aligned |
+ | | PRAMIN Window | |
+ | | (1MB) | |
+ | 0x0000FFFFF +-----------------+ |
+ | |
+ | | ^ |
+ | | | Window can slide |
+ | v | to any 64KB-aligned boundary |
+ | |
+ | 0x123400000 +-----------------+ <-- 64KB aligned |
+ | | PRAMIN Window | |
+ | | (1MB) | |
+ | 0x1234FFFFF +-----------------+ |
+ | |
+ | ... |
+ | |
+ | 0x7FFFF0000 +-----------------+ <-- 64KB aligned |
+ | | PRAMIN Window | |
+ | | (1MB) | |
+ | 0x7FFFFFFFF +-----------------+ |
+ +---------------------------------------------------------+
diff --git a/Documentation/gpu/nova/index.rst b/Documentation/gpu/nova/index.rst
index e39cb3163581..b8254b1ffe2a 100644
--- a/Documentation/gpu/nova/index.rst
+++ b/Documentation/gpu/nova/index.rst
@@ -32,3 +32,4 @@ vGPU manager VFIO driver and the nova-drm driver.
core/devinit
core/fwsec
core/falcon
+ core/pramin
--
2.34.1
* [PATCH v9 08/23] gpu: nova-core: mm: Add common memory management types
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (6 preceding siblings ...)
2026-03-11 0:39 ` [PATCH v9 07/23] docs: gpu: nova-core: Document the PRAMIN aperture mechanism Joel Fernandes
@ 2026-03-11 0:39 ` Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 09/23] gpu: nova-core: mm: Add TLB flush support Joel Fernandes
` (14 subsequent siblings)
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:39 UTC (permalink / raw)
To: linux-kernel
Add foundational types for GPU memory management. These types are used
throughout the nova memory management subsystem for page table
operations, address translation, and memory allocation.
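The address split these types encode can be illustrated with a standalone
sketch (plain functions here, not the patch's `bitfield!`-generated API; names
are illustrative):

```rust
// Standalone sketch: a VRAM or virtual address is a frame number
// (bits 63:12) plus an offset within a 4KB page (bits 11:0), matching
// the VramAddress/Pfn and VirtualAddress/Vfn split in this patch.
const PAGE_SHIFT: u32 = 12;

/// Frame number: bits 63:12 of the address.
fn frame_number(addr: u64) -> u64 {
    addr >> PAGE_SHIFT
}

/// Offset within the 4KB page: bits 11:0 of the address.
fn page_offset(addr: u64) -> u64 {
    addr & ((1 << PAGE_SHIFT) - 1)
}

/// Rebuild an address from a frame number (offset zero).
fn from_frame(pfn: u64) -> u64 {
    pfn << PAGE_SHIFT
}
```

This mirrors the `From<Pfn> for VramAddress` and `From<VramAddress> for Pfn`
conversions below: splitting and rejoining round-trips the address.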
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm.rs | 160 ++++++++++++++++++++++++++++++++++++
1 file changed, 160 insertions(+)
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index 7a5dd4220c67..b2cb245b38b7 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -2,4 +2,164 @@
//! Memory management subsystems for nova-core.
+#![expect(dead_code)]
+
pub(crate) mod pramin;
+
+use kernel::sizes::SZ_4K;
+
+use crate::num::u64_as_usize;
+
+/// Page size in bytes (4 KiB).
+pub(crate) const PAGE_SIZE: usize = SZ_4K;
+
+bitfield! {
+ pub(crate) struct VramAddress(u64), "Physical VRAM address in GPU video memory" {
+ 11:0 offset as u64, "Offset within 4KB page";
+ 63:12 frame_number as u64 => Pfn, "Physical frame number";
+ }
+}
+
+impl VramAddress {
+ /// Create a new VRAM address from a raw value.
+ pub(crate) const fn new(addr: u64) -> Self {
+ Self(addr)
+ }
+
+ /// Get the raw address value as `usize` (useful for MMIO offsets).
+ pub(crate) const fn raw(&self) -> usize {
+ u64_as_usize(self.0)
+ }
+
+ /// Get the raw address value as `u64`.
+ pub(crate) const fn raw_u64(&self) -> u64 {
+ self.0
+ }
+}
+
+impl PartialEq for VramAddress {
+ fn eq(&self, other: &Self) -> bool {
+ self.0 == other.0
+ }
+}
+
+impl Eq for VramAddress {}
+
+impl PartialOrd for VramAddress {
+ fn partial_cmp(&self, other: &Self) -> Option<core::cmp::Ordering> {
+ Some(self.cmp(other))
+ }
+}
+
+impl Ord for VramAddress {
+ fn cmp(&self, other: &Self) -> core::cmp::Ordering {
+ self.0.cmp(&other.0)
+ }
+}
+
+impl From<Pfn> for VramAddress {
+ fn from(pfn: Pfn) -> Self {
+ Self::default().set_frame_number(pfn)
+ }
+}
+
+// GPU virtual address.
+bitfield! {
+ pub(crate) struct VirtualAddress(u64), "Virtual address in GPU address space" {
+ 11:0 offset as u64, "Offset within 4KB page";
+ 63:12 frame_number as u64 => Vfn, "Virtual frame number";
+ }
+}
+
+impl VirtualAddress {
+ /// Create a new virtual address from a raw value.
+ #[expect(dead_code)]
+ pub(crate) const fn new(addr: u64) -> Self {
+ Self(addr)
+ }
+
+ /// Get the raw address value as `u64`.
+ pub(crate) const fn raw_u64(&self) -> u64 {
+ self.0
+ }
+}
+
+impl From<Vfn> for VirtualAddress {
+ fn from(vfn: Vfn) -> Self {
+ Self::default().set_frame_number(vfn)
+ }
+}
+
+/// Physical Frame Number.
+///
+/// Represents a physical page in VRAM.
+#[repr(transparent)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
+pub(crate) struct Pfn(u64);
+
+impl Pfn {
+ /// Create a new PFN from a frame number.
+ pub(crate) const fn new(frame_number: u64) -> Self {
+ Self(frame_number)
+ }
+
+ /// Get the raw frame number.
+ pub(crate) const fn raw(self) -> u64 {
+ self.0
+ }
+}
+
+impl From<VramAddress> for Pfn {
+ fn from(addr: VramAddress) -> Self {
+ addr.frame_number()
+ }
+}
+
+impl From<u64> for Pfn {
+ fn from(val: u64) -> Self {
+ Self(val)
+ }
+}
+
+impl From<Pfn> for u64 {
+ fn from(pfn: Pfn) -> Self {
+ pfn.0
+ }
+}
+
+/// Virtual Frame Number.
+///
+/// Represents a virtual page in GPU address space.
+#[repr(transparent)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
+pub(crate) struct Vfn(u64);
+
+impl Vfn {
+ /// Create a new VFN from a frame number.
+ pub(crate) const fn new(frame_number: u64) -> Self {
+ Self(frame_number)
+ }
+
+ /// Get the raw frame number.
+ pub(crate) const fn raw(self) -> u64 {
+ self.0
+ }
+}
+
+impl From<VirtualAddress> for Vfn {
+ fn from(addr: VirtualAddress) -> Self {
+ addr.frame_number()
+ }
+}
+
+impl From<u64> for Vfn {
+ fn from(val: u64) -> Self {
+ Self(val)
+ }
+}
+
+impl From<Vfn> for u64 {
+ fn from(vfn: Vfn) -> Self {
+ vfn.0
+ }
+}
--
2.34.1
* [PATCH v9 09/23] gpu: nova-core: mm: Add TLB flush support
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (7 preceding siblings ...)
2026-03-11 0:39 ` [PATCH v9 08/23] gpu: nova-core: mm: Add common memory management types Joel Fernandes
@ 2026-03-11 0:39 ` Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 10/23] gpu: nova-core: mm: Add GpuMm centralized memory manager Joel Fernandes
` (13 subsequent siblings)
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:39 UTC (permalink / raw)
To: linux-kernel
Add TLB (Translation Lookaside Buffer) flush support for GPU MMU.
After modifying page table entries, the GPU's TLB must be invalidated
to ensure the new mappings take effect. The Tlb struct provides flush
functionality through BAR0 registers.
The flush operation writes the page directory base address and triggers
an invalidation, polling for completion with a 2-second timeout, matching
the Nouveau driver.
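The PDB address split used by the flush registers can be sketched standalone,
mirroring the `from_pdb_addr` helpers added in this patch (these free
functions are only for illustration):

```rust
// Standalone sketch of the PDB address split: PDB_LO carries address
// bits [39:8] and PDB_HI carries bits [47:40], as in the
// NV_TLB_FLUSH_PDB_LO/HI helpers in this patch.
fn pdb_lo(addr: u64) -> u32 {
    ((addr >> 8) & 0xFFFF_FFFF) as u32
}

fn pdb_hi(addr: u64) -> u8 {
    ((addr >> 40) & 0xFF) as u8
}
```

The low 8 bits of the address are dropped, consistent with the page directory
base being at least 256-byte aligned.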
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm.rs | 1 +
drivers/gpu/nova-core/mm/tlb.rs | 90 +++++++++++++++++++++++++++++++++
drivers/gpu/nova-core/regs.rs | 33 ++++++++++++
3 files changed, 124 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/tlb.rs
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index b2cb245b38b7..b02dc265a2c8 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -5,6 +5,7 @@
#![expect(dead_code)]
pub(crate) mod pramin;
+pub(crate) mod tlb;
use kernel::sizes::SZ_4K;
diff --git a/drivers/gpu/nova-core/mm/tlb.rs b/drivers/gpu/nova-core/mm/tlb.rs
new file mode 100644
index 000000000000..23458395511d
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/tlb.rs
@@ -0,0 +1,90 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! TLB (Translation Lookaside Buffer) flush support for GPU MMU.
+//!
+//! After modifying page table entries, the GPU's TLB must be flushed to
+//! ensure the new mappings take effect. This module provides TLB flush
+//! functionality for virtual memory managers.
+//!
+//! # Example
+//!
+//! ```ignore
+//! use crate::mm::tlb::Tlb;
+//!
+//! fn page_table_update(tlb: &Tlb, pdb_addr: VramAddress) -> Result<()> {
+//! // ... modify page tables ...
+//!
+//! // Flush TLB to make changes visible (polls for completion).
+//! tlb.flush(pdb_addr)?;
+//!
+//! Ok(())
+//! }
+//! ```
+
+use kernel::{
+ devres::Devres,
+ io::poll::read_poll_timeout,
+ new_mutex,
+ prelude::*,
+ sync::{Arc, Mutex},
+ time::Delta, //
+};
+
+use crate::{
+ driver::Bar0,
+ mm::VramAddress,
+ regs, //
+};
+
+/// TLB manager for GPU translation buffer operations.
+#[pin_data]
+pub(crate) struct Tlb {
+ bar: Arc<Devres<Bar0>>,
+ /// TLB flush serialization lock: This lock is acquired during the
+ /// DMA fence signalling critical path. It must NEVER be held across any
+ /// reclaimable CPU memory allocations because the memory reclaim path can
+ /// call `dma_fence_wait()`, which would deadlock with this lock held.
+ #[pin]
+ lock: Mutex<()>,
+}
+
+impl Tlb {
+ /// Create a new TLB manager.
+ pub(super) fn new(bar: Arc<Devres<Bar0>>) -> impl PinInit<Self> {
+ pin_init!(Self {
+ bar,
+ lock <- new_mutex!((), "tlb_flush"),
+ })
+ }
+
+ /// Flush the GPU TLB for a specific page directory base.
+ ///
+ /// This invalidates all TLB entries associated with the given PDB address.
+ /// Must be called after modifying page table entries to ensure the GPU sees
+ /// the updated mappings.
+ pub(crate) fn flush(&self, pdb_addr: VramAddress) -> Result {
+ let _guard = self.lock.lock();
+
+ let bar = self.bar.try_access().ok_or(ENODEV)?;
+
+ // Write PDB address.
+ regs::NV_TLB_FLUSH_PDB_LO::from_pdb_addr(pdb_addr.raw_u64()).write(&*bar);
+ regs::NV_TLB_FLUSH_PDB_HI::from_pdb_addr(pdb_addr.raw_u64()).write(&*bar);
+
+ // Trigger flush: invalidate all pages and enable.
+ regs::NV_TLB_FLUSH_CTRL::default()
+ .set_page_all(true)
+ .set_enable(true)
+ .write(&*bar);
+
+ // Poll for completion - enable bit clears when flush is done.
+ read_poll_timeout(
+ || Ok(regs::NV_TLB_FLUSH_CTRL::read(&*bar)),
+ |ctrl| !ctrl.enable(),
+ Delta::ZERO,
+ Delta::from_secs(2),
+ )?;
+
+ Ok(())
+ }
+}
diff --git a/drivers/gpu/nova-core/regs.rs b/drivers/gpu/nova-core/regs.rs
index 8ec35b8c4b28..ff6faa9a7c5c 100644
--- a/drivers/gpu/nova-core/regs.rs
+++ b/drivers/gpu/nova-core/regs.rs
@@ -455,3 +455,36 @@ pub(crate) mod ga100 {
0:0 display_disabled as bool;
});
}
+
+// MMU TLB
+
+register!(NV_TLB_FLUSH_PDB_LO @ 0x00b830a0, "TLB flush register: PDB address bits [39:8]" {
+ 31:0 pdb_lo as u32, "PDB address bits [39:8]";
+});
+
+impl NV_TLB_FLUSH_PDB_LO {
+ /// Create a register value from a PDB address.
+ ///
+ /// Extracts bits [39:8] of the address by shifting it right by 8 bits.
+ pub(crate) fn from_pdb_addr(addr: u64) -> Self {
+ Self::default().set_pdb_lo(((addr >> 8) & 0xFFFF_FFFF) as u32)
+ }
+}
+
+register!(NV_TLB_FLUSH_PDB_HI @ 0x00b830a4, "TLB flush register: PDB address bits [47:40]" {
+ 7:0 pdb_hi as u8, "PDB address bits [47:40]";
+});
+
+impl NV_TLB_FLUSH_PDB_HI {
+ /// Create a register value from a PDB address.
+ ///
+ /// Extracts bits [47:40] of the address by shifting it right by 40 bits.
+ pub(crate) fn from_pdb_addr(addr: u64) -> Self {
+ Self::default().set_pdb_hi(((addr >> 40) & 0xFF) as u8)
+ }
+}
+
+register!(NV_TLB_FLUSH_CTRL @ 0x00b830b0, "TLB flush control register" {
+ 0:0 page_all as bool, "Invalidate all pages";
+ 31:31 enable as bool, "Enable/trigger flush (clears when flush completes)";
+});
--
2.34.1
* [PATCH v9 10/23] gpu: nova-core: mm: Add GpuMm centralized memory manager
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (8 preceding siblings ...)
2026-03-11 0:39 ` [PATCH v9 09/23] gpu: nova-core: mm: Add TLB flush support Joel Fernandes
@ 2026-03-11 0:39 ` Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 11/23] gpu: nova-core: mm: Add common types for all page table formats Joel Fernandes
` (12 subsequent siblings)
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:39 UTC (permalink / raw)
To: linux-kernel
Introduce GpuMm as the centralized GPU memory manager that owns:
- Buddy allocator for VRAM allocation.
- PRAMIN window for direct VRAM access.
- TLB manager for translation buffer operations.
This provides a clean ownership model in which GpuMm exposes accessor
methods for its components that can be used for memory management
operations.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/gpu.rs | 32 +++++++++++-
drivers/gpu/nova-core/gsp/commands.rs | 2 -
drivers/gpu/nova-core/mm.rs | 70 ++++++++++++++++++++++++++-
3 files changed, 99 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index c324d96bd0c6..32266480bb0f 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -4,8 +4,10 @@
device,
devres::Devres,
fmt,
+ gpu::buddy::GpuBuddyParams,
pci,
prelude::*,
+ sizes::SZ_4K,
sync::Arc, //
};
@@ -22,6 +24,7 @@
commands::GetGspStaticInfoReply,
Gsp, //
},
+ mm::GpuMm,
regs,
};
@@ -252,6 +255,9 @@ pub(crate) struct Gpu {
gsp_falcon: Falcon<GspFalcon>,
/// SEC2 falcon instance, used for GSP boot up and cleanup.
sec2_falcon: Falcon<Sec2Falcon>,
+ /// GPU memory manager owning memory management resources.
+ #[pin]
+ mm: GpuMm,
/// GSP runtime data. Temporarily an empty placeholder.
#[pin]
gsp: Gsp,
@@ -288,7 +294,31 @@ pub(crate) fn new<'a>(
gsp <- Gsp::new(pdev),
- gsp_static_info: { gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)? },
+ gsp_static_info: {
+ let info = gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)?;
+
+ dev_info!(
+ pdev.as_ref(),
+ "Using FB region: {:#x}..{:#x}\n",
+ info.usable_fb_region.start,
+ info.usable_fb_region.end
+ );
+
+ info
+ },
+
+ // Create GPU memory manager owning memory management resources.
+ mm <- {
+ let usable_vram = &gsp_static_info.usable_fb_region;
+ // PRAMIN covers all physical VRAM (including GSP-reserved areas
+ // above the usable region, e.g. the BAR1 page directory).
+ let pramin_vram_region = 0..gsp_static_info.total_fb_end;
+ GpuMm::new(devres_bar.clone(), GpuBuddyParams {
+ base_offset: usable_vram.start,
+ physical_memory_size: usable_vram.end - usable_vram.start,
+ chunk_size: SZ_4K,
+ }, pramin_vram_region)?
+ },
bar: devres_bar,
})
diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
index 389d215098c6..18dd86a38d46 100644
--- a/drivers/gpu/nova-core/gsp/commands.rs
+++ b/drivers/gpu/nova-core/gsp/commands.rs
@@ -191,10 +191,8 @@ fn init(&self) -> impl Init<Self::Command, Self::InitError> {
pub(crate) struct GetGspStaticInfoReply {
gpu_name: [u8; 64],
/// Usable FB (VRAM) region for driver memory allocation.
- #[expect(dead_code)]
pub(crate) usable_fb_region: Range<u64>,
/// End of VRAM.
- #[expect(dead_code)]
pub(crate) total_fb_end: u64,
}
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index b02dc265a2c8..dd15175c841d 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -7,9 +7,75 @@
pub(crate) mod pramin;
pub(crate) mod tlb;
-use kernel::sizes::SZ_4K;
+use kernel::{
+ devres::Devres,
+ gpu::buddy::{
+ GpuBuddy,
+ GpuBuddyParams, //
+ },
+ prelude::*,
+ sizes::SZ_4K,
+ sync::Arc, //
+};
-use crate::num::u64_as_usize;
+use crate::{
+ driver::Bar0,
+ num::u64_as_usize, //
+};
+
+pub(crate) use tlb::Tlb;
+
+/// GPU Memory Manager - owns all core MM components.
+///
+/// Provides centralized ownership of memory management resources:
+/// - [`GpuBuddy`] allocator for VRAM page table allocation.
+/// - [`pramin::Pramin`] for direct VRAM access.
+/// - [`Tlb`] manager for translation buffer flush operations.
+#[pin_data]
+pub(crate) struct GpuMm {
+ buddy: GpuBuddy,
+ #[pin]
+ pramin: pramin::Pramin,
+ #[pin]
+ tlb: Tlb,
+}
+
+impl GpuMm {
+ /// Create a pin-initializer for `GpuMm`.
+ ///
+ /// `pramin_vram_region` is the full physical VRAM range (including GSP-reserved
+ /// areas). PRAMIN window accesses are validated against this range.
+ pub(crate) fn new(
+ bar: Arc<Devres<Bar0>>,
+ buddy_params: GpuBuddyParams,
+ pramin_vram_region: core::ops::Range<u64>,
+ ) -> Result<impl PinInit<Self>> {
+ let buddy = GpuBuddy::new(buddy_params)?;
+ let tlb_init = Tlb::new(bar.clone());
+ let pramin_init = pramin::Pramin::new(bar, pramin_vram_region)?;
+
+ Ok(pin_init!(Self {
+ buddy,
+ pramin <- pramin_init,
+ tlb <- tlb_init,
+ }))
+ }
+
+ /// Access the [`GpuBuddy`] allocator.
+ pub(crate) fn buddy(&self) -> &GpuBuddy {
+ &self.buddy
+ }
+
+ /// Access the [`pramin::Pramin`].
+ pub(crate) fn pramin(&self) -> &pramin::Pramin {
+ &self.pramin
+ }
+
+ /// Access the [`Tlb`] manager.
+ pub(crate) fn tlb(&self) -> &Tlb {
+ &self.tlb
+ }
+}
/// Page size in bytes (4 KiB).
pub(crate) const PAGE_SIZE: usize = SZ_4K;
--
2.34.1
* [PATCH v9 11/23] gpu: nova-core: mm: Add common types for all page table formats
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (9 preceding siblings ...)
2026-03-11 0:39 ` [PATCH v9 10/23] gpu: nova-core: mm: Add GpuMm centralized memory manager Joel Fernandes
@ 2026-03-11 0:39 ` Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 12/23] gpu: nova-core: mm: Add MMU v2 page table types Joel Fernandes
` (11 subsequent siblings)
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:39 UTC (permalink / raw)
To: linux-kernel
Add common page table types shared between MMU v2 and v3. These types
are hardware-agnostic and used by both MMU versions.
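The shared level hierarchy can be modeled standalone (this enum mirrors the
`PageTableLevel` type added by the patch; the `depth_below` helper is
illustrative and not part of the patch):

```rust
// Standalone model of the six-level hierarchy: Pdb (root) through L5,
// linked by next(), matching PageTableLevel in this patch.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Level {
    Pdb,
    L1,
    L2,
    L3,
    L4,
    L5,
}

impl Level {
    fn next(self) -> Option<Level> {
        match self {
            Level::Pdb => Some(Level::L1),
            Level::L1 => Some(Level::L2),
            Level::L2 => Some(Level::L3),
            Level::L3 => Some(Level::L4),
            Level::L4 => Some(Level::L5),
            Level::L5 => None,
        }
    }
}

/// Number of next() steps from `level` down to the deepest level.
fn depth_below(mut level: Level) -> usize {
    let mut steps = 0;
    while let Some(n) = level.next() {
        level = n;
        steps += 1;
    }
    steps
}
```

A page table walk iterates exactly this chain, stopping at the PTE level
(L4 for MMU v2, L5 for v3 per the patch's doc comments).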
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm.rs | 1 +
drivers/gpu/nova-core/mm/pagetable.rs | 155 ++++++++++++++++++++++++++
2 files changed, 156 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/pagetable.rs
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index dd15175c841d..b0aad90e94bc 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -4,6 +4,7 @@
#![expect(dead_code)]
+pub(crate) mod pagetable;
pub(crate) mod pramin;
pub(crate) mod tlb;
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
new file mode 100644
index 000000000000..3946ce2f50a5
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -0,0 +1,155 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Common page table types shared between MMU v2 and v3.
+//!
+//! This module provides foundational types used by both MMU versions:
+//! - Page table level hierarchy
+//! - Memory aperture types for PDEs and PTEs
+
+#![expect(dead_code)]
+
+use crate::gpu::Architecture;
+
+/// Extracts the page table index at a given level from a virtual address.
+pub(crate) trait VaLevelIndex {
+ /// Return the page table index at `level` for this virtual address.
+ fn level_index(&self, level: u64) -> u64;
+}
+
+/// MMU version enumeration.
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub(crate) enum MmuVersion {
+ /// MMU v2 for Turing/Ampere/Ada.
+ V2,
+ /// MMU v3 for Hopper and later.
+ V3,
+}
+
+impl From<Architecture> for MmuVersion {
+ fn from(arch: Architecture) -> Self {
+ match arch {
+ Architecture::Turing | Architecture::Ampere | Architecture::Ada => Self::V2,
+ // In the future, uncomment:
+ // _ => Self::V3,
+ }
+ }
+}
+
+/// Page Table Level hierarchy for MMU v2/v3.
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub(crate) enum PageTableLevel {
+ /// Level 0 - Page Directory Base (root).
+ Pdb,
+ /// Level 1 - Intermediate page directory.
+ L1,
+ /// Level 2 - Intermediate page directory.
+ L2,
+ /// Level 3 - Intermediate page directory or dual PDE (version-dependent).
+ L3,
+ /// Level 4 - PTE level for v2, intermediate page directory for v3.
+ L4,
+ /// Level 5 - PTE level used for MMU v3 only.
+ L5,
+}
+
+impl PageTableLevel {
+ /// Number of entries per page table (512 for 4KB pages).
+ pub(crate) const ENTRIES_PER_TABLE: usize = 512;
+
+ /// Get the next level in the hierarchy.
+ pub(crate) const fn next(&self) -> Option<PageTableLevel> {
+ match self {
+ Self::Pdb => Some(Self::L1),
+ Self::L1 => Some(Self::L2),
+ Self::L2 => Some(Self::L3),
+ Self::L3 => Some(Self::L4),
+ Self::L4 => Some(Self::L5),
+ Self::L5 => None,
+ }
+ }
+
+ /// Convert level to index.
+ pub(crate) const fn as_index(&self) -> u64 {
+ match self {
+ Self::Pdb => 0,
+ Self::L1 => 1,
+ Self::L2 => 2,
+ Self::L3 => 3,
+ Self::L4 => 4,
+ Self::L5 => 5,
+ }
+ }
+}
+
+/// Memory aperture for Page Table Entries (`PTE`s).
+///
+/// Determines which memory region the `PTE` points to.
+#[repr(u8)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
+pub(crate) enum AperturePte {
+ /// Local video memory (VRAM).
+ #[default]
+ VideoMemory = 0,
+ /// Peer GPU's video memory.
+ PeerMemory = 1,
+ /// System memory with cache coherence.
+ SystemCoherent = 2,
+ /// System memory without cache coherence.
+ SystemNonCoherent = 3,
+}
+
+// TODO[FPRI]: Replace with `#[derive(FromPrimitive)]` when available.
+impl From<u8> for AperturePte {
+ fn from(val: u8) -> Self {
+ match val {
+ 0 => Self::VideoMemory,
+ 1 => Self::PeerMemory,
+ 2 => Self::SystemCoherent,
+ 3 => Self::SystemNonCoherent,
+ _ => Self::VideoMemory,
+ }
+ }
+}
+
+// TODO[FPRI]: Replace with `#[derive(ToPrimitive)]` when available.
+impl From<AperturePte> for u8 {
+ fn from(val: AperturePte) -> Self {
+ val as u8
+ }
+}
+
+/// Memory aperture for Page Directory Entries (`PDE`s).
+///
+/// Note: For `PDE`s, `Invalid` (0) means the entry is not valid.
+#[repr(u8)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
+pub(crate) enum AperturePde {
+ /// Invalid/unused entry.
+ #[default]
+ Invalid = 0,
+ /// Page table is in video memory.
+ VideoMemory = 1,
+ /// Page table is in system memory with coherence.
+ SystemCoherent = 2,
+ /// Page table is in system memory without coherence.
+ SystemNonCoherent = 3,
+}
+
+// TODO[FPRI]: Replace with `#[derive(FromPrimitive)]` when available.
+impl From<u8> for AperturePde {
+ fn from(val: u8) -> Self {
+ match val {
+ 1 => Self::VideoMemory,
+ 2 => Self::SystemCoherent,
+ 3 => Self::SystemNonCoherent,
+ _ => Self::Invalid,
+ }
+ }
+}
+
+// TODO[FPRI]: Replace with `#[derive(ToPrimitive)]` when available.
+impl From<AperturePde> for u8 {
+ fn from(val: AperturePde) -> Self {
+ val as u8
+ }
+}
--
2.34.1
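To illustrate the common aperture types above, here is a standalone sketch (simplified stand-ins for the `pagetable.rs` enums, compilable outside the kernel; not the driver code itself) of the 2-bit aperture encoding and its `From<u8>` round-trip:

```rust
// Standalone stand-in for the AperturePde type from pagetable.rs.
// Out-of-range raw values decode as Invalid, mirroring the driver's
// catch-all match arm.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum AperturePde {
    Invalid = 0,
    VideoMemory = 1,
    SystemCoherent = 2,
    SystemNonCoherent = 3,
}

impl From<u8> for AperturePde {
    fn from(val: u8) -> Self {
        match val {
            1 => Self::VideoMemory,
            2 => Self::SystemCoherent,
            3 => Self::SystemNonCoherent,
            _ => Self::Invalid, // 0 and anything out of range
        }
    }
}

fn main() {
    // Round-trip: enum -> raw bits -> enum.
    let ap = AperturePde::SystemCoherent;
    assert_eq!(AperturePde::from(ap as u8), ap);
    // A PDE whose aperture bits are 0 is an invalid entry.
    assert_eq!(AperturePde::from(0u8), AperturePde::Invalid);
    println!("aperture round-trip ok");
}
```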
* [PATCH v9 12/23] gpu: nova-core: mm: Add MMU v2 page table types
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (10 preceding siblings ...)
2026-03-11 0:39 ` [PATCH v9 11/23] gpu: nova-core: mm: Add common types for all page table formats Joel Fernandes
@ 2026-03-11 0:39 ` Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 13/23] gpu: nova-core: mm: Add MMU v3 " Joel Fernandes
` (10 subsequent siblings)
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:39 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add page table entry and directory structures for MMU version 2
used by Turing/Ampere/Ada GPUs.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
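As an illustration of the `VirtualAddressV2` bitfield layout added below, this standalone sketch decomposes an MMU v2 49-bit virtual address into per-level indices using plain shifts and masks (the `va_field` helper is hypothetical; the driver uses the `bitfield!` macro instead):

```rust
// Illustrative decomposition of an MMU v2 49-bit virtual address,
// mirroring the VirtualAddressV2 field ranges in this patch.

/// Extract bits [hi:lo] of `va` (inclusive range, hi < 63).
fn va_field(va: u64, hi: u32, lo: u32) -> u64 {
    (va >> lo) & ((1u64 << (hi - lo + 1)) - 1)
}

fn main() {
    // Compose a VA with known per-level indices.
    let va: u64 = (3 << 47) | (5 << 38) | (7 << 29) | (9 << 21) | (11 << 12) | 0xABC;

    assert_eq!(va_field(va, 11, 0), 0xABC); // page offset [11:0]
    assert_eq!(va_field(va, 20, 12), 11);   // PT index    [20:12]
    assert_eq!(va_field(va, 28, 21), 9);    // PDE0 index  [28:21]
    assert_eq!(va_field(va, 37, 29), 7);    // PDE1 index  [37:29]
    assert_eq!(va_field(va, 46, 38), 5);    // PDE2 index  [46:38]
    assert_eq!(va_field(va, 48, 47), 3);    // PDE3 index  [48:47] (root, 2 bits)
    println!("v2 VA decomposition ok");
}
```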
---
drivers/gpu/nova-core/mm/pagetable.rs | 2 +
drivers/gpu/nova-core/mm/pagetable/ver2.rs | 232 +++++++++++++++++++++
2 files changed, 234 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/pagetable/ver2.rs
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
index 3946ce2f50a5..2dcd559cc692 100644
--- a/drivers/gpu/nova-core/mm/pagetable.rs
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -8,6 +8,8 @@
#![expect(dead_code)]
+pub(crate) mod ver2;
+
use crate::gpu::Architecture;
/// Extracts the page table index at a given level from a virtual address.
diff --git a/drivers/gpu/nova-core/mm/pagetable/ver2.rs b/drivers/gpu/nova-core/mm/pagetable/ver2.rs
new file mode 100644
index 000000000000..6e617846c57b
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pagetable/ver2.rs
@@ -0,0 +1,232 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! MMU v2 page table types for Turing, Ampere, and Ada GPUs.
+//!
+//! This module defines MMU version 2 specific types (Turing, Ampere and Ada GPUs).
+//!
+//! Bit field layouts derived from the NVIDIA OpenRM published headers:
+//! `open-gpu-kernel-modules/src/common/inc/swref/published/turing/tu102/dev_mmu.h`
+
+#![expect(dead_code)]
+
+use super::{
+ AperturePde,
+ AperturePte,
+ PageTableLevel,
+ VaLevelIndex, //
+};
+use crate::mm::{
+ Pfn,
+ VirtualAddress,
+ VramAddress, //
+};
+
+bitfield! {
+ pub(crate) struct VirtualAddressV2(u64), "MMU v2 49-bit virtual address layout" {
+ 11:0 offset as u64, "Page offset [11:0]";
+ 20:12 pt_idx as u64, "PT index [20:12]";
+ 28:21 pde0_idx as u64, "PDE0 index [28:21]";
+ 37:29 pde1_idx as u64, "PDE1 index [37:29]";
+ 46:38 pde2_idx as u64, "PDE2 index [46:38]";
+ 48:47 pde3_idx as u64, "PDE3 index [48:47]";
+ }
+}
+
+impl VirtualAddressV2 {
+ /// Create a [`VirtualAddressV2`] from a [`VirtualAddress`].
+ pub(crate) fn new(va: VirtualAddress) -> Self {
+ Self(va.raw_u64())
+ }
+}
+
+impl VaLevelIndex for VirtualAddressV2 {
+ fn level_index(&self, level: u64) -> u64 {
+ match level {
+ 0 => self.pde3_idx(),
+ 1 => self.pde2_idx(),
+ 2 => self.pde1_idx(),
+ 3 => self.pde0_idx(),
+ 4 => self.pt_idx(),
+ _ => 0,
+ }
+ }
+}
+
+/// PDE levels for MMU v2 (5-level hierarchy: PDB -> L1 -> L2 -> L3 -> L4).
+pub(crate) const PDE_LEVELS: &[PageTableLevel] = &[
+ PageTableLevel::Pdb,
+ PageTableLevel::L1,
+ PageTableLevel::L2,
+ PageTableLevel::L3,
+];
+
+/// PTE level for MMU v2.
+pub(crate) const PTE_LEVEL: PageTableLevel = PageTableLevel::L4;
+
+/// Dual PDE level for MMU v2 (128-bit entries).
+pub(crate) const DUAL_PDE_LEVEL: PageTableLevel = PageTableLevel::L3;
+
+// Page Table Entry (PTE) for MMU v2 - 64-bit entry at level 4.
+bitfield! {
+ pub(crate) struct Pte(u64), "Page Table Entry for MMU v2" {
+ 0:0 valid as bool, "Entry is valid";
+ 2:1 aperture as u8 => AperturePte, "Memory aperture type";
+ 3:3 volatile as bool, "Volatile (bypass L2 cache)";
+ 4:4 encrypted as bool, "Encryption enabled (Confidential Computing)";
+ 5:5 privilege as bool, "Privileged access only";
+ 6:6 read_only as bool, "Write protection";
+ 7:7 atomic_disable as bool, "Atomic operations disabled";
+ 53:8 frame_number_sys as u64 => Pfn, "Frame number for system memory";
+ 32:8 frame_number_vid as u64 => Pfn, "Frame number for video memory";
+ 35:33 peer_id as u8, "Peer GPU ID for peer memory (0-7)";
+ 53:36 comptagline as u32, "Compression tag line bits";
+ 63:56 kind as u8, "Surface kind/format";
+ }
+}
+
+impl Pte {
+ /// Create a PTE from a `u64` value.
+ pub(crate) fn new(val: u64) -> Self {
+ Self(val)
+ }
+
+ /// Create a valid PTE for video memory.
+ pub(crate) fn new_vram(pfn: Pfn, writable: bool) -> Self {
+ Self::default()
+ .set_valid(true)
+ .set_aperture(AperturePte::VideoMemory)
+ .set_frame_number_vid(pfn)
+ .set_read_only(!writable)
+ }
+
+ /// Create an invalid PTE.
+ pub(crate) fn invalid() -> Self {
+ Self::default()
+ }
+
+ /// Get the frame number based on aperture type.
+ pub(crate) fn frame_number(&self) -> Pfn {
+ match self.aperture() {
+ AperturePte::VideoMemory => self.frame_number_vid(),
+ _ => self.frame_number_sys(),
+ }
+ }
+
+ /// Get the raw `u64` value.
+ pub(crate) fn raw_u64(&self) -> u64 {
+ self.0
+ }
+}
+
+// Page Directory Entry (PDE) for MMU v2 - 64-bit entry at levels 0-2.
+bitfield! {
+ pub(crate) struct Pde(u64), "Page Directory Entry for MMU v2" {
+ 0:0 valid_inverted as bool, "Valid bit (inverted logic)";
+ 2:1 aperture as u8 => AperturePde, "Memory aperture type";
+ 3:3 volatile as bool, "Volatile (bypass L2 cache)";
+ 5:5 no_ats as bool, "Disable Address Translation Services";
+ 53:8 table_frame_sys as u64 => Pfn, "Table frame number for system memory";
+ 32:8 table_frame_vid as u64 => Pfn, "Table frame number for video memory";
+ 35:33 peer_id as u8, "Peer GPU ID (0-7)";
+ }
+}
+
+impl Pde {
+ /// Create a PDE from a `u64` value.
+ pub(crate) fn new(val: u64) -> Self {
+ Self(val)
+ }
+
+ /// Create a valid PDE pointing to a page table in video memory.
+ pub(crate) fn new_vram(table_pfn: Pfn) -> Self {
+ Self::default()
+ .set_valid_inverted(false) // 0 = valid
+ .set_aperture(AperturePde::VideoMemory)
+ .set_table_frame_vid(table_pfn)
+ }
+
+ /// Create an invalid PDE.
+ pub(crate) fn invalid() -> Self {
+ Self::default()
+ .set_valid_inverted(true)
+ .set_aperture(AperturePde::Invalid)
+ }
+
+ /// Check if this PDE is valid.
+ pub(crate) fn is_valid(&self) -> bool {
+ !self.valid_inverted() && self.aperture() != AperturePde::Invalid
+ }
+
+ /// Get the table frame number based on aperture type.
+ pub(crate) fn table_frame(&self) -> Pfn {
+ match self.aperture() {
+ AperturePde::VideoMemory => self.table_frame_vid(),
+ _ => self.table_frame_sys(),
+ }
+ }
+
+ /// Get the VRAM address of the page table.
+ pub(crate) fn table_vram_address(&self) -> VramAddress {
+ debug_assert!(
+ self.aperture() == AperturePde::VideoMemory,
+ "table_vram_address called on non-VRAM PDE (aperture: {:?})",
+ self.aperture()
+ );
+ VramAddress::from(self.table_frame_vid())
+ }
+
+ /// Get the raw `u64` value of the PDE.
+ pub(crate) fn raw_u64(&self) -> u64 {
+ self.0
+ }
+}
+
+/// Dual PDE at Level 3 - 128-bit entry holding Large and Small Page Table pointers.
+///
+/// The dual PDE supports both large (64KB) and small (4KB) page tables.
+#[repr(C)]
+#[derive(Debug, Clone, Copy, Default)]
+pub(crate) struct DualPde {
+ /// Large/Big Page Table pointer (lower 64 bits).
+ pub(crate) big: Pde,
+ /// Small Page Table pointer (upper 64 bits).
+ pub(crate) small: Pde,
+}
+
+impl DualPde {
+ /// Create a dual PDE from raw 128-bit value (two `u64`s).
+ pub(crate) fn new(big: u64, small: u64) -> Self {
+ Self {
+ big: Pde::new(big),
+ small: Pde::new(small),
+ }
+ }
+
+ /// Create a dual PDE with only the small page table pointer set.
+ ///
+ /// Note: The big (LPT) portion is set to 0, not `Pde::invalid()`.
+ /// According to hardware documentation, clearing bit 0 of the 128-bit
+ /// entry makes the PDE behave as a "normal" PDE. Using `Pde::invalid()`
+ /// would set bit 0 (valid_inverted), which breaks page table walking.
+ pub(crate) fn new_small(table_pfn: Pfn) -> Self {
+ Self {
+ big: Pde::new(0),
+ small: Pde::new_vram(table_pfn),
+ }
+ }
+
+ /// Check if the small page table pointer is valid.
+ pub(crate) fn has_small(&self) -> bool {
+ self.small.is_valid()
+ }
+
+ /// Check if the big page table pointer is valid.
+ pub(crate) fn has_big(&self) -> bool {
+ self.big.is_valid()
+ }
+
+ /// Get the small page table PFN.
+ pub(crate) fn small_pfn(&self) -> Pfn {
+ self.small.table_frame()
+ }
+}
--
2.34.1
* [PATCH v9 13/23] gpu: nova-core: mm: Add MMU v3 page table types
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (11 preceding siblings ...)
2026-03-11 0:39 ` [PATCH v9 12/23] gpu: nova-core: mm: Add MMU v2 page table types Joel Fernandes
@ 2026-03-11 0:39 ` Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 14/23] gpu: nova-core: mm: Add unified page table entry wrapper enums Joel Fernandes
` (9 subsequent siblings)
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:39 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add page table entry and directory structures for MMU version 3
used by Hopper and later GPUs.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
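The v3 Page Classification Field (PCF) added below packs the access attributes that v2 kept as individual PTE bits. A standalone sketch of the 5-bit PTE PCF encoding (the `pcf_rw`/`pcf_ro` helpers are simplified stand-ins for the `PtePcf` bitfield type, not the driver code):

```rust
// Sketch of the MMU v3 PTE Page Classification Field encoding from this
// patch: bit 0 uncached, bit 1 acd, bit 2 read_only, bit 3 no_atomic,
// bit 4 privileged.

const PCF_READ_ONLY: u8 = 1 << 2;
const PCF_NO_ATOMIC: u8 = 1 << 3;

/// PCF for a read-write mapping: cached, atomics disabled, regular mode.
fn pcf_rw() -> u8 {
    PCF_NO_ATOMIC
}

/// PCF for a read-only mapping: additionally sets the read-only bit.
fn pcf_ro() -> u8 {
    PCF_READ_ONLY | PCF_NO_ATOMIC
}

fn main() {
    assert_eq!(pcf_rw(), 0b0_1000);
    assert_eq!(pcf_ro(), 0b0_1100);
    // The 5-bit PCF occupies PTE bits [7:3]; shifted into place:
    let pte_bits = (pcf_ro() as u64) << 3;
    assert_eq!(pte_bits, 0b110_0000);
    println!("v3 PCF encoding ok");
}
```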
---
drivers/gpu/nova-core/mm/pagetable.rs | 1 +
drivers/gpu/nova-core/mm/pagetable/ver3.rs | 337 +++++++++++++++++++++
2 files changed, 338 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/pagetable/ver3.rs
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
index 2dcd559cc692..5c6ae78506af 100644
--- a/drivers/gpu/nova-core/mm/pagetable.rs
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -9,6 +9,7 @@
#![expect(dead_code)]
pub(crate) mod ver2;
+pub(crate) mod ver3;
use crate::gpu::Architecture;
diff --git a/drivers/gpu/nova-core/mm/pagetable/ver3.rs b/drivers/gpu/nova-core/mm/pagetable/ver3.rs
new file mode 100644
index 000000000000..64b47561ebf9
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pagetable/ver3.rs
@@ -0,0 +1,337 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! MMU v3 page table types for Hopper and later GPUs.
+//!
+//! This module defines MMU version 3 specific types (Hopper and later GPUs).
+//!
+//! Key differences from MMU v2:
+//! - Unified 40-bit address field for all apertures (v2 had separate sys/vid fields).
+//! - PCF (Page Classification Field) replaces separate privilege/RO/atomic/cache bits.
+//! - KIND field is 4 bits (not 8).
+//! - IS_PTE bit in PDE to support large pages directly.
+//! - No COMPTAGLINE field (compression handled differently in v3).
+//! - No separate ENCRYPTED bit.
+//!
+//! Bit field layouts derived from the NVIDIA OpenRM published headers:
+//! `open-gpu-kernel-modules/src/common/inc/swref/published/hopper/gh100/dev_mmu.h`
+
+#![expect(dead_code)]
+
+use super::{
+ AperturePde,
+ AperturePte,
+ PageTableLevel,
+ VaLevelIndex, //
+};
+use crate::mm::{
+ Pfn,
+ VirtualAddress,
+ VramAddress, //
+};
+use kernel::prelude::*;
+
+bitfield! {
+ pub(crate) struct VirtualAddressV3(u64), "MMU v3 57-bit virtual address layout" {
+ 11:0 offset as u64, "Page offset [11:0]";
+ 20:12 pt_idx as u64, "PT index [20:12]";
+ 28:21 pde0_idx as u64, "PDE0 index [28:21]";
+ 37:29 pde1_idx as u64, "PDE1 index [37:29]";
+ 46:38 pde2_idx as u64, "PDE2 index [46:38]";
+ 55:47 pde3_idx as u64, "PDE3 index [55:47]";
+ 56:56 pde4_idx as u64, "PDE4 index [56]";
+ }
+}
+
+impl VirtualAddressV3 {
+ /// Create a [`VirtualAddressV3`] from a [`VirtualAddress`].
+ pub(crate) fn new(va: VirtualAddress) -> Self {
+ Self(va.raw_u64())
+ }
+}
+
+impl VaLevelIndex for VirtualAddressV3 {
+ fn level_index(&self, level: u64) -> u64 {
+ match level {
+ 0 => self.pde4_idx(),
+ 1 => self.pde3_idx(),
+ 2 => self.pde2_idx(),
+ 3 => self.pde1_idx(),
+ 4 => self.pde0_idx(),
+ 5 => self.pt_idx(),
+ _ => 0,
+ }
+ }
+}
+
+/// PDE levels for MMU v3 (6-level hierarchy).
+pub(crate) const PDE_LEVELS: &[PageTableLevel] = &[
+ PageTableLevel::Pdb,
+ PageTableLevel::L1,
+ PageTableLevel::L2,
+ PageTableLevel::L3,
+ PageTableLevel::L4,
+];
+
+/// PTE level for MMU v3.
+pub(crate) const PTE_LEVEL: PageTableLevel = PageTableLevel::L5;
+
+/// Dual PDE level for MMU v3 (128-bit entries).
+pub(crate) const DUAL_PDE_LEVEL: PageTableLevel = PageTableLevel::L4;
+
+// Page Classification Field (PCF) - 5 bits for PTEs in MMU v3.
+bitfield! {
+ pub(crate) struct PtePcf(u8), "Page Classification Field for PTEs" {
+ 0:0 uncached as bool, "Bypass L2 cache (0=cached, 1=bypass)";
+ 1:1 acd as bool, "Access counting disabled (0=enabled, 1=disabled)";
+ 2:2 read_only as bool, "Read-only access (0=read-write, 1=read-only)";
+ 3:3 no_atomic as bool, "Atomics disabled (0=enabled, 1=disabled)";
+ 4:4 privileged as bool, "Privileged access only (0=regular, 1=privileged)";
+ }
+}
+
+impl PtePcf {
+ /// Create PCF for read-write mapping (cached, no atomics, regular mode).
+ pub(crate) fn rw() -> Self {
+ Self::default().set_no_atomic(true)
+ }
+
+ /// Create PCF for read-only mapping (cached, no atomics, regular mode).
+ pub(crate) fn ro() -> Self {
+ Self::default().set_read_only(true).set_no_atomic(true)
+ }
+
+ /// Get the raw `u8` value.
+ pub(crate) fn raw_u8(&self) -> u8 {
+ self.0
+ }
+}
+
+impl From<u8> for PtePcf {
+ fn from(val: u8) -> Self {
+ Self(val)
+ }
+}
+
+// Page Classification Field (PCF) - 3 bits for PDEs in MMU v3.
+// Controls Address Translation Services (ATS) and caching.
+bitfield! {
+ pub(crate) struct PdePcf(u8), "Page Classification Field for PDEs" {
+ 0:0 uncached as bool, "Bypass L2 cache (0=cached, 1=bypass)";
+ 1:1 no_ats as bool, "Address Translation Services disabled (0=enabled, 1=disabled)";
+ }
+}
+
+impl PdePcf {
+ /// Create PCF for cached mapping with ATS enabled (default).
+ pub(crate) fn cached() -> Self {
+ Self::default()
+ }
+
+ /// Get the raw `u8` value.
+ pub(crate) fn raw_u8(&self) -> u8 {
+ self.0
+ }
+}
+
+impl From<u8> for PdePcf {
+ fn from(val: u8) -> Self {
+ Self(val)
+ }
+}
+
+// Page Table Entry (PTE) for MMU v3.
+bitfield! {
+ pub(crate) struct Pte(u64), "Page Table Entry for MMU v3" {
+ 0:0 valid as bool, "Entry is valid";
+ 2:1 aperture as u8 => AperturePte, "Memory aperture type";
+ 7:3 pcf as u8 => PtePcf, "Page Classification Field";
+ 11:8 kind as u8, "Surface kind (4 bits, 0x0=pitch, 0xF=invalid)";
+ 51:12 frame_number as u64 => Pfn, "Physical frame number (for all apertures)";
+ 63:61 peer_id as u8, "Peer GPU ID for peer memory (0-7)";
+ }
+}
+
+impl Pte {
+ /// Create a PTE from a `u64` value.
+ pub(crate) fn new(val: u64) -> Self {
+ Self(val)
+ }
+
+ /// Create a valid PTE for video memory.
+ pub(crate) fn new_vram(frame: Pfn, writable: bool) -> Self {
+ let pcf = if writable { PtePcf::rw() } else { PtePcf::ro() };
+ Self::default()
+ .set_valid(true)
+ .set_aperture(AperturePte::VideoMemory)
+ .set_pcf(pcf)
+ .set_frame_number(frame)
+ }
+
+ /// Create an invalid PTE.
+ pub(crate) fn invalid() -> Self {
+ Self::default()
+ }
+
+ /// Get the raw `u64` value.
+ pub(crate) fn raw_u64(&self) -> u64 {
+ self.0
+ }
+}
+
+// Page Directory Entry (PDE) for MMU v3.
+//
+// Note: v3 uses a unified 40-bit address field (v2 had separate sys/vid address fields).
+bitfield! {
+ pub(crate) struct Pde(u64), "Page Directory Entry for MMU v3 (Hopper+)" {
+ 0:0 is_pte as bool, "Entry is a PTE (0=PDE, 1=large page PTE)";
+ 2:1 aperture as u8 => AperturePde, "Memory aperture (0=invalid, 1=vidmem, 2=coherent, 3=non-coherent)";
+ 5:3 pcf as u8 => PdePcf, "Page Classification Field (3 bits for PDE)";
+ 51:12 table_frame as u64 => Pfn, "Table frame number (40-bit unified address)";
+ }
+}
+
+impl Pde {
+ /// Create a PDE from a `u64` value.
+ pub(crate) fn new(val: u64) -> Self {
+ Self(val)
+ }
+
+ /// Create a valid PDE pointing to a page table in video memory.
+ pub(crate) fn new_vram(table_pfn: Pfn) -> Self {
+ Self::default()
+ .set_is_pte(false)
+ .set_aperture(AperturePde::VideoMemory)
+ .set_table_frame(table_pfn)
+ }
+
+ /// Create an invalid PDE.
+ pub(crate) fn invalid() -> Self {
+ Self::default().set_aperture(AperturePde::Invalid)
+ }
+
+ /// Check if this PDE is valid.
+ pub(crate) fn is_valid(&self) -> bool {
+ self.aperture() != AperturePde::Invalid
+ }
+
+ /// Get the VRAM address of the page table.
+ pub(crate) fn table_vram_address(&self) -> VramAddress {
+ debug_assert!(
+ self.aperture() == AperturePde::VideoMemory,
+ "table_vram_address called on non-VRAM PDE (aperture: {:?})",
+ self.aperture()
+ );
+ VramAddress::from(self.table_frame())
+ }
+
+ /// Get the raw `u64` value.
+ pub(crate) fn raw_u64(&self) -> u64 {
+ self.0
+ }
+}
+
+// Big Page Table pointer for Dual PDE - 64-bit lower word of the 128-bit Dual PDE.
+bitfield! {
+ pub(crate) struct DualPdeBig(u64), "Big Page Table pointer in Dual PDE (MMU v3)" {
+ 0:0 is_pte as bool, "Entry is a PTE (for large pages)";
+ 2:1 aperture as u8 => AperturePde, "Memory aperture type";
+ 5:3 pcf as u8 => PdePcf, "Page Classification Field";
+ 51:8 table_frame as u64, "Table frame (table address 256-byte aligned)";
+ }
+}
+
+impl DualPdeBig {
+ /// Create a big page table pointer from a `u64` value.
+ pub(crate) fn new(val: u64) -> Self {
+ Self(val)
+ }
+
+ /// Create an invalid big page table pointer.
+ pub(crate) fn invalid() -> Self {
+ Self::default().set_aperture(AperturePde::Invalid)
+ }
+
+ /// Create a valid big PDE pointing to a page table in video memory.
+ pub(crate) fn new_vram(table_addr: VramAddress) -> Result<Self> {
+ // Big page table addresses must be 256-byte aligned (shift 8).
+ if table_addr.raw_u64() & 0xFF != 0 {
+ return Err(EINVAL);
+ }
+
+ let table_frame = table_addr.raw_u64() >> 8;
+ Ok(Self::default()
+ .set_is_pte(false)
+ .set_aperture(AperturePde::VideoMemory)
+ .set_table_frame(table_frame))
+ }
+
+ /// Check if this big PDE is valid.
+ pub(crate) fn is_valid(&self) -> bool {
+ self.aperture() != AperturePde::Invalid
+ }
+
+ /// Get the VRAM address of the big page table.
+ pub(crate) fn table_vram_address(&self) -> VramAddress {
+ debug_assert!(
+ self.aperture() == AperturePde::VideoMemory,
+ "table_vram_address called on non-VRAM DualPdeBig (aperture: {:?})",
+ self.aperture()
+ );
+ VramAddress::new(self.table_frame() << 8)
+ }
+
+ /// Get the raw `u64` value.
+ pub(crate) fn raw_u64(&self) -> u64 {
+ self.0
+ }
+}
+
+/// Dual PDE at Level 4 for MMU v3 - 128-bit entry.
+///
+/// Contains both big (64KB) and small (4KB) page table pointers:
+/// - Lower 64 bits: Big Page Table pointer.
+/// - Upper 64 bits: Small Page Table pointer.
+///
+/// ## Note
+///
+/// The big and small page table pointers have different address layouts:
+/// - Big address = field value << 8 (256-byte alignment).
+/// - Small address = field value << 12 (4KB alignment).
+///
+/// This is why `DualPdeBig` is a separate type from `Pde`.
+#[repr(C)]
+#[derive(Debug, Clone, Copy, Default)]
+pub(crate) struct DualPde {
+ /// Big Page Table pointer.
+ pub(crate) big: DualPdeBig,
+ /// Small Page Table pointer.
+ pub(crate) small: Pde,
+}
+
+impl DualPde {
+ /// Create a dual PDE from raw 128-bit value (two `u64`s).
+ pub(crate) fn new(big: u64, small: u64) -> Self {
+ Self {
+ big: DualPdeBig::new(big),
+ small: Pde::new(small),
+ }
+ }
+
+ /// Create a dual PDE with only the small page table pointer set.
+ pub(crate) fn new_small(table_pfn: Pfn) -> Self {
+ Self {
+ big: DualPdeBig::invalid(),
+ small: Pde::new_vram(table_pfn),
+ }
+ }
+
+ /// Check if the small page table pointer is valid.
+ pub(crate) fn has_small(&self) -> bool {
+ self.small.is_valid()
+ }
+
+ /// Check if the big page table pointer is valid.
+ pub(crate) fn has_big(&self) -> bool {
+ self.big.is_valid()
+ }
+}
--
2.34.1
* [PATCH v9 14/23] gpu: nova-core: mm: Add unified page table entry wrapper enums
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (12 preceding siblings ...)
2026-03-11 0:39 ` [PATCH v9 13/23] gpu: nova-core: mm: Add MMU v3 " Joel Fernandes
@ 2026-03-11 0:39 ` Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 15/23] gpu: nova-core: mm: Add page table walker for MMU v2/v3 Joel Fernandes
` (8 subsequent siblings)
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:39 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add unified Pte, Pde, and DualPde wrapper enums that abstract over
MMU v2 and v3 page table entry formats. These enums allow the page
table walker and VMM to work with both MMU versions.
Each unified type:
- Takes MmuVersion parameter in constructors
- Wraps both ver2 and ver3 variants
- Delegates method calls to the appropriate variant
This enables version-agnostic page table operations while keeping
version-specific implementation details encapsulated in the ver2
and ver3 modules.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
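The delegation pattern described above can be sketched standalone as follows (hypothetical simplified types compilable outside the kernel, not the driver's actual `Pte`/`Pde` definitions): a wrapper enum selects the version-specific format at construction time and dispatches each method to the wrapped variant.

```rust
// Minimal illustration of the unified wrapper enum pattern: version
// chosen once in the constructor, methods delegated per variant.

mod ver2 {
    #[derive(Debug, Clone, Copy)]
    pub struct Pte(pub u64);
    impl Pte {
        pub fn valid(&self) -> bool { self.0 & 1 != 0 }
    }
}

mod ver3 {
    #[derive(Debug, Clone, Copy)]
    pub struct Pte(pub u64);
    impl Pte {
        pub fn valid(&self) -> bool { self.0 & 1 != 0 }
    }
}

#[derive(Debug, Clone, Copy)]
enum MmuVersion { V2, V3 }

#[derive(Debug, Clone, Copy)]
enum Pte {
    V2(ver2::Pte),
    V3(ver3::Pte),
}

impl Pte {
    /// Constructor takes the MMU version and wraps the matching variant.
    fn new(version: MmuVersion, val: u64) -> Self {
        match version {
            MmuVersion::V2 => Self::V2(ver2::Pte(val)),
            MmuVersion::V3 => Self::V3(ver3::Pte(val)),
        }
    }

    /// Methods delegate to whichever variant is wrapped.
    fn is_valid(&self) -> bool {
        match self {
            Self::V2(p) => p.valid(),
            Self::V3(p) => p.valid(),
        }
    }
}

fn main() {
    assert!(Pte::new(MmuVersion::V2, 0x1).is_valid());
    assert!(!Pte::new(MmuVersion::V3, 0x0).is_valid());
    println!("delegation ok");
}
```

Callers only pass `MmuVersion` at construction; every later operation is version-agnostic, which is what lets the walker and VMM share one code path.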
---
drivers/gpu/nova-core/mm/pagetable.rs | 322 ++++++++++++++++++++++++++
1 file changed, 322 insertions(+)
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
index 5c6ae78506af..8cc5f72ead11 100644
--- a/drivers/gpu/nova-core/mm/pagetable.rs
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -12,6 +12,13 @@
pub(crate) mod ver3;
use crate::gpu::Architecture;
+use crate::mm::{
+ pramin,
+ Pfn,
+ VirtualAddress,
+ VramAddress, //
+};
+use kernel::prelude::*;
/// Extracts the page table index at a given level from a virtual address.
pub(crate) trait VaLevelIndex {
@@ -84,6 +91,96 @@ pub(crate) const fn as_index(&self) -> u64 {
}
}
+impl MmuVersion {
+ /// Get the `PDE` levels (excluding PTE level) for page table walking.
+ pub(crate) fn pde_levels(&self) -> &'static [PageTableLevel] {
+ match self {
+ Self::V2 => ver2::PDE_LEVELS,
+ Self::V3 => ver3::PDE_LEVELS,
+ }
+ }
+
+ /// Get the PTE level for this MMU version.
+ pub(crate) fn pte_level(&self) -> PageTableLevel {
+ match self {
+ Self::V2 => ver2::PTE_LEVEL,
+ Self::V3 => ver3::PTE_LEVEL,
+ }
+ }
+
+ /// Get the dual PDE level (128-bit entries) for this MMU version.
+ pub(crate) fn dual_pde_level(&self) -> PageTableLevel {
+ match self {
+ Self::V2 => ver2::DUAL_PDE_LEVEL,
+ Self::V3 => ver3::DUAL_PDE_LEVEL,
+ }
+ }
+
+ /// Get the number of PDE levels for this MMU version.
+ pub(crate) fn pde_level_count(&self) -> usize {
+ self.pde_levels().len()
+ }
+
+ /// Get the entry size in bytes for a given level.
+ pub(crate) fn entry_size(&self, level: PageTableLevel) -> usize {
+ if level == self.dual_pde_level() {
+ 16 // 128-bit dual PDE
+ } else {
+ 8 // 64-bit PDE/PTE
+ }
+ }
+
+ /// Get the number of entries per page table page for a given level.
+ pub(crate) fn entries_per_page(&self, level: PageTableLevel) -> usize {
+ match self {
+ Self::V2 => match level {
+ // TODO: Calculate these values from the bitfield dynamically
+ // instead of hardcoding them.
+ PageTableLevel::Pdb => 4, // PD3 root: bits [48:47] = 2 bits
+ PageTableLevel::L3 => 256, // PD0 dual: bits [28:21] = 8 bits
+ _ => 512, // PD2, PD1, PT: 9 bits each
+ },
+ Self::V3 => match level {
+ PageTableLevel::Pdb => 2, // PDE4 root: bit [56] = 1 bit, 2 entries
+ PageTableLevel::L4 => 256, // PDE0 dual: bits [28:21] = 8 bits
+ _ => 512, // PDE3, PDE2, PDE1, PT: 9 bits each
+ },
+ }
+ }
+
+ /// Extract the page table index at `level` from `va` for this MMU version.
+ pub(crate) fn level_index(&self, va: VirtualAddress, level: u64) -> u64 {
+ match self {
+ Self::V2 => ver2::VirtualAddressV2::new(va).level_index(level),
+ Self::V3 => ver3::VirtualAddressV3::new(va).level_index(level),
+ }
+ }
+
+ /// Compute upper bound on page table pages needed for `num_virt_pages`.
+ ///
+ /// Walks from PTE level up through PDE levels, accumulating the tree.
+ pub(crate) fn pt_pages_upper_bound(&self, num_virt_pages: usize) -> usize {
+ let mut total = 0;
+
+ // PTE pages at the leaf level.
+ let pte_epp = self.entries_per_page(self.pte_level());
+ let mut pages_at_level = num_virt_pages.div_ceil(pte_epp);
+ total += pages_at_level;
+
+ // Walk PDE levels bottom-up (reverse of pde_levels()).
+ for &level in self.pde_levels().iter().rev() {
+ let epp = self.entries_per_page(level);
+
+ // How many pages at this level do we need to point to
+ // the previous pages_at_level?
+ pages_at_level = pages_at_level.div_ceil(epp);
+ total += pages_at_level;
+ }
+
+ total
+ }
+}
+
/// Memory aperture for Page Table Entries (`PTE`s).
///
/// Determines which memory region the `PTE` points to.
@@ -156,3 +253,228 @@ fn from(val: AperturePde) -> Self {
val as u8
}
}
+
+/// Unified Page Table Entry wrapper for both MMU v2 and v3 `PTE`
+/// types, allowing the walker to work with either format.
+#[derive(Debug, Clone, Copy)]
+pub(crate) enum Pte {
+ /// MMU v2 `PTE` (Turing/Ampere/Ada).
+ V2(ver2::Pte),
+ /// MMU v3 `PTE` (Hopper+).
+ V3(ver3::Pte),
+}
+
+impl Pte {
+ /// Create a `PTE` from a raw `u64` value for the given MMU version.
+ pub(crate) fn new(version: MmuVersion, val: u64) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::Pte::new(val)),
+ MmuVersion::V3 => Self::V3(ver3::Pte::new(val)),
+ }
+ }
+
+ /// Create an invalid `PTE` for the given MMU version.
+ pub(crate) fn invalid(version: MmuVersion) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::Pte::invalid()),
+ MmuVersion::V3 => Self::V3(ver3::Pte::invalid()),
+ }
+ }
+
+ /// Create a valid `PTE` for video memory.
+ pub(crate) fn new_vram(version: MmuVersion, pfn: Pfn, writable: bool) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::Pte::new_vram(pfn, writable)),
+ MmuVersion::V3 => Self::V3(ver3::Pte::new_vram(pfn, writable)),
+ }
+ }
+
+ /// Check if this `PTE` is valid.
+ pub(crate) fn is_valid(&self) -> bool {
+ match self {
+ Self::V2(p) => p.valid(),
+ Self::V3(p) => p.valid(),
+ }
+ }
+
+ /// Get the physical frame number.
+ pub(crate) fn frame_number(&self) -> Pfn {
+ match self {
+ Self::V2(p) => p.frame_number(),
+ Self::V3(p) => p.frame_number(),
+ }
+ }
+
+ /// Get the raw `u64` value.
+ pub(crate) fn raw_u64(&self) -> u64 {
+ match self {
+ Self::V2(p) => p.raw_u64(),
+ Self::V3(p) => p.raw_u64(),
+ }
+ }
+
+ /// Read a `PTE` from VRAM.
+ pub(crate) fn read(
+ window: &mut pramin::PraminWindow<'_>,
+ addr: VramAddress,
+ mmu_version: MmuVersion,
+ ) -> Result<Self> {
+ let val = window.try_read64(addr.raw())?;
+ Ok(Self::new(mmu_version, val))
+ }
+
+ /// Write this `PTE` to VRAM.
+ pub(crate) fn write(&self, window: &mut pramin::PraminWindow<'_>, addr: VramAddress) -> Result {
+ window.try_write64(addr.raw(), self.raw_u64())
+ }
+}
+
+/// Unified Page Directory Entry wrapper for both MMU v2 and v3 `PDE`.
+#[derive(Debug, Clone, Copy)]
+pub(crate) enum Pde {
+ /// MMU v2 `PDE` (Turing/Ampere/Ada).
+ V2(ver2::Pde),
+ /// MMU v3 `PDE` (Hopper+).
+ V3(ver3::Pde),
+}
+
+impl Pde {
+ /// Create a `PDE` from a raw `u64` value for the given MMU version.
+ pub(crate) fn new(version: MmuVersion, val: u64) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::Pde::new(val)),
+ MmuVersion::V3 => Self::V3(ver3::Pde::new(val)),
+ }
+ }
+
+ /// Create a valid `PDE` pointing to a page table in video memory.
+ pub(crate) fn new_vram(version: MmuVersion, table_pfn: Pfn) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::Pde::new_vram(table_pfn)),
+ MmuVersion::V3 => Self::V3(ver3::Pde::new_vram(table_pfn)),
+ }
+ }
+
+ /// Create an invalid `PDE` for the given MMU version.
+ pub(crate) fn invalid(version: MmuVersion) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::Pde::invalid()),
+ MmuVersion::V3 => Self::V3(ver3::Pde::invalid()),
+ }
+ }
+
+ /// Check if this `PDE` is valid.
+ pub(crate) fn is_valid(&self) -> bool {
+ match self {
+ Self::V2(p) => p.is_valid(),
+ Self::V3(p) => p.is_valid(),
+ }
+ }
+
+ /// Get the VRAM address of the page table.
+ pub(crate) fn table_vram_address(&self) -> VramAddress {
+ match self {
+ Self::V2(p) => p.table_vram_address(),
+ Self::V3(p) => p.table_vram_address(),
+ }
+ }
+
+ /// Get the raw `u64` value.
+ pub(crate) fn raw_u64(&self) -> u64 {
+ match self {
+ Self::V2(p) => p.raw_u64(),
+ Self::V3(p) => p.raw_u64(),
+ }
+ }
+
+ /// Read a `PDE` from VRAM.
+ pub(crate) fn read(
+ window: &mut pramin::PraminWindow<'_>,
+ addr: VramAddress,
+ mmu_version: MmuVersion,
+ ) -> Result<Self> {
+ let val = window.try_read64(addr.raw())?;
+ Ok(Self::new(mmu_version, val))
+ }
+
+ /// Write this `PDE` to VRAM.
+ pub(crate) fn write(&self, window: &mut pramin::PraminWindow<'_>, addr: VramAddress) -> Result {
+ window.try_write64(addr.raw(), self.raw_u64())
+ }
+}
+
+/// Unified Dual Page Directory Entry wrapper for both MMU v2 and v3 [`DualPde`].
+#[derive(Debug, Clone, Copy)]
+pub(crate) enum DualPde {
+ /// MMU v2 [`DualPde`] (Turing/Ampere/Ada).
+ V2(ver2::DualPde),
+ /// MMU v3 [`DualPde`] (Hopper+).
+ V3(ver3::DualPde),
+}
+
+impl DualPde {
+ /// Create a [`DualPde`] from raw 128-bit value (two `u64`s) for the given MMU version.
+ pub(crate) fn new(version: MmuVersion, big: u64, small: u64) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::DualPde::new(big, small)),
+ MmuVersion::V3 => Self::V3(ver3::DualPde::new(big, small)),
+ }
+ }
+
+ /// Create a [`DualPde`] with only the small page table pointer set.
+ pub(crate) fn new_small(version: MmuVersion, table_pfn: Pfn) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(ver2::DualPde::new_small(table_pfn)),
+ MmuVersion::V3 => Self::V3(ver3::DualPde::new_small(table_pfn)),
+ }
+ }
+
+ /// Check if the small page table pointer is valid.
+ pub(crate) fn has_small(&self) -> bool {
+ match self {
+ Self::V2(d) => d.has_small(),
+ Self::V3(d) => d.has_small(),
+ }
+ }
+
+ /// Get the small page table VRAM address.
+ pub(crate) fn small_vram_address(&self) -> VramAddress {
+ match self {
+ Self::V2(d) => d.small.table_vram_address(),
+ Self::V3(d) => d.small.table_vram_address(),
+ }
+ }
+
+ /// Get the raw `u64` value of the big PDE.
+ pub(crate) fn big_raw_u64(&self) -> u64 {
+ match self {
+ Self::V2(d) => d.big.raw_u64(),
+ Self::V3(d) => d.big.raw_u64(),
+ }
+ }
+
+ /// Get the raw `u64` value of the small PDE.
+ pub(crate) fn small_raw_u64(&self) -> u64 {
+ match self {
+ Self::V2(d) => d.small.raw_u64(),
+ Self::V3(d) => d.small.raw_u64(),
+ }
+ }
+
+ /// Read a dual PDE (128-bit) from VRAM.
+ pub(crate) fn read(
+ window: &mut pramin::PraminWindow<'_>,
+ addr: VramAddress,
+ mmu_version: MmuVersion,
+ ) -> Result<Self> {
+ let big = window.try_read64(addr.raw())?;
+ let small = window.try_read64(addr.raw() + 8)?;
+ Ok(Self::new(mmu_version, big, small))
+ }
+
+ /// Write this dual PDE (128-bit) to VRAM.
+ pub(crate) fn write(&self, window: &mut pramin::PraminWindow<'_>, addr: VramAddress) -> Result {
+ window.try_write64(addr.raw(), self.big_raw_u64())?;
+ window.try_write64(addr.raw() + 8, self.small_raw_u64())
+ }
+}
--
2.34.1
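The `Pte`, `Pde` and `DualPde` wrappers in this patch all follow the same version-dispatch shape. A minimal standalone sketch of that pattern (illustrative types; the valid-bit position here is not the real hardware layout):

```rust
// Concrete per-generation entry types behind a version-tagged enum.
// Bit 0 as the valid bit is purely illustrative.
#[derive(Clone, Copy)]
struct PteV2(u64);
#[derive(Clone, Copy)]
struct PteV3(u64);

impl PteV2 {
    fn valid(&self) -> bool { self.0 & 1 != 0 }
}
impl PteV3 {
    fn valid(&self) -> bool { self.0 & 1 != 0 }
}

#[derive(Clone, Copy)]
enum MmuVersion { V2, V3 }

enum Pte { V2(PteV2), V3(PteV3) }

impl Pte {
    // Construction selects the concrete type once...
    fn new(version: MmuVersion, val: u64) -> Self {
        match version {
            MmuVersion::V2 => Pte::V2(PteV2(val)),
            MmuVersion::V3 => Pte::V3(PteV3(val)),
        }
    }
    // ...and every accessor forwards with a two-arm match.
    fn is_valid(&self) -> bool {
        match self {
            Pte::V2(p) => p.valid(),
            Pte::V3(p) => p.valid(),
        }
    }
}

fn main() {
    assert!(Pte::new(MmuVersion::V2, 1).is_valid());
    assert!(!Pte::new(MmuVersion::V3, 0).is_valid());
    println!("ok");
}
```

Callers hold a single `Pte` value and never branch on the MMU version themselves; the match arms are the only version-specific code.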
* [PATCH v9 15/23] gpu: nova-core: mm: Add page table walker for MMU v2/v3
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (13 preceding siblings ...)
2026-03-11 0:39 ` [PATCH v9 14/23] gpu: nova-core: mm: Add unified page table entry wrapper enums Joel Fernandes
@ 2026-03-11 0:40 ` Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 16/23] gpu: nova-core: mm: Add Virtual Memory Manager Joel Fernandes
` (7 subsequent siblings)
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:40 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add the page table walker implementation that traverses the page table
hierarchy for both MMU v2 (5-level) and MMU v3 (6-level) to resolve
virtual addresses to physical addresses or find PTE locations.
Currently only v2 has been tested (nova-core only boots pre-Hopper GPUs for
now), with some initial preparatory work done for v3.
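The per-level index extraction the walk depends on can be sketched as follows, assuming uniform 9-bit indices over 4 KiB pages. This is a simplification for illustration; the actual nova-core bit layout is defined per MMU version and differs per level:

```rust
// Hypothetical bit layout: 4 KiB pages, 512-entry tables (9 index bits
// per level). Real NVIDIA MMU v2/v3 field widths vary per level.
const PAGE_SHIFT: u64 = 12;
const INDEX_BITS: u64 = 9;

// Level 0 is the root (PDB); the last level indexes the PTE table.
fn level_index(va: u64, level: usize, num_levels: usize) -> u64 {
    let shift = PAGE_SHIFT + INDEX_BITS * ((num_levels - 1 - level) as u64);
    (va >> shift) & ((1u64 << INDEX_BITS) - 1)
}

fn main() {
    // VA 0x1000 is the second 4 KiB page: PTE index 1, all upper indices 0.
    assert_eq!(level_index(0x1000, 4, 5), 1);
    assert_eq!(level_index(0x1000, 0, 5), 0);
    // One full PTE table (512 pages) covers 2 MiB, so VA 0x20_0000 has L3 index 1.
    assert_eq!(level_index(0x20_0000, 3, 5), 1);
    println!("ok");
}
```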
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/pagetable.rs | 1 +
drivers/gpu/nova-core/mm/pagetable/walk.rs | 218 +++++++++++++++++++++
2 files changed, 219 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/pagetable/walk.rs
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
index 8cc5f72ead11..f9996dcac7c6 100644
--- a/drivers/gpu/nova-core/mm/pagetable.rs
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -10,6 +10,7 @@
pub(crate) mod ver2;
pub(crate) mod ver3;
+pub(crate) mod walk;
use crate::gpu::Architecture;
use crate::mm::{
diff --git a/drivers/gpu/nova-core/mm/pagetable/walk.rs b/drivers/gpu/nova-core/mm/pagetable/walk.rs
new file mode 100644
index 000000000000..c68eb226b9b1
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pagetable/walk.rs
@@ -0,0 +1,218 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Page table walker implementation for NVIDIA GPUs.
+//!
+//! This module provides page table walking functionality for MMU v2 and v3.
+//! The walker traverses the page table hierarchy to resolve virtual addresses
+//! to physical addresses or to find PTE locations.
+//!
+//! # Page Table Hierarchy
+//!
+//! ## MMU v2 (Turing/Ampere/Ada) - 5 levels
+//!
+//! ```text
+//! +-------+ +-------+ +-------+ +---------+ +-------+
+//! | PDB |---->| L1 |---->| L2 |---->| L3 Dual |---->| L4 |
+//! | (L0) | | | | | | PDE | | (PTE) |
+//! +-------+ +-------+ +-------+ +---------+ +-------+
+//! 64-bit 64-bit 64-bit 128-bit 64-bit
+//! PDE PDE PDE (big+small) PTE
+//! ```
+//!
+//! ## MMU v3 (Hopper+) - 6 levels
+//!
+//! ```text
+//! +-------+ +-------+ +-------+ +-------+ +---------+ +-------+
+//! | PDB |---->| L1 |---->| L2 |---->| L3 |---->| L4 Dual |---->| L5 |
+//! | (L0) | | | | | | | | PDE | | (PTE) |
+//! +-------+ +-------+ +-------+ +-------+ +---------+ +-------+
+//! 64-bit 64-bit 64-bit 64-bit 128-bit 64-bit
+//! PDE PDE PDE PDE (big+small) PTE
+//! ```
+//!
+//! # Result of a page table walk
+//!
+//! The walker returns a [`WalkResult`] indicating the outcome.
+
+use kernel::prelude::*;
+
+use super::{
+ DualPde,
+ MmuVersion,
+ PageTableLevel,
+ Pde,
+ Pte, //
+};
+use crate::{
+ mm::{
+ pramin,
+ GpuMm,
+ Pfn,
+ Vfn,
+ VirtualAddress,
+ VramAddress, //
+ },
+ num::{
+ IntoSafeCast, //
+ },
+};
+
+/// Result of walking to a PTE.
+#[derive(Debug, Clone, Copy)]
+pub(crate) enum WalkResult {
+ /// Intermediate page tables are missing (only returned in lookup mode).
+ PageTableMissing,
+ /// PTE exists but is invalid (page not mapped).
+ Unmapped { pte_addr: VramAddress },
+ /// PTE exists and is valid (page is mapped).
+ Mapped { pte_addr: VramAddress, pfn: Pfn },
+}
+
+/// Result of walking PDE levels only.
+///
+/// Returned by [`PtWalk::walk_pde_levels()`] to indicate whether all PDE levels
+/// resolved or a PDE is missing.
+#[derive(Debug, Clone, Copy)]
+pub(crate) enum WalkPdeResult {
+ /// All PDE levels resolved -- returns PTE page table address.
+ Complete {
+ /// VRAM address of the PTE-level page table.
+ pte_table: VramAddress,
+ },
+ /// A PDE is missing and no prepared page was provided by the closure.
+ Missing {
+ /// PDE slot address in the parent page table (where to install).
+ install_addr: VramAddress,
+ /// The page table level that is missing.
+ level: PageTableLevel,
+ },
+}
+
+/// Page table walker for NVIDIA GPUs.
+///
+/// Walks the page table hierarchy (5 levels for v2, 6 for v3) to find PTE
+/// locations or resolve virtual addresses.
+pub(crate) struct PtWalk {
+ pdb_addr: VramAddress,
+ mmu_version: MmuVersion,
+}
+
+impl PtWalk {
+ /// Calculate the VRAM address of an entry within a page table.
+ fn entry_addr(
+ table: VramAddress,
+ mmu_version: MmuVersion,
+ level: PageTableLevel,
+ index: u64,
+ ) -> VramAddress {
+ let entry_size: u64 = mmu_version.entry_size(level).into_safe_cast();
+ VramAddress::new(table.raw_u64() + index * entry_size)
+ }
+
+ /// Create a new page table walker.
+ pub(crate) fn new(pdb_addr: VramAddress, mmu_version: MmuVersion) -> Self {
+ Self {
+ pdb_addr,
+ mmu_version,
+ }
+ }
+
+ /// Walk PDE levels with closure-based resolution for missing PDEs.
+ ///
+ /// Traverses all PDE levels for the MMU version. At each level, reads the PDE.
+ /// If valid, extracts the child table address and continues. If missing, calls
+ /// `resolve_prepared(install_addr)` to resolve the missing PDE.
+ pub(crate) fn walk_pde_levels(
+ &self,
+ window: &mut pramin::PraminWindow<'_>,
+ vfn: Vfn,
+ resolve_prepared: impl Fn(VramAddress) -> Option<VramAddress>,
+ ) -> Result<WalkPdeResult> {
+ let va = VirtualAddress::from(vfn);
+ let mut cur_table = self.pdb_addr;
+
+ for &level in self.mmu_version.pde_levels() {
+ let idx = self.mmu_version.level_index(va, level.as_index());
+ let install_addr = Self::entry_addr(cur_table, self.mmu_version, level, idx);
+
+ if level == self.mmu_version.dual_pde_level() {
+ // 128-bit dual PDE with big+small page table pointers.
+ let dpde = DualPde::read(window, install_addr, self.mmu_version)?;
+ if dpde.has_small() {
+ cur_table = dpde.small_vram_address();
+ continue;
+ }
+ } else {
+ // Regular 64-bit PDE.
+ let pde = Pde::read(window, install_addr, self.mmu_version)?;
+ if pde.is_valid() {
+ cur_table = pde.table_vram_address();
+ continue;
+ }
+ }
+
+ // PDE missing in HW. Ask caller for resolution.
+ if let Some(prepared_addr) = resolve_prepared(install_addr) {
+ cur_table = prepared_addr;
+ continue;
+ }
+
+ return Ok(WalkPdeResult::Missing {
+ install_addr,
+ level,
+ });
+ }
+
+ Ok(WalkPdeResult::Complete {
+ pte_table: cur_table,
+ })
+ }
+
+ /// Walk to PTE for lookup only (no allocation).
+ ///
+ /// Returns [`WalkResult::PageTableMissing`] if intermediate tables don't exist.
+ pub(crate) fn walk_to_pte_lookup(&self, mm: &GpuMm, vfn: Vfn) -> Result<WalkResult> {
+ let mut window = mm.pramin().window()?;
+ self.walk_to_pte_lookup_with_window(&mut window, vfn)
+ }
+
+ /// Walk to PTE using a caller-provided PRAMIN window (lookup only).
+ ///
+ /// Uses [`PtWalk::walk_pde_levels()`] for the PDE traversal, then reads the PTE at
+ /// the leaf level. Useful for resolving multiple VFNs with a single PRAMIN window
+ /// acquisition. Used by [`Vmm::execute_map()`] and [`Vmm::unmap_pages()`].
+ pub(crate) fn walk_to_pte_lookup_with_window(
+ &self,
+ window: &mut pramin::PraminWindow<'_>,
+ vfn: Vfn,
+ ) -> Result<WalkResult> {
+ match self.walk_pde_levels(window, vfn, |_| None)? {
+ WalkPdeResult::Complete { pte_table } => {
+ Self::read_pte_at_level(window, vfn, pte_table, self.mmu_version)
+ }
+ WalkPdeResult::Missing { .. } => Ok(WalkResult::PageTableMissing),
+ }
+ }
+
+ /// Read the PTE at the PTE level given the PTE table address.
+ fn read_pte_at_level(
+ window: &mut pramin::PraminWindow<'_>,
+ vfn: Vfn,
+ pte_table: VramAddress,
+ mmu_version: MmuVersion,
+ ) -> Result<WalkResult> {
+ let va = VirtualAddress::from(vfn);
+ let pte_level = mmu_version.pte_level();
+ let pte_idx = mmu_version.level_index(va, pte_level.as_index());
+ let pte_addr = Self::entry_addr(pte_table, mmu_version, pte_level, pte_idx);
+ let pte = Pte::read(window, pte_addr, mmu_version)?;
+
+ if pte.is_valid() {
+ return Ok(WalkResult::Mapped {
+ pte_addr,
+ pfn: pte.frame_number(),
+ });
+ }
+ Ok(WalkResult::Unmapped { pte_addr })
+ }
+}
--
2.34.1
* [PATCH v9 16/23] gpu: nova-core: mm: Add Virtual Memory Manager
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (14 preceding siblings ...)
2026-03-11 0:40 ` [PATCH v9 15/23] gpu: nova-core: mm: Add page table walker for MMU v2/v3 Joel Fernandes
@ 2026-03-11 0:40 ` Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 17/23] gpu: nova-core: mm: Add virtual address range tracking to VMM Joel Fernandes
` (6 subsequent siblings)
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:40 UTC (permalink / raw)
To: linux-kernel
Add the Virtual Memory Manager (VMM) infrastructure for GPU address
space management. Each Vmm instance manages a single address space
identified by its Page Directory Base (PDB) address, used for Channel,
BAR1 and BAR2 mappings.
Mapping APIs and virtual address range tracking are added in later
commits.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm.rs | 1 +
drivers/gpu/nova-core/mm/vmm.rs | 61 +++++++++++++++++++++++++++++++++
2 files changed, 62 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/vmm.rs
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index b0aad90e94bc..6e58f597fadd 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -7,6 +7,7 @@
pub(crate) mod pagetable;
pub(crate) mod pramin;
pub(crate) mod tlb;
+pub(crate) mod vmm;
use kernel::{
devres::Devres,
diff --git a/drivers/gpu/nova-core/mm/vmm.rs b/drivers/gpu/nova-core/mm/vmm.rs
new file mode 100644
index 000000000000..f0e6ffbe2b7a
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/vmm.rs
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Virtual Memory Manager for NVIDIA GPU page table management.
+//!
+//! The [`Vmm`] provides high-level page mapping and unmapping operations for GPU
+//! virtual address spaces (Channels, BAR1, BAR2). It wraps the page table walker
+//! and handles TLB flushing after modifications.
+
+use kernel::{
+ gpu::buddy::AllocatedBlocks,
+ prelude::*, //
+};
+
+use crate::mm::{
+ pagetable::{
+ walk::{PtWalk, WalkResult},
+ MmuVersion, //
+ },
+ GpuMm,
+ Pfn,
+ Vfn,
+ VramAddress, //
+};
+
+/// Virtual Memory Manager for a GPU address space.
+///
+/// Each [`Vmm`] instance manages a single address space identified by its Page
+/// Directory Base (`PDB`) address. The [`Vmm`] is used for Channel, BAR1 and
+/// BAR2 mappings.
+pub(crate) struct Vmm {
+ pub(crate) pdb_addr: VramAddress,
+ pub(crate) mmu_version: MmuVersion,
+ /// Page table allocations required for mappings.
+ page_table_allocs: KVec<Pin<KBox<AllocatedBlocks>>>,
+}
+
+impl Vmm {
+ /// Create a new [`Vmm`] for the given Page Directory Base address.
+ pub(crate) fn new(pdb_addr: VramAddress, mmu_version: MmuVersion) -> Result<Self> {
+ // Only MMU v2 is supported for now.
+ if mmu_version != MmuVersion::V2 {
+ return Err(ENOTSUPP);
+ }
+
+ Ok(Self {
+ pdb_addr,
+ mmu_version,
+ page_table_allocs: KVec::new(),
+ })
+ }
+
+ /// Read the [`Pfn`] for a mapped [`Vfn`] if one is mapped.
+ pub(crate) fn read_mapping(&self, mm: &GpuMm, vfn: Vfn) -> Result<Option<Pfn>> {
+ let walker = PtWalk::new(self.pdb_addr, self.mmu_version);
+
+ match walker.walk_to_pte_lookup(mm, vfn)? {
+ WalkResult::Mapped { pfn, .. } => Ok(Some(pfn)),
+ WalkResult::Unmapped { .. } | WalkResult::PageTableMissing => Ok(None),
+ }
+ }
+}
--
2.34.1
* [PATCH v9 17/23] gpu: nova-core: mm: Add virtual address range tracking to VMM
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (15 preceding siblings ...)
2026-03-11 0:40 ` [PATCH v9 16/23] gpu: nova-core: mm: Add Virtual Memory Manager Joel Fernandes
@ 2026-03-11 0:40 ` Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 18/23] gpu: nova-core: mm: Add multi-page mapping API " Joel Fernandes
` (5 subsequent siblings)
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:40 UTC (permalink / raw)
To: linux-kernel
Add virtual address range tracking to the VMM using a buddy allocator.
This enables contiguous virtual address range allocation for mappings.
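The bookkeeping involved can be sketched roughly as below (hypothetical helper names; the real driver does this through the `GpuBuddy` bindings):

```rust
// PAGE_SIZE matching the 4 KiB chunk size the VMM hands to the buddy.
const PAGE_SIZE: u64 = 4096;

// Offset-to-VFN conversion, as alloc_vfn_range() derives the starting VFN
// from the first (and, for a contiguous allocation, only) block's offset.
fn vfn_from_offset(offset: u64) -> u64 {
    offset / PAGE_SIZE
}

// A buddy allocator serves a contiguous request from a power-of-two order;
// this computes the smallest order covering `num_chunks` chunks.
fn buddy_order(num_chunks: u64) -> u32 {
    num_chunks.next_power_of_two().trailing_zeros()
}

fn main() {
    assert_eq!(vfn_from_offset(0x3000), 3);
    assert_eq!(buddy_order(1), 0);
    assert_eq!(buddy_order(3), 2); // rounded up to 4 chunks
    println!("ok");
}
```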
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/vmm.rs | 99 +++++++++++++++++++++++++++++----
1 file changed, 88 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/nova-core/mm/vmm.rs b/drivers/gpu/nova-core/mm/vmm.rs
index f0e6ffbe2b7a..78e614d8829d 100644
--- a/drivers/gpu/nova-core/mm/vmm.rs
+++ b/drivers/gpu/nova-core/mm/vmm.rs
@@ -7,19 +7,35 @@
//! and handles TLB flushing after modifications.
use kernel::{
- gpu::buddy::AllocatedBlocks,
- prelude::*, //
+ gpu::buddy::{
+ AllocatedBlocks,
+ GpuBuddy,
+ GpuBuddyAllocFlag,
+ GpuBuddyAllocMode,
+ GpuBuddyParams, //
+ },
+ prelude::*,
+ ptr::Alignment,
+ sizes::SZ_4K, //
};
-use crate::mm::{
- pagetable::{
- walk::{PtWalk, WalkResult},
- MmuVersion, //
+use core::ops::Range;
+
+use crate::{
+ mm::{
+ pagetable::{
+ walk::{PtWalk, WalkResult},
+ MmuVersion, //
+ },
+ GpuMm,
+ Pfn,
+ Vfn,
+ VramAddress,
+ PAGE_SIZE, //
+ },
+ num::{
+ IntoSafeCast, //
},
- GpuMm,
- Pfn,
- Vfn,
- VramAddress, //
};
/// Virtual Memory Manager for a GPU address space.
@@ -32,23 +48,84 @@ pub(crate) struct Vmm {
pub(crate) mmu_version: MmuVersion,
/// Page table allocations required for mappings.
page_table_allocs: KVec<Pin<KBox<AllocatedBlocks>>>,
+ /// Buddy allocator for virtual address range tracking.
+ virt_buddy: GpuBuddy,
}
impl Vmm {
/// Create a new [`Vmm`] for the given Page Directory Base address.
- pub(crate) fn new(pdb_addr: VramAddress, mmu_version: MmuVersion) -> Result<Self> {
+ ///
+ /// The [`Vmm`] will manage a virtual address space of `va_size` bytes.
+ pub(crate) fn new(
+ pdb_addr: VramAddress,
+ mmu_version: MmuVersion,
+ va_size: u64,
+ ) -> Result<Self> {
// Only MMU v2 is supported for now.
if mmu_version != MmuVersion::V2 {
return Err(ENOTSUPP);
}
+ let virt_buddy = GpuBuddy::new(GpuBuddyParams {
+ base_offset: 0,
+ physical_memory_size: va_size,
+ chunk_size: SZ_4K,
+ })?;
+
Ok(Self {
pdb_addr,
mmu_version,
page_table_allocs: KVec::new(),
+ virt_buddy,
})
}
+ /// Allocate a contiguous virtual frame number range.
+ ///
+ /// # Arguments
+ ///
+ /// - `num_pages`: Number of pages to allocate.
+ /// - `va_range`: `None` = allocate anywhere, `Some(range)` = constrain allocation to the given
+ /// range.
+ pub(crate) fn alloc_vfn_range(
+ &self,
+ num_pages: usize,
+ va_range: Option<Range<u64>>,
+ ) -> Result<(Vfn, Pin<KBox<AllocatedBlocks>>)> {
+ let size = num_pages.checked_mul(PAGE_SIZE).ok_or(EOVERFLOW)?;
+
+ let mode = match va_range {
+ Some(r) => {
+ let range_size = r.end.checked_sub(r.start).ok_or(EOVERFLOW)?;
+ if range_size != size.into_safe_cast() {
+ return Err(EINVAL);
+ }
+ GpuBuddyAllocMode::Range {
+ start: r.start,
+ end: r.end,
+ }
+ }
+ None => GpuBuddyAllocMode::Simple,
+ };
+
+ let alloc = KBox::pin_init(
+ self.virt_buddy.alloc_blocks(
+ mode,
+ size,
+ Alignment::new::<SZ_4K>(),
+ GpuBuddyAllocFlag::Contiguous,
+ ),
+ GFP_KERNEL,
+ )?;
+
+ // Get the starting offset of the first block (the only block, since the
+ // allocation is contiguous).
+ let offset = alloc.iter().next().ok_or(ENOMEM)?.offset();
+ let page_size: u64 = PAGE_SIZE.into_safe_cast();
+ let vfn = Vfn::new(offset / page_size);
+
+ Ok((vfn, alloc))
+ }
+
/// Read the [`Pfn`] for a mapped [`Vfn`] if one is mapped.
pub(crate) fn read_mapping(&self, mm: &GpuMm, vfn: Vfn) -> Result<Option<Pfn>> {
let walker = PtWalk::new(self.pdb_addr, self.mmu_version);
--
2.34.1
* [PATCH v9 18/23] gpu: nova-core: mm: Add multi-page mapping API to VMM
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (16 preceding siblings ...)
2026-03-11 0:40 ` [PATCH v9 17/23] gpu: nova-core: mm: Add virtual address range tracking to VMM Joel Fernandes
@ 2026-03-11 0:40 ` Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 19/23] gpu: nova-core: Add BAR1 aperture type and size constant Joel Fernandes
` (4 subsequent siblings)
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:40 UTC (permalink / raw)
To: linux-kernel
Add the page table mapping and unmapping API to the Virtual Memory
Manager, implementing a two-phase prepare/execute model suitable for
use both inside and outside the DMA fence signalling critical path.
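The two-phase split can be illustrated with a small userspace sketch (illustrative names; `Vec` stands in for the kernel-side allocations):

```rust
// The prepare phase does all memory allocation up front; the execute phase
// only writes into storage that already exists, so it cannot block on the
// allocator inside the fence signalling critical path.
struct Prepared {
    slots: Vec<u64>,
}

fn prepare(num_pages: usize) -> Prepared {
    // All allocation happens here (GFP_KERNEL context in the real driver).
    Prepared { slots: vec![0; num_pages] }
}

fn execute(mut prepared: Prepared, pfns: &[u64]) -> Result<Vec<u64>, &'static str> {
    if pfns.len() != prepared.slots.len() {
        return Err("pfn count mismatch");
    }
    // No allocation here: only writes into pre-reserved slots.
    prepared.slots.copy_from_slice(pfns);
    Ok(prepared.slots)
}

fn main() {
    let mapped = execute(prepare(2), &[7, 8]).unwrap();
    assert_eq!(mapped, vec![7, 8]);
    assert!(execute(prepare(1), &[1, 2]).is_err());
    println!("ok");
}
```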
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/vmm.rs | 366 +++++++++++++++++++++++++++++++-
1 file changed, 363 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/nova-core/mm/vmm.rs b/drivers/gpu/nova-core/mm/vmm.rs
index 78e614d8829d..95ee3496e0a6 100644
--- a/drivers/gpu/nova-core/mm/vmm.rs
+++ b/drivers/gpu/nova-core/mm/vmm.rs
@@ -11,21 +11,34 @@
AllocatedBlocks,
GpuBuddy,
GpuBuddyAllocFlag,
+ GpuBuddyAllocFlags,
GpuBuddyAllocMode,
GpuBuddyParams, //
},
prelude::*,
ptr::Alignment,
+ rbtree::{RBTree, RBTreeNode},
sizes::SZ_4K, //
};
-use core::ops::Range;
+use core::{
+ cell::Cell,
+ ops::Range, //
+};
use crate::{
mm::{
pagetable::{
- walk::{PtWalk, WalkResult},
- MmuVersion, //
+ walk::{
+ PtWalk,
+ WalkPdeResult,
+ WalkResult, //
+ },
+ DualPde,
+ MmuVersion,
+ PageTableLevel,
+ Pde,
+ Pte, //
},
GpuMm,
Pfn,
@@ -50,6 +63,74 @@ pub(crate) struct Vmm {
page_table_allocs: KVec<Pin<KBox<AllocatedBlocks>>>,
/// Buddy allocator for virtual address range tracking.
virt_buddy: GpuBuddy,
+ /// Prepared PT pages pending PDE installation, keyed by `install_addr`.
+ ///
+ /// Populated by `Vmm` mapping prepare phase and drained in the execute phase.
+ /// Shared by all pending maps in the `Vmm`, thus preventing races where 2
+ /// maps might be trying to install the same page table/directory entry pointer.
+ pt_pages: RBTree<VramAddress, PreparedPtPage>,
+}
+
+/// A pre-allocated and zeroed page table page.
+///
+/// Created during the mapping prepare phase and consumed during the mapping execute phase.
+/// Stored in an [`RBTree`] keyed by the PDE slot address (`install_addr`).
+struct PreparedPtPage {
+ /// The allocated and zeroed page table page.
+ alloc: Pin<KBox<AllocatedBlocks>>,
+ /// Page table level -- needed to determine if this PT page is for a dual PDE.
+ level: PageTableLevel,
+}
+
+/// Multi-page prepared mapping -- VA range allocated, ready for execute.
+///
+/// Produced by [`Vmm::prepare_map()`], consumed by [`Vmm::execute_map()`].
+/// The struct owns the VA space allocation between prepare and execute phases.
+pub(crate) struct PreparedMapping {
+ vfn_start: Vfn,
+ num_pages: usize,
+ vfn_alloc: Pin<KBox<AllocatedBlocks>>,
+}
+
+/// Result of a mapping operation -- tracks the active mapped range.
+///
+/// Returned by [`Vmm::execute_map()`] and [`Vmm::map_pages()`].
+/// Owns the VA allocation; the VA range is freed when this is dropped.
+/// Callers must call [`Vmm::unmap_pages()`] before dropping to invalidate
+/// PTEs (dropping only frees the VA range, not the PTE entries).
+pub(crate) struct MappedRange {
+ pub(crate) vfn_start: Vfn,
+ pub(crate) num_pages: usize,
+ /// VA allocation -- freed when [`MappedRange`] is dropped.
+ _vfn_alloc: Pin<KBox<AllocatedBlocks>>,
+ /// Logs a warning if dropped without unmapping.
+ _drop_guard: MustUnmapGuard,
+}
+
+/// Guard that logs a warning once if a [`MappedRange`] is dropped without
+/// calling [`Vmm::unmap_pages()`].
+struct MustUnmapGuard {
+ armed: Cell<bool>,
+}
+
+impl MustUnmapGuard {
+ const fn new() -> Self {
+ Self {
+ armed: Cell::new(true),
+ }
+ }
+
+ fn disarm(&self) {
+ self.armed.set(false);
+ }
+}
+
+impl Drop for MustUnmapGuard {
+ fn drop(&mut self) {
+ if self.armed.get() {
+ kernel::pr_warn!("MappedRange dropped without calling unmap_pages()\n");
+ }
+ }
}
impl Vmm {
@@ -77,6 +158,7 @@ pub(crate) fn new(
mmu_version,
page_table_allocs: KVec::new(),
virt_buddy,
+ pt_pages: RBTree::new(),
})
}
@@ -135,4 +217,282 @@ pub(crate) fn read_mapping(&self, mm: &GpuMm, vfn: Vfn) -> Result<Option<Pfn>> {
WalkResult::Unmapped { .. } | WalkResult::PageTableMissing => Ok(None),
}
}
+
+ /// Allocate and zero a physical page table page for a specific PDE slot.
+ /// Called during the map prepare phase.
+ fn alloc_and_zero_page_table(
+ &mut self,
+ mm: &GpuMm,
+ level: PageTableLevel,
+ ) -> Result<PreparedPtPage> {
+ let blocks = KBox::pin_init(
+ mm.buddy().alloc_blocks(
+ GpuBuddyAllocMode::Simple,
+ SZ_4K,
+ Alignment::new::<SZ_4K>(),
+ GpuBuddyAllocFlags::default(),
+ ),
+ GFP_KERNEL,
+ )?;
+
+ // Get page's VRAM address from the allocation.
+ let page_vram = VramAddress::new(blocks.iter().next().ok_or(ENOMEM)?.offset());
+
+ // Zero via PRAMIN.
+ let mut window = mm.pramin().window()?;
+ let base = page_vram.raw();
+ for off in (0..PAGE_SIZE).step_by(8) {
+ window.try_write64(base + off, 0)?;
+ }
+
+ Ok(PreparedPtPage {
+ alloc: blocks,
+ level,
+ })
+ }
+
+ /// Ensure all intermediate page table pages are prepared for a [`Vfn`]. This only
+ /// determines which PDE pages are missing and allocates pages for them; their
+ /// installation is deferred to the execute phase.
+ ///
+ /// PRAMIN is released before each allocation and re-acquired afterwards. Memory
+ /// allocations are never done while holding the PRAMIN window, to avoid deadlocks
+ /// with the fence signalling critical path.
+ fn ensure_pte_path(&mut self, mm: &GpuMm, vfn: Vfn) -> Result {
+ let walker = PtWalk::new(self.pdb_addr, self.mmu_version);
+ let max_iter = 2 * self.mmu_version.pde_level_count();
+
+ // Keep looping until all PDE levels are resolved.
+ for _ in 0..max_iter {
+ let mut window = mm.pramin().window()?;
+
+ // Walk PDE levels. The closure checks self.pt_pages for prepared-but-uninstalled
+ // pages, letting the walker continue through them as if they were installed in HW.
+ // The walker keeps calling the closure to get these "prepared but not installed" pages.
+ let result = walker.walk_pde_levels(&mut window, vfn, |install_addr| {
+ self.pt_pages
+ .get(&install_addr)
+ .and_then(|p| p.alloc.iter().next().map(|b| VramAddress::new(b.offset())))
+ })?;
+
+ match result {
+ WalkPdeResult::Complete { .. } => {
+ // All PDE levels resolved.
+ return Ok(());
+ }
+ WalkPdeResult::Missing {
+ install_addr,
+ level,
+ } => {
+ // Drop PRAMIN before allocation.
+ drop(window);
+ let page = self.alloc_and_zero_page_table(mm, level)?;
+ let node = RBTreeNode::new(install_addr, page, GFP_KERNEL)?;
+ let old = self.pt_pages.insert(node);
+ if old.is_some() {
+ kernel::pr_warn_once!(
+ "VMM: duplicate install_addr in pt_pages (internal consistency error)\n"
+ );
+ return Err(EIO);
+ }
+
+ // Loop: re-acquire PRAMIN and re-walk from root.
+ }
+ }
+ }
+
+ kernel::pr_warn!(
+ "VMM: ensure_pte_path: loop exhausted after {} iters (VFN {:?})\n",
+ max_iter,
+ vfn
+ );
+ Err(EIO)
+ }
+
+ /// Prepare resources for mapping `num_pages` pages.
+ ///
+ /// Allocates a contiguous VA range, then walks the hierarchy per-VFN to prepare pages
+ /// for all missing PDEs. Returns a [`PreparedMapping`] with the VA allocation.
+ ///
+ /// If `va_range` is not `None`, the VA range is constrained to the given range. Safe
+ /// to call outside the fence signalling critical path.
+ pub(crate) fn prepare_map(
+ &mut self,
+ mm: &GpuMm,
+ num_pages: usize,
+ va_range: Option<Range<u64>>,
+ ) -> Result<PreparedMapping> {
+ if num_pages == 0 {
+ return Err(EINVAL);
+ }
+
+ // Pre-reserve so execute_map() can use push_within_capacity (no alloc in
+ // fence signalling critical path).
+ // Upper bound on page table pages needed for the full tree (PTE pages + PDE
+ // pages at all levels).
+ let pt_upper_bound = self.mmu_version.pt_pages_upper_bound(num_pages);
+ self.page_table_allocs.reserve(pt_upper_bound, GFP_KERNEL)?;
+
+ // Allocate contiguous VA range.
+ let (vfn_start, vfn_alloc) = self.alloc_vfn_range(num_pages, va_range)?;
+
+ // Walk the hierarchy per-VFN to prepare pages for all missing PDEs.
+ for i in 0..num_pages {
+ let i_u64: u64 = i.into_safe_cast();
+ let vfn = Vfn::new(vfn_start.raw() + i_u64);
+ self.ensure_pte_path(mm, vfn)?;
+ }
+
+ Ok(PreparedMapping {
+ vfn_start,
+ num_pages,
+ vfn_alloc,
+ })
+ }
+
+ /// Execute a prepared multi-page mapping.
+ ///
+ /// Drain prepared PT pages and install PDEs followed by single TLB flush.
+ pub(crate) fn execute_map(
+ &mut self,
+ mm: &GpuMm,
+ prepared: PreparedMapping,
+ pfns: &[Pfn],
+ writable: bool,
+ ) -> Result<MappedRange> {
+ if pfns.len() != prepared.num_pages {
+ return Err(EINVAL);
+ }
+
+ let PreparedMapping {
+ vfn_start,
+ num_pages,
+ vfn_alloc,
+ } = prepared;
+
+ let walker = PtWalk::new(self.pdb_addr, self.mmu_version);
+ let mut window = mm.pramin().window()?;
+
+ // First, drain self.pt_pages, install all pending PDEs.
+ let mut cursor = self.pt_pages.cursor_front_mut();
+ while let Some(c) = cursor {
+ let (next, node) = c.remove_current();
+ let (install_addr, page) = node.to_key_value();
+ let page_vram = VramAddress::new(page.alloc.iter().next().ok_or(ENOMEM)?.offset());
+
+ if page.level == self.mmu_version.dual_pde_level() {
+ let new_dpde = DualPde::new_small(self.mmu_version, Pfn::from(page_vram));
+ new_dpde.write(&mut window, install_addr)?;
+ } else {
+ let new_pde = Pde::new_vram(self.mmu_version, Pfn::from(page_vram));
+ new_pde.write(&mut window, install_addr)?;
+ }
+
+ // Track the allocated pages in the `Vmm`.
+ self.page_table_allocs
+ .push_within_capacity(page.alloc)
+ .map_err(|_| ENOMEM)?;
+
+ cursor = next;
+ }
+
+ // Next, write PTEs (all PDEs now installed in HW).
+ for (i, &pfn) in pfns.iter().enumerate() {
+ let i_u64: u64 = i.into_safe_cast();
+ let vfn = Vfn::new(vfn_start.raw() + i_u64);
+ let result = walker.walk_to_pte_lookup_with_window(&mut window, vfn)?;
+
+ match result {
+ WalkResult::Unmapped { pte_addr } | WalkResult::Mapped { pte_addr, .. } => {
+ let pte = Pte::new_vram(self.mmu_version, pfn, writable);
+ pte.write(&mut window, pte_addr)?;
+ }
+ WalkResult::PageTableMissing => {
+ kernel::pr_warn_once!("VMM: page table missing for VFN {vfn:?}\n");
+ return Err(EIO);
+ }
+ }
+ }
+
+ drop(window);
+
+ // Finally, flush the TLB.
+ mm.tlb().flush(self.pdb_addr)?;
+
+ Ok(MappedRange {
+ vfn_start,
+ num_pages,
+ _vfn_alloc: vfn_alloc,
+ _drop_guard: MustUnmapGuard::new(),
+ })
+ }
+
+ /// Map pages, performing prepare and execute in a single call.
+ ///
+ /// This is a convenience wrapper for callers outside the fence signalling critical
+ /// path (e.g., BAR mappings). For DRM usecases, [`Vmm::prepare_map()`] and
+ /// [`Vmm::execute_map()`] will be called separately.
+ pub(crate) fn map_pages(
+ &mut self,
+ mm: &GpuMm,
+ pfns: &[Pfn],
+ va_range: Option<Range<u64>>,
+ writable: bool,
+ ) -> Result<MappedRange> {
+ if pfns.is_empty() {
+ return Err(EINVAL);
+ }
+
+ // Check if provided VA range is sufficient (if provided).
+ if let Some(ref range) = va_range {
+ let required: u64 = pfns
+ .len()
+ .checked_mul(PAGE_SIZE)
+ .ok_or(EOVERFLOW)?
+ .into_safe_cast();
+ let available = range.end.checked_sub(range.start).ok_or(EINVAL)?;
+ if available < required {
+ return Err(EINVAL);
+ }
+ }
+
+ let prepared = self.prepare_map(mm, pfns.len(), va_range)?;
+ self.execute_map(mm, prepared, pfns, writable)
+ }
+
+ /// Unmap all pages in a [`MappedRange`] with a single TLB flush.
+ ///
+ /// Takes the range by value (consuming it), invalidates the PTEs for the range,
+ /// flushes the TLB, and then drops the range (freeing the VA). The PRAMIN lock
+ /// is held while the PTEs are invalidated.
+ pub(crate) fn unmap_pages(&mut self, mm: &GpuMm, range: MappedRange) -> Result {
+ let walker = PtWalk::new(self.pdb_addr, self.mmu_version);
+ let invalid_pte = Pte::invalid(self.mmu_version);
+
+ let mut window = mm.pramin().window()?;
+ for i in 0..range.num_pages {
+ let i_u64: u64 = i.into_safe_cast();
+ let vfn = Vfn::new(range.vfn_start.raw() + i_u64);
+ let result = walker.walk_to_pte_lookup_with_window(&mut window, vfn)?;
+
+ match result {
+ WalkResult::Mapped { pte_addr, .. } | WalkResult::Unmapped { pte_addr } => {
+ invalid_pte.write(&mut window, pte_addr)?;
+ }
+ WalkResult::PageTableMissing => {
+ continue;
+ }
+ }
+ }
+ drop(window);
+
+ mm.tlb().flush(self.pdb_addr)?;
+
+ // TODO: Internal page table pages (PDE, PTE pages) are still kept around.
+ // This is by design as repeated maps/unmaps will be fast. As a future TODO,
+ // we can add a reclaimer here to reclaim if VRAM is short. For now, the PT
+ // pages are dropped once the `Vmm` is dropped.
+
+ range._drop_guard.disarm(); // Unmap complete, Ok to drop MappedRange.
+ Ok(())
+ }
}
--
2.34.1
* [PATCH v9 19/23] gpu: nova-core: Add BAR1 aperture type and size constant
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (17 preceding siblings ...)
2026-03-11 0:40 ` [PATCH v9 18/23] gpu: nova-core: mm: Add multi-page mapping API " Joel Fernandes
@ 2026-03-11 0:40 ` Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 20/23] gpu: nova-core: mm: Add BAR1 user interface Joel Fernandes
` (3 subsequent siblings)
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:40 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add BAR1_SIZE constant and Bar1 type alias for the 256MB BAR1 aperture.
These are prerequisites for BAR1 memory access functionality.
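As a standalone sketch of the pattern (a model for illustration, not the kernel's `pci::Bar`), encoding the aperture size as a const generic parameter lets offset checks read the size straight from the type, which is what the `Bar0`/`Bar1` aliases below rely on:

```rust
// Model only: a BAR whose size is a const-generic parameter, mirroring how
// `Bar0 = pci::Bar<BAR0_SIZE>` and `Bar1 = pci::Bar<BAR1_SIZE>` encode the
// aperture size in the type.
const SZ_16M: usize = 16 << 20;
const SZ_256M: usize = 256 << 20;

struct Bar<const SIZE: usize>;

impl<const SIZE: usize> Bar<SIZE> {
    /// Bounds-check an offset against the aperture size carried by the type.
    fn check(offset: usize) -> Result<usize, ()> {
        if offset < SIZE { Ok(offset) } else { Err(()) }
    }
}

type Bar0 = Bar<SZ_16M>;
type Bar1 = Bar<SZ_256M>;

fn main() {
    // An offset past BAR0's 16MB limit is still valid within BAR1's 256MB window.
    assert!(Bar0::check(SZ_16M).is_err());
    assert!(Bar1::check(SZ_16M).is_ok());
    assert!(Bar1::check(SZ_256M).is_err());
    println!("ok");
}
```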
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/driver.rs | 8 +++++++-
drivers/gpu/nova-core/gsp/commands.rs | 4 ++++
drivers/gpu/nova-core/gsp/fw/commands.rs | 8 ++++++++
3 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/nova-core/driver.rs b/drivers/gpu/nova-core/driver.rs
index 84b0e1703150..b4311adf4cef 100644
--- a/drivers/gpu/nova-core/driver.rs
+++ b/drivers/gpu/nova-core/driver.rs
@@ -13,7 +13,10 @@
Vendor, //
},
prelude::*,
- sizes::SZ_16M,
+ sizes::{
+ SZ_16M,
+ SZ_256M, //
+ },
sync::{
atomic::{
Atomic,
@@ -37,6 +40,7 @@ pub(crate) struct NovaCore {
}
const BAR0_SIZE: usize = SZ_16M;
+pub(crate) const BAR1_SIZE: usize = SZ_256M;
// For now we only support Ampere which can use up to 47-bit DMA addresses.
//
@@ -47,6 +51,8 @@ pub(crate) struct NovaCore {
const GPU_DMA_BITS: u32 = 47;
pub(crate) type Bar0 = pci::Bar<BAR0_SIZE>;
+#[expect(dead_code)]
+pub(crate) type Bar1 = pci::Bar<BAR1_SIZE>;
kernel::pci_device_table!(
PCI_TABLE,
diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
index 18dd86a38d46..1901c8928ab8 100644
--- a/drivers/gpu/nova-core/gsp/commands.rs
+++ b/drivers/gpu/nova-core/gsp/commands.rs
@@ -190,6 +190,9 @@ fn init(&self) -> impl Init<Self::Command, Self::InitError> {
/// The reply from the GSP to the [`GetGspStaticInfo`] command.
pub(crate) struct GetGspStaticInfoReply {
gpu_name: [u8; 64],
+ /// BAR1 Page Directory Entry base address.
+ #[expect(dead_code)]
+ pub(crate) bar1_pde_base: u64,
/// Usable FB (VRAM) region for driver memory allocation.
pub(crate) usable_fb_region: Range<u64>,
/// End of VRAM.
@@ -211,6 +214,7 @@ fn read(
Ok(GetGspStaticInfoReply {
gpu_name: msg.gpu_name_str(),
+ bar1_pde_base: msg.bar1_pde_base(),
usable_fb_region: base..base.saturating_add(size),
total_fb_end,
})
diff --git a/drivers/gpu/nova-core/gsp/fw/commands.rs b/drivers/gpu/nova-core/gsp/fw/commands.rs
index acaf92cd6735..75a3d602e6ce 100644
--- a/drivers/gpu/nova-core/gsp/fw/commands.rs
+++ b/drivers/gpu/nova-core/gsp/fw/commands.rs
@@ -117,6 +117,14 @@ impl GspStaticConfigInfo {
self.0.gpuNameString
}
+ /// Returns the BAR1 Page Directory Entry base address.
+ ///
+ /// This is the root page table address for BAR1 virtual memory,
+ /// set up by GSP-RM firmware.
+ pub(crate) fn bar1_pde_base(&self) -> u64 {
+ self.0.bar1PdeBase
+ }
+
/// Extract the first usable FB region from GSP firmware data.
///
/// Returns the first region suitable for driver memory allocation as a `(base, size)` tuple.
--
2.34.1
* [PATCH v9 20/23] gpu: nova-core: mm: Add BAR1 user interface
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (18 preceding siblings ...)
2026-03-11 0:40 ` [PATCH v9 19/23] gpu: nova-core: Add BAR1 aperture type and size constant Joel Fernandes
@ 2026-03-11 0:40 ` Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 21/23] gpu: nova-core: mm: Add BAR1 memory management self-tests Joel Fernandes
` (2 subsequent siblings)
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:40 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add the BAR1 user interface for CPU access to GPU video memory through
the BAR1 aperture.
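The offset translation at the heart of the interface can be modelled in plain Rust (a sketch with stand-in types, not the driver code): a mapping-relative offset is bounds-checked against the mapping size, then shifted by the mapping's base VFN to yield a BAR1 aperture offset:

```rust
const PAGE_SIZE: usize = 4096;

/// Minimal stand-in for the patch's `BarAccess`: the mapping's start VFN and
/// page count suffice to translate a mapping-relative offset into a BAR1
/// aperture offset (BAR1 VA 0 corresponds to aperture offset 0).
struct Mapping {
    vfn_start: usize,
    num_pages: usize,
}

impl Mapping {
    fn size(&self) -> usize {
        self.num_pages * PAGE_SIZE
    }

    /// Model of `bar_offset()`: reject out-of-range offsets, then add the
    /// mapping base (vfn_start * PAGE_SIZE) with overflow checks.
    fn bar_offset(&self, offset: usize) -> Result<usize, ()> {
        if offset >= self.size() {
            return Err(());
        }
        let base = self.vfn_start.checked_mul(PAGE_SIZE).ok_or(())?;
        base.checked_add(offset).ok_or(())
    }
}

fn main() {
    // A 2-page mapping whose VA starts at VFN 3 (aperture offset 0x3000).
    let m = Mapping { vfn_start: 3, num_pages: 2 };
    assert_eq!(m.bar_offset(0x100), Ok(0x3100));
    assert_eq!(m.bar_offset(0x1FFF), Ok(0x4FFF));
    // One byte past the mapping is rejected before it ever reaches the BAR.
    assert!(m.bar_offset(0x2000).is_err());
    println!("ok");
}
```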
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/driver.rs | 1 -
drivers/gpu/nova-core/gpu.rs | 17 ++-
drivers/gpu/nova-core/gsp/commands.rs | 1 -
drivers/gpu/nova-core/mm.rs | 1 +
drivers/gpu/nova-core/mm/bar_user.rs | 156 ++++++++++++++++++++++++++
5 files changed, 173 insertions(+), 3 deletions(-)
create mode 100644 drivers/gpu/nova-core/mm/bar_user.rs
diff --git a/drivers/gpu/nova-core/driver.rs b/drivers/gpu/nova-core/driver.rs
index b4311adf4cef..3bc264a099de 100644
--- a/drivers/gpu/nova-core/driver.rs
+++ b/drivers/gpu/nova-core/driver.rs
@@ -51,7 +51,6 @@ pub(crate) struct NovaCore {
const GPU_DMA_BITS: u32 = 47;
pub(crate) type Bar0 = pci::Bar<BAR0_SIZE>;
-#[expect(dead_code)]
pub(crate) type Bar1 = pci::Bar<BAR1_SIZE>;
kernel::pci_device_table!(
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index 32266480bb0f..efff76313b89 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -24,7 +24,12 @@
commands::GetGspStaticInfoReply,
Gsp, //
},
- mm::GpuMm,
+ mm::{
+ bar_user::BarUser,
+ pagetable::MmuVersion,
+ GpuMm,
+ VramAddress, //
+ },
regs,
};
@@ -263,6 +268,8 @@ pub(crate) struct Gpu {
gsp: Gsp,
/// Static GPU information from GSP.
gsp_static_info: GetGspStaticInfoReply,
+ /// BAR1 user interface for CPU access to GPU virtual memory.
+ bar_user: BarUser,
}
impl Gpu {
@@ -320,6 +327,14 @@ pub(crate) fn new<'a>(
}, pramin_vram_region)?
},
+ // Create BAR1 user interface for CPU access to GPU virtual memory.
+ bar_user: {
+ let pdb_addr = VramAddress::new(gsp_static_info.bar1_pde_base);
+ let mmu_version = MmuVersion::from(spec.chipset.arch());
+ let bar1_size = pdev.resource_len(1)?;
+ BarUser::new(pdb_addr, mmu_version, bar1_size)?
+ },
+
bar: devres_bar,
})
}
diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
index 1901c8928ab8..bfbe7bb05755 100644
--- a/drivers/gpu/nova-core/gsp/commands.rs
+++ b/drivers/gpu/nova-core/gsp/commands.rs
@@ -191,7 +191,6 @@ fn init(&self) -> impl Init<Self::Command, Self::InitError> {
pub(crate) struct GetGspStaticInfoReply {
gpu_name: [u8; 64],
/// BAR1 Page Directory Entry base address.
- #[expect(dead_code)]
pub(crate) bar1_pde_base: u64,
/// Usable FB (VRAM) region for driver memory allocation.
pub(crate) usable_fb_region: Range<u64>,
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index 6e58f597fadd..c053d4f3b26c 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -4,6 +4,7 @@
#![expect(dead_code)]
+pub(crate) mod bar_user;
pub(crate) mod pagetable;
pub(crate) mod pramin;
pub(crate) mod tlb;
diff --git a/drivers/gpu/nova-core/mm/bar_user.rs b/drivers/gpu/nova-core/mm/bar_user.rs
new file mode 100644
index 000000000000..0d083f3e72c2
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/bar_user.rs
@@ -0,0 +1,156 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! BAR1 user interface for CPU access to GPU virtual memory. Used for USERD
+//! during GPU work submission, and by applications to access GPU buffers via
+//! mmap().
+
+use kernel::{
+ io::Io,
+ prelude::*, //
+};
+
+use crate::{
+ driver::Bar1,
+ mm::{
+ pagetable::MmuVersion,
+ vmm::{
+ MappedRange,
+ Vmm, //
+ },
+ GpuMm,
+ Pfn,
+ Vfn,
+ VirtualAddress,
+ VramAddress,
+ PAGE_SIZE, //
+ },
+ num::IntoSafeCast,
+};
+
+/// BAR1 user interface for virtual memory mappings.
+///
+/// Owns a VMM instance with virtual address tracking and provides
+/// BAR1-specific mapping and cleanup operations.
+pub(crate) struct BarUser {
+ vmm: Vmm,
+}
+
+impl BarUser {
+ /// Create a new [`BarUser`] with virtual address tracking.
+ pub(crate) fn new(
+ pdb_addr: VramAddress,
+ mmu_version: MmuVersion,
+ va_size: u64,
+ ) -> Result<Self> {
+ Ok(Self {
+ vmm: Vmm::new(pdb_addr, mmu_version, va_size)?,
+ })
+ }
+
+ /// Map physical pages to a contiguous BAR1 virtual range.
+ pub(crate) fn map<'a>(
+ &'a mut self,
+ mm: &'a GpuMm,
+ bar: &'a Bar1,
+ pfns: &[Pfn],
+ writable: bool,
+ ) -> Result<BarAccess<'a>> {
+ if pfns.is_empty() {
+ return Err(EINVAL);
+ }
+
+ let mapped = self.vmm.map_pages(mm, pfns, None, writable)?;
+
+ Ok(BarAccess {
+ vmm: &mut self.vmm,
+ mm,
+ bar,
+ mapped: Some(mapped),
+ })
+ }
+}
+
+/// Access object for a mapped BAR1 region.
+///
+/// Wraps a [`MappedRange`] and provides BAR1 access. When dropped,
+/// unmaps pages and releases the VA range (by passing the range to
+/// [`Vmm::unmap_pages()`], which consumes it).
+pub(crate) struct BarAccess<'a> {
+ vmm: &'a mut Vmm,
+ mm: &'a GpuMm,
+ bar: &'a Bar1,
+ /// Needs to be an `Option` so that `Drop` can `take()` it and pass it by
+ /// value to [`Vmm::unmap_pages()`], which consumes it.
+ mapped: Option<MappedRange>,
+}
+
+impl<'a> BarAccess<'a> {
+ /// Returns the active mapping.
+ fn mapped(&self) -> &MappedRange {
+ // `mapped` is only `None` after `take()` in `Drop`; accessors are
+ // never called from within `Drop`, so unwrap() never panics.
+ self.mapped.as_ref().unwrap()
+ }
+
+ /// Get the base virtual address of this mapping.
+ pub(crate) fn base(&self) -> VirtualAddress {
+ VirtualAddress::from(self.mapped().vfn_start)
+ }
+
+ /// Get the total size of the mapped region in bytes.
+ pub(crate) fn size(&self) -> usize {
+ self.mapped().num_pages * PAGE_SIZE
+ }
+
+ /// Get the starting virtual frame number.
+ pub(crate) fn vfn_start(&self) -> Vfn {
+ self.mapped().vfn_start
+ }
+
+ /// Get the number of pages in this mapping.
+ pub(crate) fn num_pages(&self) -> usize {
+ self.mapped().num_pages
+ }
+
+ /// Translate an offset within this mapping to a BAR1 aperture offset.
+ fn bar_offset(&self, offset: usize) -> Result<usize> {
+ if offset >= self.size() {
+ return Err(EINVAL);
+ }
+
+ let base_vfn: usize = self.mapped().vfn_start.raw().into_safe_cast();
+ let base = base_vfn.checked_mul(PAGE_SIZE).ok_or(EOVERFLOW)?;
+ base.checked_add(offset).ok_or(EOVERFLOW)
+ }
+
+ // Fallible accessors with runtime bounds checking.
+
+ /// Read a 32-bit value at the given offset.
+ pub(crate) fn try_read32(&self, offset: usize) -> Result<u32> {
+ self.bar.try_read32(self.bar_offset(offset)?)
+ }
+
+ /// Write a 32-bit value at the given offset.
+ pub(crate) fn try_write32(&self, value: u32, offset: usize) -> Result {
+ self.bar.try_write32(value, self.bar_offset(offset)?)
+ }
+
+ /// Read a 64-bit value at the given offset.
+ pub(crate) fn try_read64(&self, offset: usize) -> Result<u64> {
+ self.bar.try_read64(self.bar_offset(offset)?)
+ }
+
+ /// Write a 64-bit value at the given offset.
+ pub(crate) fn try_write64(&self, value: u64, offset: usize) -> Result {
+ self.bar.try_write64(value, self.bar_offset(offset)?)
+ }
+}
+
+impl Drop for BarAccess<'_> {
+ fn drop(&mut self) {
+ if let Some(mapped) = self.mapped.take() {
+ if self.vmm.unmap_pages(self.mm, mapped).is_err() {
+ kernel::pr_warn_once!("BarAccess: unmap_pages failed.\n");
+ }
+ }
+ }
+}
--
2.34.1
* [PATCH v9 21/23] gpu: nova-core: mm: Add BAR1 memory management self-tests
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (19 preceding siblings ...)
2026-03-11 0:40 ` [PATCH v9 20/23] gpu: nova-core: mm: Add BAR1 user interface Joel Fernandes
@ 2026-03-11 0:40 ` Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 22/23] gpu: nova-core: mm: Add PRAMIN aperture self-tests Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 23/23] gpu: nova-core: Use runtime BAR1 size instead of hardcoded 256MB Joel Fernandes
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:40 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add self-tests for BAR1 access during driver probe when
CONFIG_NOVA_MM_SELFTESTS is enabled (disabled by default). This exercises the
Vmm, the GPU buddy allocator, and the BAR1 region, all of which must function
correctly for the tests to pass.
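To make the expected split shape in the range-constrained test concrete, here is a small standalone model (not the kernel's gpu_buddy code) that decomposes each free run around the punched hole into maximal naturally aligned power-of-two blocks, reproducing the {16K, 4K, 8K, 4K} expectation:

```rust
/// Decompose a free run [start, end) into maximal naturally-aligned
/// power-of-two blocks - the shape a buddy allocator hands back when it must
/// satisfy a range-constrained request around a hole. Model only, for
/// illustrating the self-test's expectation.
fn buddy_blocks(mut start: u64, end: u64) -> Vec<u64> {
    let mut sizes = Vec::new();
    while start < end {
        // Largest power of two that `start` is aligned to (cap for start == 0).
        let align = if start == 0 { u64::MAX } else { start & start.wrapping_neg() };
        // Largest power of two that still fits in the remaining length.
        let fit = 1u64 << (63 - (end - start).leading_zeros());
        let block = align.min(fit);
        sizes.push(block);
        start += block;
    }
    sizes
}

fn main() {
    // Base = 0x10000; hole punched at [0x14000, 0x15000). The two free runs
    // around the hole within [base, base + 36K):
    let run1 = buddy_blocks(0x10000, 0x14000);
    let run2 = buddy_blocks(0x15000, 0x19000);
    assert_eq!(run1, vec![0x4000]); // one 16K block
    assert_eq!(run2, vec![0x1000, 0x2000, 0x1000]); // 4K + 8K + 4K
    // Aggregate check, as in Test 3: the blocks sum to the 32K request.
    assert_eq!(run1.iter().sum::<u64>() + run2.iter().sum::<u64>(), 0x8000);
    println!("ok");
}
```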
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/Kconfig | 10 ++
drivers/gpu/nova-core/driver.rs | 2 +
drivers/gpu/nova-core/gpu.rs | 38 ++++
drivers/gpu/nova-core/mm/bar_user.rs | 256 +++++++++++++++++++++++++++
4 files changed, 306 insertions(+)
diff --git a/drivers/gpu/nova-core/Kconfig b/drivers/gpu/nova-core/Kconfig
index 6513007bf66f..35de55aabcfc 100644
--- a/drivers/gpu/nova-core/Kconfig
+++ b/drivers/gpu/nova-core/Kconfig
@@ -15,3 +15,13 @@ config NOVA_CORE
This driver is work in progress and may not be functional.
If M is selected, the module will be called nova_core.
+
+config NOVA_MM_SELFTESTS
+ bool "Memory management self-tests"
+ depends on NOVA_CORE
+ help
+ Enable self-tests for the memory management subsystem. When enabled,
+ tests are run during GPU probe to verify PRAMIN aperture access,
+ page table walking, and BAR1 virtual memory mapping functionality.
+
+	  This is a testing option and is disabled by default.
diff --git a/drivers/gpu/nova-core/driver.rs b/drivers/gpu/nova-core/driver.rs
index 3bc264a099de..b1aafaff0cee 100644
--- a/drivers/gpu/nova-core/driver.rs
+++ b/drivers/gpu/nova-core/driver.rs
@@ -101,6 +101,8 @@ fn probe(pdev: &pci::Device<Core>, _info: &Self::IdInfo) -> impl PinInit<Self, E
Ok(try_pin_init!(Self {
gpu <- Gpu::new(pdev, bar.clone(), bar.access(pdev.as_ref())?),
+ // Run optional GPU selftests.
+ _: { gpu.run_selftests(pdev)? },
_reg <- auxiliary::Registration::new(
pdev.as_ref(),
c"nova-drm",
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index efff76313b89..022b156de0da 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -349,4 +349,42 @@ pub(crate) fn unbind(&self, dev: &device::Device<device::Core>) {
.inspect(|bar| self.sysmem_flush.unregister(bar))
.is_err());
}
+
+ /// Run selftests on the constructed [`Gpu`].
+ pub(crate) fn run_selftests(
+ mut self: Pin<&mut Self>,
+ pdev: &pci::Device<device::Bound>,
+ ) -> Result {
+ self.as_mut().run_mm_selftests(pdev)?;
+ Ok(())
+ }
+
+ #[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+ fn run_mm_selftests(self: Pin<&mut Self>, pdev: &pci::Device<device::Bound>) -> Result {
+ use crate::driver::BAR1_SIZE;
+
+ let mmu_version = MmuVersion::from(self.spec.chipset.arch());
+
+ // BAR1 self-tests.
+ let bar1 = Arc::pin_init(
+ pdev.iomap_region_sized::<BAR1_SIZE>(1, c"nova-core/bar1"),
+ GFP_KERNEL,
+ )?;
+ let bar1_access = bar1.access(pdev.as_ref())?;
+
+ crate::mm::bar_user::run_self_test(
+ pdev.as_ref(),
+ &self.mm,
+ bar1_access,
+ self.gsp_static_info.bar1_pde_base,
+ mmu_version,
+ )?;
+
+ Ok(())
+ }
+
+ #[cfg(not(CONFIG_NOVA_MM_SELFTESTS))]
+ fn run_mm_selftests(self: Pin<&mut Self>, _pdev: &pci::Device<device::Bound>) -> Result {
+ Ok(())
+ }
}
diff --git a/drivers/gpu/nova-core/mm/bar_user.rs b/drivers/gpu/nova-core/mm/bar_user.rs
index 0d083f3e72c2..d2a2e0ad097a 100644
--- a/drivers/gpu/nova-core/mm/bar_user.rs
+++ b/drivers/gpu/nova-core/mm/bar_user.rs
@@ -154,3 +154,259 @@ fn drop(&mut self) {
}
}
}
+
+/// Check if the PDB has valid, VRAM-backed page tables.
+///
+/// Returns `Err(ENOENT)` if page tables are missing or not in VRAM.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+fn check_valid_page_tables(mm: &GpuMm, pdb_addr: VramAddress) -> Result {
+ use crate::mm::pagetable::{
+ ver2::Pde,
+ AperturePde, //
+ };
+
+ let mut window = mm.pramin().window()?;
+ let pdb_entry_raw = window.try_read64(pdb_addr.raw())?;
+ let pdb_entry = Pde::new(pdb_entry_raw);
+
+ if !pdb_entry.is_valid() {
+ return Err(ENOENT);
+ }
+
+ if pdb_entry.aperture() != AperturePde::VideoMemory {
+ return Err(ENOENT);
+ }
+
+ Ok(())
+}
+
+/// Run MM subsystem self-tests during probe.
+///
+/// Tests page table infrastructure and `BAR1` MMIO access using the `BAR1`
+/// address space. Uses the `GpuMm`'s buddy allocator to allocate page tables
+/// and test pages as needed.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+pub(crate) fn run_self_test(
+ dev: &kernel::device::Device,
+ mm: &GpuMm,
+ bar1: &crate::driver::Bar1,
+ bar1_pdb: u64,
+ mmu_version: MmuVersion,
+) -> Result {
+ use crate::mm::{
+ vmm::Vmm,
+ PAGE_SIZE, //
+ };
+ use kernel::gpu::buddy::{GpuBuddyAllocFlags, GpuBuddyAllocMode};
+ use kernel::ptr::Alignment;
+ use kernel::sizes::{
+ SZ_16K,
+ SZ_32K,
+ SZ_4K,
+ SZ_64K, //
+ };
+
+ // Self-tests only support MMU v2 for now.
+ if mmu_version != MmuVersion::V2 {
+ dev_info!(
+ dev,
+ "MM: Skipping self-tests for MMU {:?} (only V2 supported)\n",
+ mmu_version
+ );
+ return Ok(());
+ }
+
+ // Test patterns.
+ const PATTERN_PRAMIN: u32 = 0xDEAD_BEEF;
+ const PATTERN_BAR1: u32 = 0xCAFE_BABE;
+
+ dev_info!(dev, "MM: Starting self-test...\n");
+
+ let pdb_addr = VramAddress::new(bar1_pdb);
+
+ // Check if initial page tables are in VRAM.
+ if check_valid_page_tables(mm, pdb_addr).is_err() {
+ dev_info!(dev, "MM: Self-test SKIPPED - no valid VRAM page tables\n");
+ return Ok(());
+ }
+
+ // Set up a test page from the buddy allocator.
+ let test_page_blocks = KBox::pin_init(
+ mm.buddy().alloc_blocks(
+ GpuBuddyAllocMode::Simple,
+ SZ_4K,
+ Alignment::new::<SZ_4K>(),
+ GpuBuddyAllocFlags::default(),
+ ),
+ GFP_KERNEL,
+ )?;
+ let test_vram_offset = test_page_blocks.iter().next().ok_or(ENOMEM)?.offset();
+ let test_vram = VramAddress::new(test_vram_offset);
+ let test_pfn = Pfn::from(test_vram);
+
+ // Create a VMM of size 64K to track virtual memory mappings.
+ let mut vmm = Vmm::new(pdb_addr, MmuVersion::V2, SZ_64K.into_safe_cast())?;
+
+ // Create a test mapping.
+ let mapped = vmm.map_pages(mm, &[test_pfn], None, true)?;
+ let test_vfn = mapped.vfn_start;
+
+ // Pre-compute test addresses for the PRAMIN to BAR1 read test.
+ let vfn_offset: usize = test_vfn.raw().into_safe_cast();
+ let bar1_base_offset = vfn_offset.checked_mul(PAGE_SIZE).ok_or(EOVERFLOW)?;
+ let bar1_read_offset: usize = bar1_base_offset + 0x100;
+ let vram_read_addr: usize = test_vram.raw() + 0x100;
+
+ // Test 1: Write via PRAMIN, read via BAR1.
+ {
+ let mut window = mm.pramin().window()?;
+ window.try_write32(vram_read_addr, PATTERN_PRAMIN)?;
+ }
+
+ // Read back via BAR1 aperture.
+ let bar1_value = bar1.try_read32(bar1_read_offset)?;
+
+ let test1_passed = if bar1_value == PATTERN_PRAMIN {
+ true
+ } else {
+ dev_err!(
+ dev,
+ "MM: Test 1 FAILED - Expected {:#010x}, got {:#010x}\n",
+ PATTERN_PRAMIN,
+ bar1_value
+ );
+ false
+ };
+
+ // Cleanup - invalidate PTE.
+ vmm.unmap_pages(mm, mapped)?;
+
+ // Test 2: Two-phase prepare/execute API.
+ let prepared = vmm.prepare_map(mm, 1, None)?;
+ let mapped2 = vmm.execute_map(mm, prepared, &[test_pfn], true)?;
+ let readback = vmm.read_mapping(mm, mapped2.vfn_start)?;
+ let test2_passed = if readback == Some(test_pfn) {
+ true
+ } else {
+ dev_err!(dev, "MM: Test 2 FAILED - Two-phase map readback mismatch\n");
+ false
+ };
+ vmm.unmap_pages(mm, mapped2)?;
+
+ // Test 3: Range-constrained allocation with a hole — exercises block.size()-driven
+ // BAR1 mapping. A 4K hole is punched at base+16K, then a single 32K allocation
+ // is requested within [base, base+36K). The buddy allocator must split around the
+ // hole, returning multiple blocks (expected: {16K, 4K, 8K, 4K} = 32K total).
+ // Each block is mapped into BAR1 and verified via PRAMIN read-back.
+ //
+ // Address layout (base = 0x10000):
+ // [ 16K ] [HOLE 4K] [4K] [ 8K ] [4K]
+ // 0x10000 0x14000 0x15000 0x16000 0x18000 0x19000
+ let range_base: u64 = SZ_64K.into_safe_cast();
+ let sz_4k: u64 = SZ_4K.into_safe_cast();
+ let sz_16k: u64 = SZ_16K.into_safe_cast();
+ let sz_32k_4k: u64 = (SZ_32K + SZ_4K).into_safe_cast();
+
+ // Punch a 4K hole at base+16K so the subsequent 32K allocation must split.
+ let _hole = KBox::pin_init(
+ mm.buddy().alloc_blocks(
+ GpuBuddyAllocMode::Range {
+ start: range_base + sz_16k,
+ end: range_base + sz_16k + sz_4k,
+ },
+ SZ_4K,
+ Alignment::new::<SZ_4K>(),
+ GpuBuddyAllocFlags::default(),
+ ),
+ GFP_KERNEL,
+ )?;
+
+ // Allocate 32K within [base, base+36K). The hole forces the allocator to return
+ // split blocks whose sizes are determined by buddy alignment.
+ let blocks = KBox::pin_init(
+ mm.buddy().alloc_blocks(
+ GpuBuddyAllocMode::Range {
+ start: range_base,
+ end: range_base + sz_32k_4k,
+ },
+ SZ_32K,
+ Alignment::new::<SZ_4K>(),
+ GpuBuddyAllocFlags::default(),
+ ),
+ GFP_KERNEL,
+ )?;
+
+ let mut test3_passed = true;
+ let mut total_size = 0usize;
+
+ for block in blocks.iter() {
+ total_size += block.size();
+
+ // Map all pages of this block.
+ let page_size: u64 = PAGE_SIZE.into_safe_cast();
+ let num_pages = block.size() / PAGE_SIZE;
+
+ let mut pfns = KVec::new();
+ for j in 0..num_pages {
+ let j_u64: u64 = j.into_safe_cast();
+ pfns.push(
+ Pfn::from(VramAddress::new(
+ block.offset() + j_u64.checked_mul(page_size).ok_or(EOVERFLOW)?,
+ )),
+ GFP_KERNEL,
+ )?;
+ }
+
+ let mapped = vmm.map_pages(mm, &pfns, None, true)?;
+ let bar1_base_vfn: usize = mapped.vfn_start.raw().into_safe_cast();
+ let bar1_base = bar1_base_vfn.checked_mul(PAGE_SIZE).ok_or(EOVERFLOW)?;
+
+ for j in 0..num_pages {
+ let page_bar1_off = bar1_base + j * PAGE_SIZE;
+ let j_u64: u64 = j.into_safe_cast();
+ let page_phys = block.offset()
+ + j_u64
+ .checked_mul(PAGE_SIZE.into_safe_cast())
+ .ok_or(EOVERFLOW)?;
+
+ bar1.try_write32(PATTERN_BAR1, page_bar1_off)?;
+
+ let pramin_val = {
+ let mut window = mm.pramin().window()?;
+ window.try_read32(page_phys.into_safe_cast())?
+ };
+
+ if pramin_val != PATTERN_BAR1 {
+ dev_err!(
+ dev,
+ "MM: Test 3 FAILED block offset {:#x} page {} (val={:#x})\n",
+ block.offset(),
+ j,
+ pramin_val
+ );
+ test3_passed = false;
+ }
+ }
+
+ vmm.unmap_pages(mm, mapped)?;
+ }
+
+ // Verify aggregate: all returned block sizes must sum to allocation size.
+ if total_size != SZ_32K {
+ dev_err!(
+ dev,
+ "MM: Test 3 FAILED - total size {} != expected {}\n",
+ total_size,
+ SZ_32K
+ );
+ test3_passed = false;
+ }
+
+ if test1_passed && test2_passed && test3_passed {
+ dev_info!(dev, "MM: All self-tests PASSED\n");
+ Ok(())
+ } else {
+ dev_err!(dev, "MM: Self-tests FAILED\n");
+ Err(EIO)
+ }
+}
--
2.34.1
* [PATCH v9 22/23] gpu: nova-core: mm: Add PRAMIN aperture self-tests
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (20 preceding siblings ...)
2026-03-11 0:40 ` [PATCH v9 21/23] gpu: nova-core: mm: Add BAR1 memory management self-tests Joel Fernandes
@ 2026-03-11 0:40 ` Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 23/23] gpu: nova-core: Use runtime BAR1 size instead of hardcoded 256MB Joel Fernandes
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:40 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
Add self-tests for the PRAMIN aperture mechanism to verify correct
operation during GPU probe. The tests validate various alignment
requirements and corner cases.
The tests are disabled by default and gated behind CONFIG_NOVA_MM_SELFTESTS.
When enabled, tests run after GSP boot during probe.
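The window-repositioning behaviour these tests exercise can be sketched with a standalone model; the 1MB window size is an assumption taken from the test comments ("different 1MB region"), and `compute_window()` here is a stand-in, not the driver's implementation:

```rust
/// Model of PRAMIN window positioning. The 1MB sliding window is assumed
/// from the self-test comments; the real compute_window() may differ.
const WINDOW_SIZE: u64 = 1 << 20;

/// Split a VRAM address into (window base, offset within the window).
fn compute_window(addr: u64) -> (u64, u64) {
    (addr & !(WINDOW_SIZE - 1), addr & (WINDOW_SIZE - 1))
}

fn main() {
    let base = 0x1000u64;
    let (win_a, off_a) = compute_window(base);
    let (win_b, off_b) = compute_window(base + 0x200000); // base + 2MB
    // Same in-window offset, but a different window base: the aperture must
    // be repositioned between the two accesses, which is what
    // test_window_reposition() exercises.
    assert_eq!(off_a, off_b);
    assert_ne!(win_a, win_b);
    assert_eq!(win_b - win_a, 2 * WINDOW_SIZE);
    println!("ok");
}
```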
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/gpu.rs | 3 +
drivers/gpu/nova-core/mm/pramin.rs | 209 +++++++++++++++++++++++++++++
2 files changed, 212 insertions(+)
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index 022b156de0da..5f4199e41d16 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -365,6 +365,9 @@ fn run_mm_selftests(self: Pin<&mut Self>, pdev: &pci::Device<device::Bound>) ->
let mmu_version = MmuVersion::from(self.spec.chipset.arch());
+ // PRAMIN aperture self-tests.
+ crate::mm::pramin::run_self_test(pdev.as_ref(), self.mm.pramin(), self.spec.chipset)?;
+
// BAR1 self-tests.
let bar1 = Arc::pin_init(
pdev.iomap_region_sized::<BAR1_SIZE>(1, c"nova-core/bar1"),
diff --git a/drivers/gpu/nova-core/mm/pramin.rs b/drivers/gpu/nova-core/mm/pramin.rs
index 707794f49add..bf87eb84805f 100644
--- a/drivers/gpu/nova-core/mm/pramin.rs
+++ b/drivers/gpu/nova-core/mm/pramin.rs
@@ -195,6 +195,11 @@ pub(crate) fn new(
}))
}
+ /// Returns the valid VRAM region for this PRAMIN instance.
+ pub(crate) fn vram_region(&self) -> &Range<u64> {
+ &self.vram_region
+ }
+
/// Acquire exclusive PRAMIN access.
///
/// Returns a [`PraminWindow`] guard that provides VRAM read/write accessors.
@@ -291,3 +296,207 @@ fn compute_window(
define_pramin_write!(try_write32, u32);
define_pramin_write!(try_write64, u64);
}
+
+/// Offset within the VRAM region to use as the self-test area.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+const SELFTEST_REGION_OFFSET: usize = 0x1000;
+
+/// Test read/write at byte-aligned locations.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+fn test_byte_readwrite(
+ dev: &kernel::device::Device,
+ win: &mut PraminWindow<'_>,
+ base: usize,
+) -> Result {
+ for i in 0u8..4 {
+ let offset = base + 1 + usize::from(i);
+ let val = 0xA0 + i;
+ win.try_write8(offset, val)?;
+ let read_val = win.try_read8(offset)?;
+ if read_val != val {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - offset {:#x}: wrote {:#x}, read {:#x}\n",
+ offset,
+ val,
+ read_val
+ );
+ return Err(EIO);
+ }
+ }
+ Ok(())
+}
+
+/// Test writing a `u32` and reading back as individual `u8`s.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+fn test_u32_as_bytes(
+ dev: &kernel::device::Device,
+ win: &mut PraminWindow<'_>,
+ base: usize,
+) -> Result {
+ let offset = base + 0x10;
+ let val: u32 = 0xDEADBEEF;
+ win.try_write32(offset, val)?;
+
+ // Read back as individual bytes (little-endian: EF BE AD DE).
+ let expected_bytes: [u8; 4] = [0xEF, 0xBE, 0xAD, 0xDE];
+ for (i, &expected) in expected_bytes.iter().enumerate() {
+ let read_val = win.try_read8(offset + i)?;
+ if read_val != expected {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - offset {:#x}: expected {:#x}, read {:#x}\n",
+ offset + i,
+ expected,
+ read_val
+ );
+ return Err(EIO);
+ }
+ }
+ Ok(())
+}
+
+/// Test window repositioning across 1MB boundaries.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+fn test_window_reposition(
+ dev: &kernel::device::Device,
+ win: &mut PraminWindow<'_>,
+ base: usize,
+) -> Result {
+ let offset_a: usize = base;
+ let offset_b: usize = base + 0x200000; // base + 2MB (different 1MB region).
+ let val_a: u32 = 0x11111111;
+ let val_b: u32 = 0x22222222;
+
+ win.try_write32(offset_a, val_a)?;
+ win.try_write32(offset_b, val_b)?;
+
+ let read_b = win.try_read32(offset_b)?;
+ if read_b != val_b {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - offset {:#x}: expected {:#x}, read {:#x}\n",
+ offset_b,
+ val_b,
+ read_b
+ );
+ return Err(EIO);
+ }
+
+ let read_a = win.try_read32(offset_a)?;
+ if read_a != val_a {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - offset {:#x}: expected {:#x}, read {:#x}\n",
+ offset_a,
+ val_a,
+ read_a
+ );
+ return Err(EIO);
+ }
+ Ok(())
+}
+
+/// Test that offsets outside the VRAM region are rejected.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+fn test_invalid_offset(
+ dev: &kernel::device::Device,
+ win: &mut PraminWindow<'_>,
+ vram_end: u64,
+) -> Result {
+ let invalid_offset: usize = vram_end.into_safe_cast();
+ let result = win.try_read32(invalid_offset);
+ if result.is_ok() {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - read at invalid offset {:#x} should have failed\n",
+ invalid_offset
+ );
+ return Err(EIO);
+ }
+ Ok(())
+}
+
+/// Test that misaligned multi-byte accesses are rejected.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+fn test_misaligned_access(
+ dev: &kernel::device::Device,
+ win: &mut PraminWindow<'_>,
+ base: usize,
+) -> Result {
+ // `u16` at odd offset (not 2-byte aligned).
+ let offset_u16 = base + 0x21;
+ if win.try_write16(offset_u16, 0xABCD).is_ok() {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - misaligned u16 write at {:#x} should have failed\n",
+ offset_u16
+ );
+ return Err(EIO);
+ }
+
+ // `u32` at 2-byte-aligned (not 4-byte-aligned) offset.
+ let offset_u32 = base + 0x32;
+ if win.try_write32(offset_u32, 0x12345678).is_ok() {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - misaligned u32 write at {:#x} should have failed\n",
+ offset_u32
+ );
+ return Err(EIO);
+ }
+
+ // `u64` read at 4-byte-aligned (not 8-byte-aligned) offset.
+ let offset_u64 = base + 0x44;
+ if win.try_read64(offset_u64).is_ok() {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - misaligned u64 read at {:#x} should have failed\n",
+ offset_u64
+ );
+ return Err(EIO);
+ }
+ Ok(())
+}
+
+/// Run PRAMIN self-tests during boot if self-tests are enabled.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+pub(crate) fn run_self_test(
+ dev: &kernel::device::Device,
+ pramin: &Pramin,
+ chipset: crate::gpu::Chipset,
+) -> Result {
+ use crate::gpu::Architecture;
+
+ // PRAMIN uses NV_PBUS_BAR0_WINDOW which is only available on pre-Hopper GPUs.
+ // Hopper+ uses NV_XAL_EP_BAR0_WINDOW instead, requiring a separate HAL that
+ // has not been implemented yet.
+ if !matches!(
+ chipset.arch(),
+ Architecture::Turing | Architecture::Ampere | Architecture::Ada
+ ) {
+ dev_info!(
+ dev,
+ "PRAMIN: Skipping self-tests for {:?} (only pre-Hopper supported)\n",
+ chipset
+ );
+ return Ok(());
+ }
+
+ dev_info!(dev, "PRAMIN: Starting self-test...\n");
+
+ let vram_region = pramin.vram_region();
+ let base: usize = vram_region.start.into_safe_cast();
+ let base = base + SELFTEST_REGION_OFFSET;
+ let vram_end = vram_region.end;
+ let mut win = pramin.window()?;
+
+ test_byte_readwrite(dev, &mut win, base)?;
+ test_u32_as_bytes(dev, &mut win, base)?;
+ test_window_reposition(dev, &mut win, base)?;
+ test_invalid_offset(dev, &mut win, vram_end)?;
+ test_misaligned_access(dev, &mut win, base)?;
+
+ dev_info!(dev, "PRAMIN: All self-tests PASSED\n");
+ Ok(())
+}
--
2.34.1
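Aside: the little-endian byte pattern that `test_u32_as_bytes` expects can be
sanity-checked with plain Rust integer methods (a standalone sketch, independent
of the patch and the kernel crate):

```rust
// Demonstrates the byte layout test_u32_as_bytes relies on: writing
// 0xDEADBEEF as a little-endian u32 stores the bytes EF BE AD DE.
fn le_bytes_of(val: u32) -> [u8; 4] {
    val.to_le_bytes()
}

fn main() {
    let bytes = le_bytes_of(0xDEADBEEF);
    assert_eq!(bytes, [0xEF, 0xBE, 0xAD, 0xDE]);
    // Reassembling the bytes restores the original value.
    assert_eq!(u32::from_le_bytes(bytes), 0xDEADBEEF);
}
```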
^ permalink raw reply related [flat|nested] 36+ messages in thread
* [PATCH v9 23/23] gpu: nova-core: Use runtime BAR1 size instead of hardcoded 256MB
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
` (21 preceding siblings ...)
2026-03-11 0:40 ` [PATCH v9 22/23] gpu: nova-core: mm: Add PRAMIN aperture self-tests Joel Fernandes
@ 2026-03-11 0:40 ` Joel Fernandes
22 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-11 0:40 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, Joel Fernandes
From: Zhi Wang <zhiw@nvidia.com>
Remove the hardcoded BAR1_SIZE = SZ_256M constant. On GPUs such as the L40,
the BAR1 aperture is larger than 256MB; a hardcoded size prevents the larger
aperture from being used and causes the mapping to fail.
Signed-off-by: Zhi Wang <zhiw@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/driver.rs | 8 ++------
drivers/gpu/nova-core/gpu.rs | 7 +------
2 files changed, 3 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/nova-core/driver.rs b/drivers/gpu/nova-core/driver.rs
index b1aafaff0cee..6f95f8672158 100644
--- a/drivers/gpu/nova-core/driver.rs
+++ b/drivers/gpu/nova-core/driver.rs
@@ -13,10 +13,7 @@
Vendor, //
},
prelude::*,
- sizes::{
- SZ_16M,
- SZ_256M, //
- },
+ sizes::SZ_16M,
sync::{
atomic::{
Atomic,
@@ -40,7 +37,6 @@ pub(crate) struct NovaCore {
}
const BAR0_SIZE: usize = SZ_16M;
-pub(crate) const BAR1_SIZE: usize = SZ_256M;
// For now we only support Ampere which can use up to 47-bit DMA addresses.
//
@@ -51,7 +47,7 @@ pub(crate) struct NovaCore {
const GPU_DMA_BITS: u32 = 47;
pub(crate) type Bar0 = pci::Bar<BAR0_SIZE>;
-pub(crate) type Bar1 = pci::Bar<BAR1_SIZE>;
+pub(crate) type Bar1 = pci::Bar;
kernel::pci_device_table!(
PCI_TABLE,
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index 5f4199e41d16..4d4040d56aba 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -361,18 +361,13 @@ pub(crate) fn run_selftests(
#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
fn run_mm_selftests(self: Pin<&mut Self>, pdev: &pci::Device<device::Bound>) -> Result {
- use crate::driver::BAR1_SIZE;
-
let mmu_version = MmuVersion::from(self.spec.chipset.arch());
// PRAMIN aperture self-tests.
crate::mm::pramin::run_self_test(pdev.as_ref(), self.mm.pramin(), self.spec.chipset)?;
// BAR1 self-tests.
- let bar1 = Arc::pin_init(
- pdev.iomap_region_sized::<BAR1_SIZE>(1, c"nova-core/bar1"),
- GFP_KERNEL,
- )?;
+ let bar1 = Arc::pin_init(pdev.iomap_region(1, c"nova-core/bar1"), GFP_KERNEL)?;
let bar1_access = bar1.access(pdev.as_ref())?;
crate::mm::bar_user::run_self_test(
--
2.34.1
^ permalink raw reply related [flat|nested] 36+ messages in thread

* Re: [PATCH v9 01/23] gpu: nova-core: Select GPU_BUDDY for VRAM allocation
2026-03-11 0:39 ` [PATCH v9 01/23] gpu: nova-core: Select GPU_BUDDY for VRAM allocation Joel Fernandes
@ 2026-03-12 6:34 ` Eliot Courtney
2026-03-16 13:17 ` Alexandre Courbot
1 sibling, 0 replies; 36+ messages in thread
From: Eliot Courtney @ 2026-03-12 6:34 UTC (permalink / raw)
To: Joel Fernandes, linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, dri-devel
On Wed Mar 11, 2026 at 9:39 AM JST, Joel Fernandes wrote:
> nova-core will use the GPU buddy allocator for physical VRAM management.
> Enable it in Kconfig.
>
> Cc: Nikola Djukic <ndjukic@nvidia.com>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
> ---
Reviewed-by: Eliot Courtney <ecourtney@nvidia.com>
* Re: [PATCH v9 02/23] gpu: nova-core: Kconfig: Sort select statements alphabetically
2026-03-11 0:39 ` [PATCH v9 02/23] gpu: nova-core: Kconfig: Sort select statements alphabetically Joel Fernandes
@ 2026-03-12 6:35 ` Eliot Courtney
2026-03-16 13:17 ` Alexandre Courbot
1 sibling, 0 replies; 36+ messages in thread
From: Eliot Courtney @ 2026-03-12 6:35 UTC (permalink / raw)
To: Joel Fernandes, linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, dri-devel
On Wed Mar 11, 2026 at 9:39 AM JST, Joel Fernandes wrote:
> Reorder the select statements in NOVA_CORE Kconfig to be in
> alphabetical order.
>
> Suggested-by: Danilo Krummrich <dakr@kernel.org>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
> ---
Reviewed-by: Eliot Courtney <ecourtney@nvidia.com>
* Re: [PATCH v9 03/23] gpu: nova-core: gsp: Return GspStaticInfo from boot()
2026-03-11 0:39 ` [PATCH v9 03/23] gpu: nova-core: gsp: Return GspStaticInfo from boot() Joel Fernandes
@ 2026-03-12 6:37 ` Eliot Courtney
0 siblings, 0 replies; 36+ messages in thread
From: Eliot Courtney @ 2026-03-12 6:37 UTC (permalink / raw)
To: Joel Fernandes, linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, dri-devel
On Wed Mar 11, 2026 at 9:39 AM JST, Joel Fernandes wrote:
> Refactor the GSP boot function to return only the GspStaticInfo,
> removing the FbLayout from the return tuple.
I think the commit message may need updating - `boot` doesn't return
FbLayout. And it returns `GetGspStaticInfoReply`, not `GspStaticInfo`.
Other than that,
Reviewed-by: Eliot Courtney <ecourtney@nvidia.com>
>
> @@ -126,7 +129,8 @@ fn run_fwsec_frts(
> /// user-space, patching them with signatures, and building firmware-specific intricate data
> /// structures that the GSP will use at runtime.
> ///
> - /// Upon return, the GSP is up and running, and its runtime object given as return value.
> + /// Upon return, the GSP is up and running, and static GPU information is returned.
> + ///
> pub(crate) fn boot(
> mut self: Pin<&mut Self>,
> pdev: &pci::Device<device::Bound>,
> @@ -134,7 +138,7 @@ pub(crate) fn boot(
> chipset: Chipset,
> gsp_falcon: &Falcon<Gsp>,
> sec2_falcon: &Falcon<Sec2>,
> - ) -> Result {
> + ) -> Result<GetGspStaticInfoReply> {
> let dev = pdev.as_ref();
>
> let bios = Vbios::new(dev, bar)?;
> @@ -225,6 +229,6 @@ pub(crate) fn boot(
> Err(e) => dev_warn!(pdev, "GPU name unavailable: {:?}\n", e),
> }
>
> - Ok(())
> + Ok(info)
> }
> }
* Re: [PATCH v9 04/23] gpu: nova-core: gsp: Extract usable FB region from GSP
2026-03-11 0:39 ` [PATCH v9 04/23] gpu: nova-core: gsp: Extract usable FB region from GSP Joel Fernandes
@ 2026-03-13 6:58 ` Eliot Courtney
2026-03-16 13:18 ` Alexandre Courbot
1 sibling, 0 replies; 36+ messages in thread
From: Eliot Courtney @ 2026-03-13 6:58 UTC (permalink / raw)
To: Joel Fernandes, linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Dave Airlie, Daniel Almeida, Koen Koning,
dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Alexandre Courbot, Andrea Righi, Andy Ritger, Zhi Wang,
Balbir Singh, Philipp Stanner, Elle Rhumsaa, alexeyi,
Eliot Courtney, joel, linux-doc, amd-gfx, intel-gfx, intel-xe,
linux-fbdev, dri-devel
On Wed Mar 11, 2026 at 9:39 AM JST, Joel Fernandes wrote:
> Add first_usable_fb_region() to GspStaticConfigInfo to extract the first
> usable FB region from GSP's fbRegionInfoParams. Usable regions are those
> that are not reserved or protected.
>
> The extracted region is stored in GetGspStaticInfoReply and exposed via
> usable_fb_region() API for use by the memory subsystem.
>
> Cc: Nikola Djukic <ndjukic@nvidia.com>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
> ---
> drivers/gpu/nova-core/gsp/commands.rs | 11 ++++++--
> drivers/gpu/nova-core/gsp/fw/commands.rs | 32 ++++++++++++++++++++++++
> 2 files changed, 41 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
> index 8f270eca33be..8d5780d9cace 100644
> --- a/drivers/gpu/nova-core/gsp/commands.rs
> +++ b/drivers/gpu/nova-core/gsp/commands.rs
> @@ -4,6 +4,7 @@
> array,
> convert::Infallible,
> ffi::FromBytesUntilNulError,
> + ops::Range,
> str::Utf8Error, //
> };
>
> @@ -186,22 +187,28 @@ fn init(&self) -> impl Init<Self::Command, Self::InitError> {
> }
> }
>
> -/// The reply from the GSP to the [`GetGspInfo`] command.
> +/// The reply from the GSP to the [`GetGspStaticInfo`] command.
> pub(crate) struct GetGspStaticInfoReply {
> gpu_name: [u8; 64],
> + /// Usable FB (VRAM) region for driver memory allocation.
> + #[expect(dead_code)]
> + pub(crate) usable_fb_region: Range<u64>,
> }
>
> impl MessageFromGsp for GetGspStaticInfoReply {
> const FUNCTION: MsgFunction = MsgFunction::GetGspStaticInfo;
> type Message = GspStaticConfigInfo;
> - type InitError = Infallible;
> + type InitError = Error;
>
> fn read(
> msg: &Self::Message,
> _sbuffer: &mut SBufferIter<array::IntoIter<&[u8], 2>>,
> ) -> Result<Self, Self::InitError> {
> + let (base, size) = msg.first_usable_fb_region().ok_or(ENODEV)?;
> +
> Ok(GetGspStaticInfoReply {
> gpu_name: msg.gpu_name_str(),
> + usable_fb_region: base..base.saturating_add(size),
We already return a Result here, so why not use checked_add?
`base..base.checked_add(size).ok_or(EOVERFLOW)?`
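For illustration (a standalone sketch, not kernel code), the two only
differ on overflow:

```rust
// saturating_add silently clamps to u64::MAX, while checked_add
// surfaces the overflow so the caller can turn it into an error.
fn end_saturating(base: u64, size: u64) -> u64 {
    base.saturating_add(size)
}

fn end_checked(base: u64, size: u64) -> Option<u64> {
    base.checked_add(size)
}

fn main() {
    // Normal case: both agree.
    assert_eq!(end_saturating(0x1000, 0x2000), 0x3000);
    assert_eq!(end_checked(0x1000, 0x2000), Some(0x3000));
    // Overflow: saturating clamps, checked reports the failure.
    assert_eq!(end_saturating(u64::MAX, 1), u64::MAX);
    assert_eq!(end_checked(u64::MAX, 1), None);
}
```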
> })
> }
> }
> diff --git a/drivers/gpu/nova-core/gsp/fw/commands.rs b/drivers/gpu/nova-core/gsp/fw/commands.rs
> index 67f44421fcc3..cef86cab8a12 100644
> --- a/drivers/gpu/nova-core/gsp/fw/commands.rs
> +++ b/drivers/gpu/nova-core/gsp/fw/commands.rs
> @@ -5,6 +5,7 @@
> use kernel::{device, pci};
>
> use crate::gsp::GSP_PAGE_SIZE;
> +use crate::num::IntoSafeCast;
>
> use super::bindings;
>
> @@ -115,6 +116,37 @@ impl GspStaticConfigInfo {
> pub(crate) fn gpu_name_str(&self) -> [u8; 64] {
> self.0.gpuNameString
> }
> +
> + /// Extract the first usable FB region from GSP firmware data.
> + ///
> + /// Returns the first region suitable for driver memory allocation as a `(base, size)` tuple.
> + /// Usable regions are those that:
> + /// - Are not reserved for firmware internal use.
> + /// - Are not protected (hardware-enforced access restrictions).
> + /// - Support compression (can use GPU memory compression for bandwidth).
> + /// - Support ISO (isochronous memory for display requiring guaranteed bandwidth).
Are the above conditions all required (AND) or any required (OR)?
Might be worth clarifying in the doc.
> + pub(crate) fn first_usable_fb_region(&self) -> Option<(u64, u64)> {
> + let fb_info = &self.0.fbRegionInfoParams;
> + for i in 0..fb_info.numFBRegions.into_safe_cast() {
> + if let Some(reg) = fb_info.fbRegion.get(i) {
> + // Skip malformed regions where limit < base.
Is it normal that it returns a bunch of broken regions?
> + if reg.limit < reg.base {
> + continue;
> + }
> +
> + // Filter: not reserved, not protected, supports compression and ISO.
> + if reg.reserved == 0
> + && reg.bProtected == 0
> + && reg.supportCompressed != 0
> + && reg.supportISO != 0
> + {
> + let size = reg.limit - reg.base + 1;
> + return Some((reg.base, size));
This is identifying a range, so how about returning Option<Range<u64>>
instead? It gets immediately converted into a range anyway.
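For illustration, building the range at the source might look like this
(a standalone sketch with illustrative names, ignoring the surrounding
struct plumbing):

```rust
use std::ops::Range;

// Returns the usable region as an exclusive-end Range, rejecting
// malformed input (limit < base) and overflow of limit + 1.
fn usable_region(base: u64, limit: u64) -> Option<Range<u64>> {
    if limit < base {
        return None;
    }
    Some(base..limit.checked_add(1)?)
}

fn main() {
    assert_eq!(usable_region(0x1000, 0x1FFF), Some(0x1000..0x2000));
    // Malformed region is filtered out.
    assert_eq!(usable_region(0x2000, 0x1000), None);
    // limit + 1 would overflow, so the region is rejected.
    assert_eq!(usable_region(0, u64::MAX), None);
}
```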
> + }
> + }
> + }
> + None
> + }
> }
>
> // SAFETY: Padding is explicit and will not contain uninitialized data.
* Re: [PATCH v9 01/23] gpu: nova-core: Select GPU_BUDDY for VRAM allocation
2026-03-11 0:39 ` [PATCH v9 01/23] gpu: nova-core: Select GPU_BUDDY for VRAM allocation Joel Fernandes
2026-03-12 6:34 ` Eliot Courtney
@ 2026-03-16 13:17 ` Alexandre Courbot
2026-03-16 16:28 ` Joel Fernandes
1 sibling, 1 reply; 36+ messages in thread
From: Alexandre Courbot @ 2026-03-16 13:17 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, Elle Rhumsaa, alexeyi, Eliot Courtney, joel,
linux-doc, amd-gfx, intel-gfx, intel-xe, linux-fbdev
On Wed Mar 11, 2026 at 9:39 AM JST, Joel Fernandes wrote:
> nova-core will use the GPU buddy allocator for physical VRAM management.
> Enable it in Kconfig.
>
> Cc: Nikola Djukic <ndjukic@nvidia.com>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
As I said in v8, let's squash this change with the first commit that
actually makes use of GPU_BUDDY.
* Re: [PATCH v9 02/23] gpu: nova-core: Kconfig: Sort select statements alphabetically
2026-03-11 0:39 ` [PATCH v9 02/23] gpu: nova-core: Kconfig: Sort select statements alphabetically Joel Fernandes
2026-03-12 6:35 ` Eliot Courtney
@ 2026-03-16 13:17 ` Alexandre Courbot
2026-03-16 16:28 ` Joel Fernandes
1 sibling, 1 reply; 36+ messages in thread
From: Alexandre Courbot @ 2026-03-16 13:17 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, Elle Rhumsaa, alexeyi, Eliot Courtney, joel,
linux-doc, amd-gfx, intel-gfx, intel-xe, linux-fbdev
On Wed Mar 11, 2026 at 9:39 AM JST, Joel Fernandes wrote:
> Reorder the select statements in NOVA_CORE Kconfig to be in
> alphabetical order.
>
> Suggested-by: Danilo Krummrich <dakr@kernel.org>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
This one is already in drm-rust-next.
* Re: [PATCH v9 04/23] gpu: nova-core: gsp: Extract usable FB region from GSP
2026-03-11 0:39 ` [PATCH v9 04/23] gpu: nova-core: gsp: Extract usable FB region from GSP Joel Fernandes
2026-03-13 6:58 ` Eliot Courtney
@ 2026-03-16 13:18 ` Alexandre Courbot
2026-03-16 16:57 ` Joel Fernandes
1 sibling, 1 reply; 36+ messages in thread
From: Alexandre Courbot @ 2026-03-16 13:18 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, Elle Rhumsaa, alexeyi, Eliot Courtney, joel,
linux-doc, amd-gfx, intel-gfx, intel-xe, linux-fbdev
On Wed Mar 11, 2026 at 9:39 AM JST, Joel Fernandes wrote:
> Add first_usable_fb_region() to GspStaticConfigInfo to extract the first
> usable FB region from GSP's fbRegionInfoParams. Usable regions are those
> that are not reserved or protected.
>
> The extracted region is stored in GetGspStaticInfoReply and exposed via
> usable_fb_region() API for use by the memory subsystem.
>
> Cc: Nikola Djukic <ndjukic@nvidia.com>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
This doesn't take into account the feedback I gave in [1]. In
particular, a TODO to handle the remaining regions looks important to
me.
[1] https://lore.kernel.org/all/DGRGDFASWXB7.3NAK8RRTCV88P@nvidia.com/
* Re: [PATCH v9 05/23] gpu: nova-core: gsp: Expose total physical VRAM end from FB region info
2026-03-11 0:39 ` [PATCH v9 05/23] gpu: nova-core: gsp: Expose total physical VRAM end from FB region info Joel Fernandes
@ 2026-03-16 13:19 ` Alexandre Courbot
2026-03-16 17:00 ` Joel Fernandes
0 siblings, 1 reply; 36+ messages in thread
From: Alexandre Courbot @ 2026-03-16 13:19 UTC (permalink / raw)
To: Joel Fernandes
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Jonathan Corbet, Alex Deucher,
Christian König, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
Tvrtko Ursulin, Huang Rui, Matthew Auld, Matthew Brost,
Lucas De Marchi, Thomas Hellström, Helge Deller, Alex Gaynor,
Boqun Feng, John Hubbard, Alistair Popple, Timur Tabi, Edwin Peer,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, Elle Rhumsaa, alexeyi, Eliot Courtney, joel,
linux-doc, amd-gfx, intel-gfx, intel-xe, linux-fbdev
On Wed Mar 11, 2026 at 9:39 AM JST, Joel Fernandes wrote:
> Add `total_fb_end()` to `GspStaticConfigInfo` that computes the exclusive end
> address of the highest valid FB region covering both usable and GSP-reserved
> areas.
>
> This allows callers to know the full physical VRAM extent, not just the
> allocatable portion.
>
> Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
> ---
> drivers/gpu/nova-core/gsp/commands.rs | 6 ++++++
> drivers/gpu/nova-core/gsp/fw/commands.rs | 19 +++++++++++++++++++
> 2 files changed, 25 insertions(+)
>
> diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
> index 8d5780d9cace..389d215098c6 100644
> --- a/drivers/gpu/nova-core/gsp/commands.rs
> +++ b/drivers/gpu/nova-core/gsp/commands.rs
> @@ -193,6 +193,9 @@ pub(crate) struct GetGspStaticInfoReply {
> /// Usable FB (VRAM) region for driver memory allocation.
> #[expect(dead_code)]
> pub(crate) usable_fb_region: Range<u64>,
> + /// End of VRAM.
> + #[expect(dead_code)]
> + pub(crate) total_fb_end: u64,
> }
>
> impl MessageFromGsp for GetGspStaticInfoReply {
> @@ -206,9 +209,12 @@ fn read(
> ) -> Result<Self, Self::InitError> {
> let (base, size) = msg.first_usable_fb_region().ok_or(ENODEV)?;
>
> + let total_fb_end = msg.total_fb_end().ok_or(ENODEV)?;
> +
> Ok(GetGspStaticInfoReply {
> gpu_name: msg.gpu_name_str(),
> usable_fb_region: base..base.saturating_add(size),
> + total_fb_end,
> })
> }
> }
> diff --git a/drivers/gpu/nova-core/gsp/fw/commands.rs b/drivers/gpu/nova-core/gsp/fw/commands.rs
> index cef86cab8a12..acaf92cd6735 100644
> --- a/drivers/gpu/nova-core/gsp/fw/commands.rs
> +++ b/drivers/gpu/nova-core/gsp/fw/commands.rs
> @@ -147,6 +147,25 @@ pub(crate) fn first_usable_fb_region(&self) -> Option<(u64, u64)> {
> }
> None
> }
> +
> + /// Compute the end of physical VRAM from all FB regions.
> + pub(crate) fn total_fb_end(&self) -> Option<u64> {
> + let fb_info = &self.0.fbRegionInfoParams;
> + let mut max_end: Option<u64> = None;
> + for i in 0..fb_info.numFBRegions.into_safe_cast() {
> + if let Some(reg) = fb_info.fbRegion.get(i) {
> + if reg.limit < reg.base {
> + continue;
> + }
This is basically a repeat of the code of the previous patch. Let's
implement an iterator over the FB memory regions (that filters out
invalid regions) that we can leverage in both places so we don't need to
repeat ourselves.
* Re: [PATCH v9 02/23] gpu: nova-core: Kconfig: Sort select statements alphabetically
2026-03-16 13:17 ` Alexandre Courbot
@ 2026-03-16 16:28 ` Joel Fernandes
0 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-16 16:28 UTC (permalink / raw)
To: Alexandre Courbot
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Eliot Courtney, joel, linux-doc
On Mon, 16 Mar 2026, Alexandre Courbot wrote:
> This one is already in drm-rust-next.
I'll rebase. thanks,
--
Joel Fernandes
* Re: [PATCH v9 01/23] gpu: nova-core: Select GPU_BUDDY for VRAM allocation
2026-03-16 13:17 ` Alexandre Courbot
@ 2026-03-16 16:28 ` Joel Fernandes
0 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-16 16:28 UTC (permalink / raw)
To: Alexandre Courbot
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Danilo Krummrich, Eliot Courtney, joel, linux-doc
On Mon, 16 Mar 2026, Alexandre Courbot wrote:
> As I said in v8, let's squash this change with the first commit that
> actually makes use of GPU_BUDDY.
Will do. The first commit actually using GPU_BUDDY is "Add GpuMm
centralized memory manager", so I'll squash this Kconfig select into
that commit in v10.
thanks,
--
Joel Fernandes
* Re: [PATCH v9 04/23] gpu: nova-core: gsp: Extract usable FB region from GSP
2026-03-16 13:18 ` Alexandre Courbot
@ 2026-03-16 16:57 ` Joel Fernandes
0 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-16 16:57 UTC (permalink / raw)
To: Alexandre Courbot
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Eliot Courtney, joel, linux-doc
On Mon, 16 Mar 2026, Alexandre Courbot wrote:
> This doesn't take into account the feedback I gave in [1]. In
> particular, a TODO to handle the remaining regions looks important to
> me.
Sorry about missing the v8 feedback. Fixed in upcoming v10.
--
Joel Fernandes
* Re: [PATCH v9 05/23] gpu: nova-core: gsp: Expose total physical VRAM end from FB region info
2026-03-16 13:19 ` Alexandre Courbot
@ 2026-03-16 17:00 ` Joel Fernandes
0 siblings, 0 replies; 36+ messages in thread
From: Joel Fernandes @ 2026-03-16 17:00 UTC (permalink / raw)
To: Alexandre Courbot
Cc: linux-kernel, Miguel Ojeda, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Dave Airlie, Daniel Almeida,
Koen Koning, dri-devel, nouveau, rust-for-linux, Nikola Djukic,
Eliot Courtney, joel, linux-doc
On Mon, 16 Mar 2026, Alexandre Courbot wrote:
> This is basically a repeat of the code of the previous patch. Let's
> implement an iterator over the FB memory regions (that filters out
> invalid regions) that we can leverage in both places so we don't need
> to repeat ourselves.
Good catch. Added `fb_regions()` to create an iterator that both this and
the previous patch use. Nice simplification:
/// Compute the end of physical VRAM from all FB regions.
pub(crate) fn total_fb_end(&self) -> Option<u64> {
self.fb_regions().map(|reg| reg.limit.saturating_add(1)).max()
}
--
Joel Fernandes
end of thread, other threads:[~2026-03-16 17:01 UTC | newest]
Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-11 0:39 [PATCH v9 00/23] gpu: nova-core: Add memory management support Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 01/23] gpu: nova-core: Select GPU_BUDDY for VRAM allocation Joel Fernandes
2026-03-12 6:34 ` Eliot Courtney
2026-03-16 13:17 ` Alexandre Courbot
2026-03-16 16:28 ` Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 02/23] gpu: nova-core: Kconfig: Sort select statements alphabetically Joel Fernandes
2026-03-12 6:35 ` Eliot Courtney
2026-03-16 13:17 ` Alexandre Courbot
2026-03-16 16:28 ` Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 03/23] gpu: nova-core: gsp: Return GspStaticInfo from boot() Joel Fernandes
2026-03-12 6:37 ` Eliot Courtney
2026-03-11 0:39 ` [PATCH v9 04/23] gpu: nova-core: gsp: Extract usable FB region from GSP Joel Fernandes
2026-03-13 6:58 ` Eliot Courtney
2026-03-16 13:18 ` Alexandre Courbot
2026-03-16 16:57 ` Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 05/23] gpu: nova-core: gsp: Expose total physical VRAM end from FB region info Joel Fernandes
2026-03-16 13:19 ` Alexandre Courbot
2026-03-16 17:00 ` Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 06/23] gpu: nova-core: mm: Add support to use PRAMIN windows to write to VRAM Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 07/23] docs: gpu: nova-core: Document the PRAMIN aperture mechanism Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 08/23] gpu: nova-core: mm: Add common memory management types Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 09/23] gpu: nova-core: mm: Add TLB flush support Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 10/23] gpu: nova-core: mm: Add GpuMm centralized memory manager Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 11/23] gpu: nova-core: mm: Add common types for all page table formats Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 12/23] gpu: nova-core: mm: Add MMU v2 page table types Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 13/23] gpu: nova-core: mm: Add MMU v3 " Joel Fernandes
2026-03-11 0:39 ` [PATCH v9 14/23] gpu: nova-core: mm: Add unified page table entry wrapper enums Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 15/23] gpu: nova-core: mm: Add page table walker for MMU v2/v3 Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 16/23] gpu: nova-core: mm: Add Virtual Memory Manager Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 17/23] gpu: nova-core: mm: Add virtual address range tracking to VMM Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 18/23] gpu: nova-core: mm: Add multi-page mapping API " Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 19/23] gpu: nova-core: Add BAR1 aperture type and size constant Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 20/23] gpu: nova-core: mm: Add BAR1 user interface Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 21/23] gpu: nova-core: mm: Add BAR1 memory management self-tests Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 22/23] gpu: nova-core: mm: Add PRAMIN aperture self-tests Joel Fernandes
2026-03-11 0:40 ` [PATCH v9 23/23] gpu: nova-core: Use runtime BAR1 size instead of hardcoded 256MB Joel Fernandes