* [PATCH v12 01/22] gpu: nova-core: gsp: Return GspStaticInfo from boot()
2026-04-25 21:14 [PATCH v12 00/22] gpu: nova-core: Add memory management support Joel Fernandes
@ 2026-04-25 21:14 ` Joel Fernandes
2026-04-25 21:14 ` [PATCH v12 02/22] gpu: nova-core: gsp: Extract usable FB region from GSP Joel Fernandes
` (20 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Bjorn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Dave Airlie, Daniel Almeida, dri-devel, rust-for-linux, nova-gpu,
Nikola Djukic, David Airlie, Boqun Feng, John Hubbard,
Alistair Popple, Timur Tabi, Edwin Peer, Alexandre Courbot,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, alexeyi, Eliot Courtney, joel, linux-doc,
Joel Fernandes
Refactor the GSP boot function to return GetGspStaticInfoReply.
This provides access to fields required for memory management
initialization:
- bar1_pde_base: BAR1 page directory base.
- bar2_pde_base: BAR2 page directory base.
- usable memory regions in video memory.
Reviewed-by: Eliot Courtney <ecourtney@nvidia.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/gpu.rs | 8 ++++++--
drivers/gpu/nova-core/gsp/boot.rs | 12 ++++++++----
2 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index 0f6fe9a1b955..f2a8915a1ff4 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -21,7 +21,10 @@
},
fb::SysmemFlush,
gfw,
- gsp::Gsp,
+ gsp::{
+ commands::GetGspStaticInfoReply,
+ Gsp, //
+ },
regs,
};
@@ -238,6 +241,7 @@ pub(crate) struct Gpu {
/// GSP runtime data. Temporarily an empty placeholder.
#[pin]
gsp: Gsp,
+ gsp_static_info: GetGspStaticInfoReply,
}
impl Gpu {
@@ -269,7 +273,7 @@ pub(crate) fn new<'a>(
gsp <- Gsp::new(pdev),
- _: { gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)? },
+ gsp_static_info: { gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)? },
bar: devres_bar,
})
diff --git a/drivers/gpu/nova-core/gsp/boot.rs b/drivers/gpu/nova-core/gsp/boot.rs
index 6f707b3d1a54..51593b16fc57 100644
--- a/drivers/gpu/nova-core/gsp/boot.rs
+++ b/drivers/gpu/nova-core/gsp/boot.rs
@@ -33,7 +33,10 @@
},
gpu::Chipset,
gsp::{
- commands,
+ commands::{
+ self,
+ GetGspStaticInfoReply, //
+ },
sequencer::{
GspSequencer,
GspSequencerParams, //
@@ -145,7 +148,7 @@ pub(crate) fn boot(
chipset: Chipset,
gsp_falcon: &Falcon<Gsp>,
sec2_falcon: &Falcon<Sec2>,
- ) -> Result {
+ ) -> Result<GetGspStaticInfoReply> {
let dev = pdev.as_ref();
let bios = Vbios::new(dev, bar)?;
@@ -229,12 +232,13 @@ pub(crate) fn boot(
commands::wait_gsp_init_done(&self.cmdq)?;
// Obtain and display basic GPU information.
- let info = commands::get_gsp_info(&self.cmdq, bar)?;
+ let info = commands::get_gsp_info(&self.cmdq, bar)
+ .inspect_err(|e| dev_err!(pdev, "Failed to obtain GSP static info ({:?})\n", e))?;
match info.gpu_name() {
Ok(name) => dev_info!(pdev, "GPU name: {}\n", name),
Err(e) => dev_warn!(pdev, "GPU name unavailable: {:?}\n", e),
}
- Ok(())
+ Ok(info)
}
}
--
2.34.1
* [PATCH v12 02/22] gpu: nova-core: gsp: Extract usable FB region from GSP
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Add first_usable_fb_region() to GspStaticConfigInfo to extract the first
usable FB region from GSP's fbRegionInfoParams. Usable regions are those
that are not reserved or protected, and that support both compression
and ISO.
The extracted region is stored in GetGspStaticInfoReply and exposed as
the usable_fb_region field for use by the memory subsystem.
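As a rough sketch of the selection logic, in plain Rust (illustrative
only; the field names stand in for the firmware struct members such as
reserved, bProtected, supportCompressed and supportISO, whose real types
come from the GSP bindings):

```rust
// Hypothetical stand-in for the firmware FB region descriptor.
struct FbRegion {
    base: u64,
    limit: u64, // inclusive upper bound
    reserved: bool,
    protected: bool,
    support_compressed: bool,
    support_iso: bool,
}

// Returns the first usable region as a half-open range [base, limit + 1),
// skipping invalid entries where limit < base.
fn first_usable_fb_region(regions: &[FbRegion]) -> Option<(u64, u64)> {
    regions
        .iter()
        .filter(|r| r.limit >= r.base)
        .find(|r| !r.reserved && !r.protected && r.support_compressed && r.support_iso)
        .and_then(|r| r.limit.checked_add(1).map(|end| (r.base, end)))
}
```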
Cc: Nikola Djukic <ndjukic@nvidia.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/gsp/commands.rs | 11 ++++--
drivers/gpu/nova-core/gsp/fw/commands.rs | 45 +++++++++++++++++++++++-
2 files changed, 52 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
index c89c7b57a751..d18abd8b5f04 100644
--- a/drivers/gpu/nova-core/gsp/commands.rs
+++ b/drivers/gpu/nova-core/gsp/commands.rs
@@ -4,6 +4,7 @@
array,
convert::Infallible,
ffi::FromBytesUntilNulError,
+ ops::Range,
str::Utf8Error, //
};
@@ -189,15 +190,18 @@ fn init(&self) -> impl Init<Self::Command, Self::InitError> {
}
}
-/// The reply from the GSP to the [`GetGspInfo`] command.
+/// The reply from the GSP to the [`GetGspStaticInfo`] command.
pub(crate) struct GetGspStaticInfoReply {
gpu_name: [u8; 64],
+ /// Usable FB (VRAM) region for driver memory allocation.
+ #[expect(dead_code)]
+ pub(crate) usable_fb_region: Range<u64>,
}
impl MessageFromGsp for GetGspStaticInfoReply {
const FUNCTION: MsgFunction = MsgFunction::GetGspStaticInfo;
type Message = GspStaticConfigInfo;
- type InitError = Infallible;
+ type InitError = Error;
fn read(
msg: &Self::Message,
@@ -205,6 +209,7 @@ fn read(
) -> Result<Self, Self::InitError> {
Ok(GetGspStaticInfoReply {
gpu_name: msg.gpu_name_str(),
+ usable_fb_region: msg.first_usable_fb_region().ok_or(ENODEV)?,
})
}
}
@@ -233,7 +238,7 @@ pub(crate) fn gpu_name(&self) -> core::result::Result<&str, GpuNameError> {
}
}
-/// Send the [`GetGspInfo`] command and awaits for its reply.
+/// Send the [`GetGspStaticInfo`] command and awaits for its reply.
pub(crate) fn get_gsp_info(cmdq: &Cmdq, bar: &Bar0) -> Result<GetGspStaticInfoReply> {
cmdq.send_command(bar, GetGspStaticInfo)
}
diff --git a/drivers/gpu/nova-core/gsp/fw/commands.rs b/drivers/gpu/nova-core/gsp/fw/commands.rs
index db46276430be..a34d29280430 100644
--- a/drivers/gpu/nova-core/gsp/fw/commands.rs
+++ b/drivers/gpu/nova-core/gsp/fw/commands.rs
@@ -1,5 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
+use core::ops::Range;
+
use kernel::{
device,
pci,
@@ -10,7 +12,10 @@
}, //
};
-use crate::gsp::GSP_PAGE_SIZE;
+use crate::{
+ gsp::GSP_PAGE_SIZE,
+ num::IntoSafeCast, //
+};
use super::bindings;
@@ -121,6 +126,44 @@ impl GspStaticConfigInfo {
pub(crate) fn gpu_name_str(&self) -> [u8; 64] {
self.0.gpuNameString
}
+
+ /// Returns an iterator over valid FB regions from GSP firmware data.
+ fn fb_regions(
+ &self,
+ ) -> impl Iterator<Item = &bindings::NV2080_CTRL_CMD_FB_GET_FB_REGION_FB_REGION_INFO> {
+ let fb_info = &self.0.fbRegionInfoParams;
+ fb_info
+ .fbRegion
+ .iter()
+ .take(fb_info.numFBRegions.into_safe_cast())
+ .filter(|reg| reg.limit >= reg.base)
+ }
+
+ /// Extracts the first usable FB region from GSP firmware data.
+ ///
+ /// Returns the first region suitable for driver memory allocation as a [`Range<u64>`].
+ /// Usable regions are those that satisfy all the following properties:
+ /// - Are not reserved for firmware internal use.
+ /// - Are not protected (hardware-enforced access restrictions).
+ /// - Support compression (can use GPU memory compression for bandwidth).
+ /// - Support ISO (isochronous memory for display requiring guaranteed bandwidth).
+ ///
+    /// TODO: Multiple discontinuous usable VRAM regions are possible in
+    /// special cases; these will need to be supported.
+ pub(crate) fn first_usable_fb_region(&self) -> Option<Range<u64>> {
+ self.fb_regions().find_map(|reg| {
+ // Filter: not reserved, not protected, supports compression and ISO.
+ if reg.reserved == 0
+ && reg.bProtected == 0
+ && reg.supportCompressed != 0
+ && reg.supportISO != 0
+ {
+ reg.limit.checked_add(1).map(|end| reg.base..end)
+ } else {
+ None
+ }
+ })
+ }
}
// SAFETY: Padding is explicit and will not contain uninitialized data.
--
2.34.1
* [PATCH v12 03/22] gpu: nova-core: gsp: Expose total physical VRAM end from FB region info
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Add `total_fb_end()` to `GspStaticConfigInfo` that computes the
exclusive end address of the highest valid FB region covering both
usable and GSP-reserved areas.
This allows callers to know the full physical VRAM extent, not just
the allocatable portion.
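The computation amounts to taking the maximum exclusive end over all
valid regions; a sketch in plain Rust (illustrative only, with a
hypothetical region type in place of the firmware bindings):

```rust
// Hypothetical stand-in for the firmware FB region descriptor.
struct FbRegion {
    base: u64,
    limit: u64, // inclusive upper bound
}

// Exclusive end of physical VRAM: the maximum (limit + 1) over all
// valid regions, or None when no valid region exists.
fn total_fb_end(regions: &[FbRegion]) -> Option<u64> {
    regions
        .iter()
        .filter(|r| r.limit >= r.base)
        .filter_map(|r| r.limit.checked_add(1))
        .max()
}
```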
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/gpu.rs | 12 +++++++++++-
drivers/gpu/nova-core/gsp/commands.rs | 5 +++++
drivers/gpu/nova-core/gsp/fw/commands.rs | 7 +++++++
3 files changed, 23 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index f2a8915a1ff4..675a0676f032 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -273,7 +273,17 @@ pub(crate) fn new<'a>(
gsp <- Gsp::new(pdev),
- gsp_static_info: { gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)? },
+ gsp_static_info: {
+ let info = gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)?;
+
+ dev_info!(
+ pdev.as_ref(),
+ "Total physical VRAM: {} MiB\n",
+ info.total_fb_end >> 20
+ );
+
+ info
+ },
bar: devres_bar,
})
diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
index d18abd8b5f04..40b4ef72169d 100644
--- a/drivers/gpu/nova-core/gsp/commands.rs
+++ b/drivers/gpu/nova-core/gsp/commands.rs
@@ -196,6 +196,8 @@ pub(crate) struct GetGspStaticInfoReply {
/// Usable FB (VRAM) region for driver memory allocation.
#[expect(dead_code)]
pub(crate) usable_fb_region: Range<u64>,
+ /// End of VRAM.
+ pub(crate) total_fb_end: u64,
}
impl MessageFromGsp for GetGspStaticInfoReply {
@@ -207,9 +209,12 @@ fn read(
msg: &Self::Message,
_sbuffer: &mut SBufferIter<array::IntoIter<&[u8], 2>>,
) -> Result<Self, Self::InitError> {
+ let total_fb_end = msg.total_fb_end().ok_or(ENODEV)?;
+
Ok(GetGspStaticInfoReply {
gpu_name: msg.gpu_name_str(),
usable_fb_region: msg.first_usable_fb_region().ok_or(ENODEV)?,
+ total_fb_end,
})
}
}
diff --git a/drivers/gpu/nova-core/gsp/fw/commands.rs b/drivers/gpu/nova-core/gsp/fw/commands.rs
index a34d29280430..8619f08a689d 100644
--- a/drivers/gpu/nova-core/gsp/fw/commands.rs
+++ b/drivers/gpu/nova-core/gsp/fw/commands.rs
@@ -164,6 +164,13 @@ pub(crate) fn first_usable_fb_region(&self) -> Option<Range<u64>> {
}
})
}
+
+ /// Compute the end of physical VRAM from all FB regions.
+ pub(crate) fn total_fb_end(&self) -> Option<u64> {
+ self.fb_regions()
+ .filter_map(|reg| reg.limit.checked_add(1))
+ .max()
+ }
}
// SAFETY: Padding is explicit and will not contain uninitialized data.
--
2.34.1
* [PATCH v12 04/22] gpu: nova-core: mm: Add support to use PRAMIN windows to write to VRAM
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
The PRAMIN aperture is a mechanism for direct CPU read/write access to
VRAM through a sliding window in BAR0. Add support for it.
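The window arithmetic the accessors rely on can be sketched as follows
(the constants match the values introduced by this patch; the function
itself is a simplified, illustrative version of the driver logic):

```rust
const PRAMIN_BASE: u64 = 0x70_0000; // aperture offset within BAR0
const PRAMIN_SIZE: u64 = 1 << 20;   // 1MB sliding window
const SZ_64K: u64 = 1 << 16;        // required window-base alignment

/// Returns (bar0_offset, Some(new_base)) when the window must be
/// repositioned, (bar0_offset, None) when the access already fits in
/// the window at `current_base`, or None when it cannot be satisfied.
fn compute_window(vram_addr: u64, access_size: u64, current_base: u64)
    -> Option<(u64, Option<u64>)>
{
    if vram_addr >= current_base {
        let off = vram_addr - current_base;
        if off + access_size <= PRAMIN_SIZE {
            return Some((PRAMIN_BASE + off, None));
        }
    }
    // Reposition to the highest 64KB-aligned base at or below vram_addr.
    let base = vram_addr & !(SZ_64K - 1);
    let off = vram_addr - base;
    if off + access_size > PRAMIN_SIZE {
        return None; // access does not fit in a single 1MB window
    }
    Some((PRAMIN_BASE + off, Some(base)))
}
```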
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm.rs | 5 +
drivers/gpu/nova-core/mm/pramin.rs | 300 +++++++++++++++++++++++++++++
drivers/gpu/nova-core/nova_core.rs | 1 +
drivers/gpu/nova-core/regs.rs | 10 +
4 files changed, 316 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm.rs
create mode 100644 drivers/gpu/nova-core/mm/pramin.rs
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
new file mode 100644
index 000000000000..7a5dd4220c67
--- /dev/null
+++ b/drivers/gpu/nova-core/mm.rs
@@ -0,0 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory management subsystems for nova-core.
+
+pub(crate) mod pramin;
diff --git a/drivers/gpu/nova-core/mm/pramin.rs b/drivers/gpu/nova-core/mm/pramin.rs
new file mode 100644
index 000000000000..57b560ae1e85
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pramin.rs
@@ -0,0 +1,300 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Direct VRAM access through the PRAMIN aperture.
+//!
+//! PRAMIN provides a 1MB sliding window into VRAM through BAR0, allowing the CPU to access
+//! video memory directly. Access is managed through a two-level API:
+//!
+//! - [`Pramin`]: The parent object that owns the BAR0 reference and synchronization lock.
+//! - [`PraminWindow`]: A guard object that holds exclusive PRAMIN access for its lifetime.
+//!
+//! The PRAMIN aperture is a 1MB region at a fixed offset from BAR0. The window base is
+//! controlled by an architecture-specific register and is 64KB aligned.
+//!
+//! # Examples
+//!
+//! ## Basic read/write
+//!
+//! ```no_run
+//! use crate::driver::Bar0;
+//! use crate::mm::pramin;
+//! use kernel::devres::Devres;
+//! use kernel::prelude::*;
+//! use kernel::sync::Arc;
+//!
+//! fn example(
+//!     devres_bar: Arc<Devres<Bar0>>,
+//!     vram_region: core::ops::Range<u64>,
+//! ) -> Result<()> {
+//!     let pramin = Arc::pin_init(
+//!         pramin::Pramin::new(devres_bar, vram_region)?,
+//!         GFP_KERNEL,
+//!     )?;
+//!     let mut window = pramin.get_window()?;
+//!
+//! // Write and read back.
+//! window.try_write32(0x100, 0xDEADBEEF)?;
+//! let val = window.try_read32(0x100)?;
+//! assert_eq!(val, 0xDEADBEEF);
+//!
+//! Ok(())
+//! }
+//! ```
+//!
+//! ## Auto-repositioning across VRAM regions
+//!
+//! ```no_run
+//! use crate::driver::Bar0;
+//! use crate::mm::pramin;
+//! use kernel::devres::Devres;
+//! use kernel::prelude::*;
+//! use kernel::sync::Arc;
+//!
+//! fn example(
+//!     devres_bar: Arc<Devres<Bar0>>,
+//!     vram_region: core::ops::Range<u64>,
+//! ) -> Result<()> {
+//!     let pramin = Arc::pin_init(
+//!         pramin::Pramin::new(devres_bar, vram_region)?,
+//!         GFP_KERNEL,
+//!     )?;
+//!     let mut window = pramin.get_window()?;
+//!
+//! // Access first 1MB region.
+//! window.try_write32(0x100, 0x11111111)?;
+//!
+//! // Access at 2MB - window auto-repositions.
+//! window.try_write32(0x200000, 0x22222222)?;
+//!
+//! // Back to first region - window repositions again.
+//! let val = window.try_read32(0x100)?;
+//! assert_eq!(val, 0x11111111);
+//!
+//! Ok(())
+//! }
+//! ```
+
+#![expect(unused)]
+
+use core::ops::Range;
+
+use crate::{
+ bounded_enum,
+ driver::Bar0,
+ num::IntoSafeCast,
+ regs, //
+};
+
+use kernel::{
+ devres::Devres,
+ io::Io,
+ new_mutex,
+ num::Bounded,
+ prelude::*,
+ revocable::RevocableGuard,
+ sizes::{
+ SZ_1M,
+ SZ_64K, //
+ },
+ sync::{
+ lock::mutex::MutexGuard,
+ Arc,
+ Mutex, //
+ },
+};
+
+bounded_enum! {
+ /// Target memory type for the BAR0 window register.
+ ///
+ /// Only VRAM is supported; Hopper+ GPUs do not support other targets.
+ #[derive(Debug)]
+ pub(crate) enum Bar0WindowTarget with TryFrom<Bounded<u32, 2>> {
+ /// Video RAM (GPU framebuffer memory).
+ Vram = 0,
+ }
+}
+
+/// PRAMIN aperture base offset in BAR0.
+const PRAMIN_BASE: usize = 0x700000;
+
+/// PRAMIN aperture size (1MB).
+const PRAMIN_SIZE: usize = SZ_1M;
+
+/// Generate a PRAMIN read accessor.
+macro_rules! define_pramin_read {
+ ($name:ident, $ty:ty) => {
+ #[doc = concat!("Read a `", stringify!($ty), "` from VRAM at the given offset.")]
+ pub(crate) fn $name(&mut self, vram_offset: usize) -> Result<$ty> {
+ let (bar_offset, new_base) =
+ self.compute_window(vram_offset, ::core::mem::size_of::<$ty>())?;
+
+ if let Some(base) = new_base {
+ Self::write_window_base(&self.bar, base)?;
+ *self.state = base;
+ }
+ self.bar.$name(bar_offset)
+ }
+ };
+}
+
+/// Generate a PRAMIN write accessor.
+macro_rules! define_pramin_write {
+ ($name:ident, $ty:ty) => {
+ #[doc = concat!("Write a `", stringify!($ty), "` to VRAM at the given offset.")]
+ pub(crate) fn $name(&mut self, vram_offset: usize, value: $ty) -> Result {
+ let (bar_offset, new_base) =
+ self.compute_window(vram_offset, ::core::mem::size_of::<$ty>())?;
+
+ if let Some(base) = new_base {
+ Self::write_window_base(&self.bar, base)?;
+ *self.state = base;
+ }
+ self.bar.$name(value, bar_offset)
+ }
+ };
+}
+
+/// PRAMIN aperture manager.
+///
+/// Call [`Pramin::get_window()`] to acquire exclusive PRAMIN access.
+#[pin_data]
+pub(crate) struct Pramin {
+ bar: Arc<Devres<Bar0>>,
+ /// Valid VRAM region. Accesses outside this range are rejected.
+ vram_region: Range<u64>,
+ /// PRAMIN aperture state, protected by a mutex.
+ ///
+ /// # Invariants
+ ///
+ /// This lock is acquired during the DMA fence signaling critical path.
+    /// It must NEVER be held across reclaimable (`GFP_KERNEL`) memory
+    /// allocations, because the memory-reclaim path can call
+    /// `dma_fence_wait()`, which would deadlock while this lock is held.
+ #[pin]
+ state: Mutex<u64>,
+}
+
+impl Pramin {
+ /// Create a pin-initializer for PRAMIN.
+ ///
+ /// `vram_region` specifies the valid VRAM address range.
+ pub(crate) fn new(
+ bar: Arc<Devres<Bar0>>,
+ vram_region: Range<u64>,
+ ) -> Result<impl PinInit<Self>> {
+ let bar_access = bar.try_access().ok_or(ENODEV)?;
+ let current_base = Self::read_window_base(&bar_access);
+
+ Ok(pin_init!(Self {
+ bar,
+ vram_region,
+ state <- new_mutex!(current_base, "pramin_state"),
+ }))
+ }
+
+ /// Acquire exclusive PRAMIN access.
+ ///
+ /// Returns a [`PraminWindow`] guard that provides VRAM read/write accessors.
+ /// The [`PraminWindow`] is exclusive and only one can exist at a time.
+ pub(crate) fn get_window(&self) -> Result<PraminWindow<'_>> {
+ let bar = self.bar.try_access().ok_or(ENODEV)?;
+ let state = self.state.lock();
+ Ok(PraminWindow {
+ bar,
+ vram_region: self.vram_region.clone(),
+ state,
+ })
+ }
+
+ /// Read the current window base from the BAR0_WINDOW register.
+ fn read_window_base(bar: &Bar0) -> u64 {
+ let reg = bar.read(regs::NV_PBUS_BAR0_WINDOW);
+
+ // TODO: Convert to Bounded<u64, 40> when available.
+ u64::from(reg.window_base()) << 16
+ }
+}
+
+/// PRAMIN window guard for direct VRAM access.
+///
+/// This guard holds exclusive access to the PRAMIN aperture. The window auto-repositions
+/// when accessing VRAM offsets outside the current 1MB range.
+///
+/// Only one [`PraminWindow`] can exist at a time per [`Pramin`] instance (enforced by the
+/// internal `MutexGuard`).
+pub(crate) struct PraminWindow<'a> {
+ bar: RevocableGuard<'a, Bar0>,
+ vram_region: Range<u64>,
+ state: MutexGuard<'a, u64>,
+}
+
+impl PraminWindow<'_> {
+ /// Write a new window base to the BAR0_WINDOW register.
+ fn write_window_base(bar: &Bar0, base: u64) -> Result {
+ // CAST: After >> 16, a VRAM address fits in u32.
+ let window_base = (base >> 16) as u32;
+ bar.write_reg(
+ regs::NV_PBUS_BAR0_WINDOW::zeroed()
+ .with_target(Bar0WindowTarget::Vram)
+ .try_with_window_base(window_base)?,
+ );
+ Ok(())
+ }
+
+ /// Compute window parameters for a VRAM access.
+ ///
+ /// Returns (`bar_offset`, `new_base`) where:
+ /// - `bar_offset`: The BAR0 offset to use for the access.
+ /// - `new_base`: `Some(base)` if window needs repositioning, `None` otherwise.
+ fn compute_window(
+ &self,
+ vram_offset: usize,
+ access_size: usize,
+ ) -> Result<(usize, Option<u64>)> {
+ // Validate VRAM offset is within the valid VRAM region.
+ let vram_addr = vram_offset as u64;
+ let end_addr = vram_addr.checked_add(access_size as u64).ok_or(EINVAL)?;
+ if vram_addr < self.vram_region.start || end_addr > self.vram_region.end {
+ return Err(EINVAL);
+ }
+
+ // Check if access fits within the current 1MB window.
+ let current_base = *self.state;
+ if vram_addr >= current_base {
+ let offset_in_window: usize = (vram_addr - current_base).into_safe_cast();
+ if offset_in_window + access_size <= PRAMIN_SIZE {
+ return Ok((PRAMIN_BASE + offset_in_window, None));
+ }
+ }
+
+ // Access doesn't fit in current window - reposition.
+ // Hardware requires 64KB alignment for the window base register.
+ let needed_base = vram_addr & !(SZ_64K as u64 - 1);
+ let offset_in_window: usize = (vram_addr - needed_base).into_safe_cast();
+
+ // Verify access fits in the 1MB window from the new base.
+ if offset_in_window + access_size > PRAMIN_SIZE {
+ return Err(EINVAL);
+ }
+
+ Ok((PRAMIN_BASE + offset_in_window, Some(needed_base)))
+ }
+
+ define_pramin_read!(try_read8, u8);
+ define_pramin_read!(try_read16, u16);
+ define_pramin_read!(try_read32, u32);
+ define_pramin_read!(try_read64, u64);
+
+ define_pramin_write!(try_write8, u8);
+ define_pramin_write!(try_write16, u16);
+ define_pramin_write!(try_write32, u32);
+ define_pramin_write!(try_write64, u64);
+}
diff --git a/drivers/gpu/nova-core/nova_core.rs b/drivers/gpu/nova-core/nova_core.rs
index 3a0c45481a92..d087354f03b9 100644
--- a/drivers/gpu/nova-core/nova_core.rs
+++ b/drivers/gpu/nova-core/nova_core.rs
@@ -17,6 +17,7 @@
mod gfw;
mod gpu;
mod gsp;
+mod mm;
#[macro_use]
mod num;
mod regs;
diff --git a/drivers/gpu/nova-core/regs.rs b/drivers/gpu/nova-core/regs.rs
index 2f171a4ff9ba..a3ca02345e20 100644
--- a/drivers/gpu/nova-core/regs.rs
+++ b/drivers/gpu/nova-core/regs.rs
@@ -30,6 +30,7 @@
Architecture,
Chipset, //
},
+ mm::pramin::Bar0WindowTarget,
num::FromSafeCast,
};
@@ -115,6 +116,15 @@ fn fmt(&self, f: &mut kernel::fmt::Formatter<'_>) -> kernel::fmt::Result {
}
}
+register! {
+ /// BAR0 window control for PRAMIN access.
+ pub(crate) NV_PBUS_BAR0_WINDOW(u32) @ 0x00001700 {
+ 25:24 target ?=> Bar0WindowTarget;
+ /// Window base address (bits 39:16 of FB addr).
+ 23:0 window_base;
+ }
+}
+
// PFB
register! {
--
2.34.1
* [PATCH v12 05/22] docs: gpu: nova-core: Document the PRAMIN aperture mechanism
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Add documentation for the PRAMIN aperture mechanism used by nova-core
for direct VRAM access.
Nova only uses TARGET=VRAM for VRAM access. The SYS_MEM target values
are documented for completeness but not used by the driver.
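The BASE_ADDR encoding described in the document is a plain shift of
the 64KB-aligned window base; a round-trip sketch in Rust (illustrative
only, not the driver's register accessors):

```rust
// The 24-bit BASE_ADDR field of NV_PBUS_BAR0_WINDOW holds bits [39:16]
// of the 64KB-aligned VRAM window base.
fn encode_base(vram_base: u64) -> u32 {
    assert!(vram_base & 0xFFFF == 0, "window base must be 64KB aligned");
    ((vram_base >> 16) & 0xFF_FFFF) as u32
}

// actual_vram_addr = (BASE_ADDR << 16) + pramin_offset
fn vram_addr(base_field: u32, pramin_offset: u32) -> u64 {
    assert!(pramin_offset <= 0xF_FFFF, "offset exceeds the 1MB window");
    (u64::from(base_field) << 16) + u64::from(pramin_offset)
}
```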
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
Documentation/gpu/nova/core/pramin.rst | 123 +++++++++++++++++++++++++
Documentation/gpu/nova/index.rst | 1 +
2 files changed, 124 insertions(+)
create mode 100644 Documentation/gpu/nova/core/pramin.rst
diff --git a/Documentation/gpu/nova/core/pramin.rst b/Documentation/gpu/nova/core/pramin.rst
new file mode 100644
index 000000000000..f6cbb0811163
--- /dev/null
+++ b/Documentation/gpu/nova/core/pramin.rst
@@ -0,0 +1,123 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================
+PRAMIN aperture mechanism
+=========================
+
+.. note::
+ The following description is approximate and current as of the Ampere family.
+ It may change for future generations and is intended to assist in understanding
+ the driver code.
+
+Introduction
+============
+
+PRAMIN is a hardware aperture mechanism that provides CPU access to GPU Video RAM (VRAM) before
+the GPU's Memory Management Unit (MMU) and page tables are initialized. This 1MB sliding window,
+located at a fixed offset within BAR0, is essential for setting up page tables and other critical
+GPU data structures without relying on the GPU's MMU.
+
+Architecture Overview
+=====================
+
+The PRAMIN aperture mechanism is logically implemented by the GPU's PBUS (PCIe Bus Controller Unit)
+and provides a CPU-accessible window into VRAM through the PCIe interface::
+
+ +-----------------+ PCIe +------------------------------+
+ | CPU |<----------->| GPU |
+ +-----------------+ | |
+ | +----------------------+ |
+ | | PBUS | |
+ | | (Bus Controller) | |
+ | | | |
+ | | +--------------+<------------ (window starts at
+ | | | PRAMIN | | | BAR0 + 0x700000)
+ | | | Window | | |
+ | | | (1MB) | | |
+ | | +--------------+ | |
+ | | | | |
+ | +---------|------------+ |
+ | | |
+ | v |
+ | +----------------------+<------------ (Program PRAMIN to any
+ | | VRAM | | 64KB-aligned VRAM boundary)
+ | | (Several GBs) | |
+ | | | |
+ | | FB[0x0000000000] | |
+ | | ... | |
+ | | FB[0xFFFFFFFFFF] | |
+ | +----------------------+ |
+ +------------------------------+
+
+PBUS (PCIe Bus Controller) is responsible for, among other things, handling MMIO
+accesses to the BAR registers.
+
+PRAMIN Window Operation
+=======================
+
+The PRAMIN window provides a 1MB sliding aperture that can be repositioned over
+the entire VRAM address space using the ``NV_PBUS_BAR0_WINDOW`` register.
+
+Window Control Mechanism
+-------------------------
+
+::
+
+ NV_PBUS_BAR0_WINDOW Register (0x1700):
+ +-------+--------+--------------------------------------+
+ | 31:26 | 25:24 | 23:0 |
+ | RSVD | TARGET | BASE_ADDR |
+ | | | (bits 39:16 of VRAM address) |
+ +-------+--------+--------------------------------------+
+
+ The 24-bit BASE_ADDR field encodes bits [39:16] of the target VRAM address,
+ providing 40-bit (1TB) address space coverage with 64KB alignment.
+
+ TARGET field (bits 25:24):
+ - 0x0: VRAM (Video Memory)
+ - 0x1: SYS_MEM_COH (Coherent System Memory)
+ - 0x2: SYS_MEM_NONCOH (Non-coherent System Memory)
+ - 0x3: Reserved
+
+.. note::
+ Nova only uses TARGET=VRAM (0x0) for video memory access. The SYS_MEM
+ target values are documented here for hardware completeness but are
+ not used by the driver.
+
+64KB Alignment Requirement
+---------------------------
+
+The PRAMIN window must be aligned to 64KB boundaries in VRAM. This is enforced
+by the ``BASE_ADDR`` field representing bits [39:16] of the target address::
+
+ VRAM Address Calculation:
+ actual_vram_addr = (BASE_ADDR << 16) + pramin_offset
+ Where:
+ - BASE_ADDR: 24-bit value from NV_PBUS_BAR0_WINDOW[23:0]
+ - pramin_offset: 20-bit offset within the PRAMIN window [0x00000-0xFFFFF]
+
+ Example Window Positioning:
+ +---------------------------------------------------------+
+ | VRAM Space |
+ | |
+ | 0x0000000000 +-----------------+ <-- 64KB aligned |
+ | | PRAMIN Window | |
+ | | (1MB) | |
+ | 0x00000FFFFF +-----------------+ |
+ | |
+ | | ^ |
+ | | | Window can slide |
+ | v | to any 64KB-aligned boundary |
+ | |
+ | 0x0123400000 +-----------------+ <-- 64KB aligned |
+ | | PRAMIN Window | |
+ | | (1MB) | |
+ | 0x01234FFFFF +-----------------+ |
+ | |
+ | ... |
+ | |
+ | 0xFFFFF00000 +-----------------+ <-- 64KB aligned |
+ | | PRAMIN Window | |
+ | | (1MB) | |
+ | 0xFFFFFFFFFF +-----------------+ |
+ +---------------------------------------------------------+
diff --git a/Documentation/gpu/nova/index.rst b/Documentation/gpu/nova/index.rst
index e39cb3163581..b8254b1ffe2a 100644
--- a/Documentation/gpu/nova/index.rst
+++ b/Documentation/gpu/nova/index.rst
@@ -32,3 +32,4 @@ vGPU manager VFIO driver and the nova-drm driver.
core/devinit
core/fwsec
core/falcon
+ core/pramin
--
2.34.1
* [PATCH v12 06/22] gpu: nova-core: mm: Add common memory management types
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Add foundational types for GPU memory management. These types are used
throughout the nova memory management subsystem for page table
operations, address translation, and memory allocation.
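The core address split these types encode is the usual 4KB page
decomposition; a sketch in plain Rust (illustrative only, standing in
for the bitfield-based VramAddress/Pfn types):

```rust
const PAGE_SHIFT: u32 = 12; // 4KB pages, matching PAGE_SIZE = SZ_4K

/// Split an address into (page frame number, offset within page).
fn split_vram_address(addr: u64) -> (u64, u64) {
    (addr >> PAGE_SHIFT, addr & ((1u64 << PAGE_SHIFT) - 1))
}

/// Recombine a frame number and page offset into an address.
fn join_vram_address(pfn: u64, offset: u64) -> u64 {
    (pfn << PAGE_SHIFT) | offset
}
```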
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm.rs | 193 ++++++++++++++++++++++++++++++++++++
1 file changed, 193 insertions(+)
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index 7a5dd4220c67..23e731bd7d22 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -2,4 +2,197 @@
//! Memory management subsystems for nova-core.
+#![expect(dead_code)]
+
+/// Implements `From` conversions between a frame-number type ([`Pfn`] or [`Vfn`])
+/// and `Bounded<u64, N>` for bitfield interop.
+///
+/// Each MMU version module should invoke this for the specific bit widths used by that version's
+/// PTE/PDE bitfield definitions.
+macro_rules! impl_frame_number_bounded {
+ ($type:ty, $bits:literal) => {
+ impl From<Bounded<u64, $bits>> for $type {
+ fn from(val: Bounded<u64, $bits>) -> Self {
+ Self::new(val.get())
+ }
+ }
+
+ impl From<$type> for Bounded<u64, $bits> {
+ fn from(v: $type) -> Self {
+ Bounded::from_expr(v.raw() & ::kernel::bits::genmask_u64(0..=($bits - 1)))
+ }
+ }
+ };
+}
+
+/// Implements `From` conversions between [`Pfn`] and `Bounded<u64, N>` for bitfield interop.
+macro_rules! impl_pfn_bounded {
+ ($bits:literal) => {
+ impl_frame_number_bounded!(Pfn, $bits);
+ };
+}
+
pub(crate) mod pramin;
+
+use kernel::{
+ bitfield,
+ num::Bounded,
+ prelude::*,
+ sizes::SZ_4K, //
+};
+
+use crate::num::u64_as_usize;
+
+/// Page size in bytes (4 KiB).
+pub(crate) const PAGE_SIZE: usize = SZ_4K;
+
+bitfield! {
+ /// Physical VRAM address in GPU video memory.
+ pub(crate) struct VramAddress(u64) {
+ /// Offset within 4 KiB page.
+ 11:0 offset;
+ /// Physical frame number.
+ 63:12 frame_number => Pfn;
+ }
+}
+
+impl VramAddress {
+ /// Create a new VRAM address from a raw value.
+ pub(crate) const fn new(addr: u64) -> Self {
+ Self::from_raw(addr)
+ }
+
+ /// Get the raw address value as `usize` (useful for MMIO offsets).
+ pub(crate) const fn raw(&self) -> usize {
+ u64_as_usize(self.into_raw())
+ }
+
+ /// Get the raw address value as `u64`.
+ pub(crate) const fn raw_u64(&self) -> u64 {
+ self.into_raw()
+ }
+}
+
+impl PartialOrd for VramAddress {
+ fn partial_cmp(&self, other: &Self) -> Option<core::cmp::Ordering> {
+ Some(self.cmp(other))
+ }
+}
+
+impl Ord for VramAddress {
+ fn cmp(&self, other: &Self) -> core::cmp::Ordering {
+ self.into_raw().cmp(&other.into_raw())
+ }
+}
+
+impl From<Pfn> for VramAddress {
+ fn from(pfn: Pfn) -> Self {
+ Self::zeroed().with_frame_number(pfn)
+ }
+}
+
+bitfield! {
+ /// Virtual address in GPU address space.
+ pub(crate) struct VirtualAddress(u64) {
+ /// Offset within 4KB page.
+ 11:0 offset;
+ /// Virtual frame number.
+ 63:12 frame_number => Vfn;
+ }
+}
+
+impl VirtualAddress {
+ /// Create a new virtual address from a raw value.
+ pub(crate) const fn new(addr: u64) -> Self {
+ Self::from_raw(addr)
+ }
+
+ /// Get the raw address value as `u64`.
+ pub(crate) const fn raw_u64(&self) -> u64 {
+ self.into_raw()
+ }
+}
+
+impl From<Vfn> for VirtualAddress {
+ fn from(vfn: Vfn) -> Self {
+ Self::zeroed().with_frame_number(vfn)
+ }
+}
+
+/// Physical Frame Number.
+///
+/// Represents a physical page in VRAM.
+#[repr(transparent)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
+pub(crate) struct Pfn(u64);
+
+impl Pfn {
+ /// Create a new PFN from a frame number.
+ pub(crate) const fn new(frame_number: u64) -> Self {
+ Self(frame_number)
+ }
+
+ /// Get the raw frame number.
+ pub(crate) const fn raw(self) -> u64 {
+ self.0
+ }
+}
+
+impl From<VramAddress> for Pfn {
+ fn from(addr: VramAddress) -> Self {
+ addr.frame_number()
+ }
+}
+
+impl From<u64> for Pfn {
+ fn from(val: u64) -> Self {
+ Self(val)
+ }
+}
+
+impl From<Pfn> for u64 {
+ fn from(pfn: Pfn) -> Self {
+ pfn.0
+ }
+}
+
+impl_pfn_bounded!(52);
+
+/// Virtual Frame Number.
+///
+/// Represents a virtual page in GPU address space.
+#[repr(transparent)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
+pub(crate) struct Vfn(u64);
+
+impl Vfn {
+ /// Create a new VFN from a frame number.
+ pub(crate) const fn new(frame_number: u64) -> Self {
+ Self(frame_number)
+ }
+
+ /// Get the raw frame number.
+ pub(crate) const fn raw(self) -> u64 {
+ self.0
+ }
+}
+
+impl From<VirtualAddress> for Vfn {
+ fn from(addr: VirtualAddress) -> Self {
+ addr.frame_number()
+ }
+}
+
+impl From<u64> for Vfn {
+ fn from(val: u64) -> Self {
+ Self(val)
+ }
+}
+
+impl From<Vfn> for u64 {
+ fn from(vfn: Vfn) -> Self {
+ vfn.0
+ }
+}
+
+impl_frame_number_bounded!(Vfn, 52);
--
2.34.1
^ permalink raw reply related [flat|nested] 23+ messages in thread

* [PATCH v12 07/22] gpu: nova-core: mm: Add TLB flush support
2026-04-25 21:14 [PATCH v12 00/22] gpu: nova-core: Add memory management support Joel Fernandes
` (5 preceding siblings ...)
2026-04-25 21:14 ` [PATCH v12 06/22] gpu: nova-core: mm: Add common memory management types Joel Fernandes
@ 2026-04-25 21:14 ` Joel Fernandes
2026-04-25 21:14 ` [PATCH v12 08/22] gpu: nova-core: mm: Add GpuMm centralized memory manager Joel Fernandes
` (14 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Bjorn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Dave Airlie, Daniel Almeida, dri-devel, rust-for-linux, nova-gpu,
Nikola Djukic, David Airlie, Boqun Feng, John Hubbard,
Alistair Popple, Timur Tabi, Edwin Peer, Alexandre Courbot,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, alexeyi, Eliot Courtney, joel, linux-doc,
Joel Fernandes
Add TLB (Translation Lookaside Buffer) flush support for the GPU MMU.
After modifying page table entries, the GPU's TLB must be invalidated
to ensure the new mappings take effect. The Tlb struct provides flush
functionality through BAR0 registers.
The flush operation writes the page directory base address, triggers
an invalidation, and polls for completion with a 2-second timeout.
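The trigger-then-poll pattern described above can be sketched standalone with a mock register in place of the BAR0 MMIO accessors (`MockCtrl`, `flush` and `TRIGGER` are illustrative names; the real code also writes the PDB registers first):

```rust
// Standalone sketch of the write-then-poll TLB flush pattern: set the
// trigger bit, then spin until hardware clears it or a timeout expires.

use std::time::{Duration, Instant};

const TRIGGER: u32 = 1 << 31;

struct MockCtrl {
    reads_until_clear: u32,
    val: u32,
}

impl MockCtrl {
    fn write(&mut self, v: u32) {
        self.val = v;
    }
    fn read(&mut self) -> u32 {
        // Hardware clears the trigger bit once the flush completes.
        if self.reads_until_clear == 0 {
            self.val &= !TRIGGER;
        } else {
            self.reads_until_clear -= 1;
        }
        self.val
    }
}

fn flush(ctrl: &mut MockCtrl, timeout: Duration) -> Result<(), &'static str> {
    ctrl.write(TRIGGER);
    let deadline = Instant::now() + timeout;
    while ctrl.read() & TRIGGER != 0 {
        if Instant::now() > deadline {
            return Err("TLB flush timed out");
        }
    }
    Ok(())
}

fn main() {
    let mut ctrl = MockCtrl { reads_until_clear: 3, val: 0 };
    assert!(flush(&mut ctrl, Duration::from_secs(2)).is_ok());
    assert_eq!(ctrl.val & TRIGGER, 0);
    println!("flush completed");
}
```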
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm.rs | 1 +
drivers/gpu/nova-core/mm/tlb.rs | 130 ++++++++++++++++++++++++++++++++
drivers/gpu/nova-core/regs.rs | 69 ++++++++++++++++-
3 files changed, 199 insertions(+), 1 deletion(-)
create mode 100644 drivers/gpu/nova-core/mm/tlb.rs
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index 23e731bd7d22..2a8b228b1916 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -33,6 +33,7 @@ macro_rules! impl_pfn_bounded {
}
pub(crate) mod pramin;
+pub(super) mod tlb;
use kernel::{
bitfield,
diff --git a/drivers/gpu/nova-core/mm/tlb.rs b/drivers/gpu/nova-core/mm/tlb.rs
new file mode 100644
index 000000000000..53c6fe6084b8
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/tlb.rs
@@ -0,0 +1,130 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! TLB (Translation Lookaside Buffer) flush support for GPU MMU.
+//!
+//! After modifying page table entries, the GPU's TLB must be flushed to
+//! ensure the new mappings take effect. This module provides TLB flush
+//! functionality for virtual memory managers.
+//!
+//! # Examples
+//!
+//! ```ignore
+//! use crate::mm::tlb::Tlb;
+//!
+//! fn page_table_update(
+//! dev: &device::Device<device::Bound>,
+//! tlb: &Tlb,
+//! pdb_addr: VramAddress,
+//! ) -> Result<()> {
+//! // ... modify page tables ...
+//!
+//! // Flush TLB to make changes visible (polls for completion).
+//! tlb.flush(dev, pdb_addr)?;
+//!
+//! Ok(())
+//! }
+//! ```
+
+use kernel::{
+ device,
+ devres::Devres,
+ io::poll::read_poll_timeout,
+ io::Io,
+ new_mutex,
+ prelude::*,
+ sync::{
+ Arc,
+ Mutex, //
+ },
+ time::Delta, //
+};
+
+use crate::{
+ bounded_enum,
+ driver::Bar0,
+ mm::VramAddress,
+ regs, //
+};
+
+bounded_enum! {
+ /// TLB invalidation acknowledgment scope.
+ ///
+ /// Controls how far the hardware waits for the invalidation to propagate
+ /// before clearing the `trigger` bit of `NV_TLB_FLUSH_CTRL`.
+ #[derive(Debug, Copy, Clone, PartialEq, Eq)]
+ pub(crate) enum TlbAckMode with TryFrom<Bounded<u32, 2>> {
+ /// Fire-and-forget: no acknowledgment required.
+ None = 0,
+ /// Wait for acknowledgment from all consumers, including remote GPUs
+ /// reachable over NVLink.
+ ///
+ /// `Globally` is strictly required only when unmapping or tightening
+ /// permissions, because the backing memory may be reassigned after the
+ /// flush returns and a stale TLB entry could let the GPU access freed
+ /// memory. For new mappings or relaxed permissions, a stale entry would
+ /// merely cause a redundant fault and retry, so [`TlbAckMode::None`]
+ /// suffices.
+ Globally = 1,
+ /// Wait for acknowledgment from consumers within the local NVLink
+ /// fabric node only; skip cross-node ack.
+ Intranode = 2,
+ }
+}
+
+/// TLB manager for GPU translation buffer operations.
+#[pin_data]
+pub(crate) struct Tlb {
+ bar: Arc<Devres<Bar0>>,
+ /// TLB flush serialization lock: this lock is designed to be acquired in
+ /// the DMA fence signalling critical path. It must NEVER be held across any
+ /// reclaimable CPU memory allocation, because the memory reclaim path can
+ /// call `dma_fence_wait()` (when implemented), which would deadlock if the lock were held.
+ #[pin]
+ lock: Mutex<()>,
+}
+
+impl Tlb {
+ /// Create a new TLB manager.
+ pub(super) fn new(bar: Arc<Devres<Bar0>>) -> impl PinInit<Self> {
+ pin_init!(Self {
+ bar,
+ lock <- new_mutex!((), "tlb_flush"),
+ })
+ }
+
+ /// Flush the GPU TLB for a specific page directory base.
+ ///
+ /// This invalidates all TLB entries associated with the given PDB address.
+ /// Must be called after modifying page table entries to ensure the GPU sees
+ /// the updated mappings.
+ pub(super) fn flush(
+ &self,
+ dev: &device::Device<device::Bound>,
+ pdb_addr: VramAddress,
+ ) -> Result {
+ let _guard = self.lock.lock();
+ let bar = self.bar.access(dev)?;
+
+ // Write PDB address.
+ bar.write_reg(regs::NV_TLB_FLUSH_PDB_LO::from_pdb_addr(pdb_addr.raw_u64()));
+ bar.write_reg(regs::NV_TLB_FLUSH_PDB_HI::from_pdb_addr(pdb_addr.raw_u64()));
+
+ // Trigger flush.
+ bar.write_reg(
+ regs::NV_TLB_FLUSH_CTRL::zeroed()
+ .with_all_va(true)
+ .with_ack(TlbAckMode::None)
+ .with_trigger(true),
+ );
+
+ // Poll for completion.
+ read_poll_timeout(
+ || Ok(bar.read(regs::NV_TLB_FLUSH_CTRL)),
+ |ctrl: ®s::NV_TLB_FLUSH_CTRL| !ctrl.trigger(),
+ Delta::ZERO,
+ Delta::from_secs(2),
+ )?;
+
+ Ok(())
+ }
+}
diff --git a/drivers/gpu/nova-core/regs.rs b/drivers/gpu/nova-core/regs.rs
index a3ca02345e20..fcbd407fc494 100644
--- a/drivers/gpu/nova-core/regs.rs
+++ b/drivers/gpu/nova-core/regs.rs
@@ -30,7 +30,10 @@
Architecture,
Chipset, //
},
- mm::pramin::Bar0WindowTarget,
+ mm::{
+ pramin::Bar0WindowTarget,
+ tlb::TlbAckMode, //
+ },
num::FromSafeCast,
};
@@ -548,3 +551,67 @@ pub(crate) mod ga100 {
}
}
}
+
+// MMU TLB
+
+register! {
+ /// TLB flush register: PDB address lower bits.
+ pub(crate) NV_TLB_FLUSH_PDB_LO(u32) @ 0x00b830a0 {
+ /// PDB address bits [39:8].
+ 31:0 pdb_lo => u32;
+ }
+
+ /// TLB flush register: PDB address higher bits.
+ pub(crate) NV_TLB_FLUSH_PDB_HI(u32) @ 0x00b830a4 {
+ /// PDB address bits [47:40].
+ 7:0 pdb_hi => u8;
+ }
+
+ /// TLB flush control register.
+ pub(crate) NV_TLB_FLUSH_CTRL(u32) @ 0x00b830b0 {
+ /// Invalidate every VA in the PDB selected by `NV_TLB_FLUSH_PDB_LO/HI`.
+ 0:0 all_va => bool;
+ /// Invalidate TLBs for all PDBs (ignores `NV_TLB_FLUSH_PDB_LO/HI`).
+ 1:1 all_pdb => bool;
+ /// Restrict the flush to the HUB MMU's TLBs; skip broadcasting to the
+ /// per-GPC L2 TLBs.
+ ///
+ /// The GPU MMU has a two-level TLB hierarchy:
+ /// 1. The *HUB MMU* sits at the top and serves memory requests from
+ /// "host-side" engines: the host/channel interface, copy engines,
+ /// display, and BAR1/BAR2 accesses.
+ /// 2. Each GPC (Graphics Processing Cluster — the block that houses
+ /// shader cores / SMs) has its own L2 TLB that serves requests from
+ /// the compute and graphics engines inside the cluster.
+ ///
+ /// When set, only the HUB TLBs are invalidated. This is a performance
+ /// optimization for flushes that only affect HUB-side mappings (e.g.
+ /// BAR1/BAR2 windows), where fanning the invalidation out to every
+ /// GPC's L2 TLB would be wasted work. Must be false when flushing
+ /// mappings that may be cached by compute/graphics engines.
+ 2:2 hubtlb_only => bool;
+ /// Invalidation acknowledgment scope. See [`TlbAckMode`] for details.
+ 8:7 ack ?=> TlbAckMode;
+ /// Write 1 to kick off the flush. Hardware clears this bit when the
+ /// flush completes; reads as 1 while the flush is in progress.
+ 31:31 trigger => bool;
+ }
+}
+
+impl NV_TLB_FLUSH_PDB_LO {
+ /// Create a register value from a PDB address.
+ ///
+ /// Extracts bits [39:8] of the address (i.e. the address shifted right by 8 bits).
+ pub(crate) fn from_pdb_addr(addr: u64) -> Self {
+ Self::zeroed().with_pdb_lo(((addr >> 8) & 0xFFFF_FFFF) as u32)
+ }
+}
+
+impl NV_TLB_FLUSH_PDB_HI {
+ /// Create a register value from a PDB address.
+ ///
+ /// Extracts bits [47:40] of the address (i.e. the address shifted right by 40 bits).
+ pub(crate) fn from_pdb_addr(addr: u64) -> Self {
+ Self::zeroed().with_pdb_hi(((addr >> 40) & 0xFF) as u8)
+ }
+}
--
2.34.1
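As a standalone check of the hi/lo split performed by the `from_pdb_addr` helpers in the regs.rs hunk above (`pdb_lo`/`pdb_hi` mirror the bit manipulation but are illustrative names, not the driver's API):

```rust
// Standalone sketch of the PDB address split across the LO/HI registers:
// bits [39:8] go to the 32-bit LO field, bits [47:40] to the 8-bit HI field.

fn pdb_lo(addr: u64) -> u32 {
    // Bits [39:8] of the PDB address.
    ((addr >> 8) & 0xFFFF_FFFF) as u32
}

fn pdb_hi(addr: u64) -> u8 {
    // Bits [47:40] of the PDB address.
    ((addr >> 40) & 0xFF) as u8
}

fn main() {
    let addr: u64 = 0x0000_AB12_3456_7800;
    assert_eq!(pdb_lo(addr), 0x1234_5678);
    assert_eq!(pdb_hi(addr), 0xAB);
    println!("lo={:#x} hi={:#x}", pdb_lo(addr), pdb_hi(addr));
}
```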
^ permalink raw reply related [flat|nested] 23+ messages in thread

* [PATCH v12 08/22] gpu: nova-core: mm: Add GpuMm centralized memory manager
2026-04-25 21:14 [PATCH v12 00/22] gpu: nova-core: Add memory management support Joel Fernandes
` (6 preceding siblings ...)
2026-04-25 21:14 ` [PATCH v12 07/22] gpu: nova-core: mm: Add TLB flush support Joel Fernandes
@ 2026-04-25 21:14 ` Joel Fernandes
2026-04-25 21:14 ` [PATCH v12 09/22] gpu: nova-core: mm: Add common types for all page table formats Joel Fernandes
` (13 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Bjorn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Dave Airlie, Daniel Almeida, dri-devel, rust-for-linux, nova-gpu,
Nikola Djukic, David Airlie, Boqun Feng, John Hubbard,
Alistair Popple, Timur Tabi, Edwin Peer, Alexandre Courbot,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, alexeyi, Eliot Courtney, joel, linux-doc,
Joel Fernandes
Introduce GpuMm as the centralized GPU memory manager that owns:
- Buddy allocator for VRAM allocation.
- PRAMIN window for direct VRAM access.
- TLB manager for translation buffer operations.
This provides a clean ownership model: GpuMm exposes accessor methods
for its components, which are used for memory management operations.
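Deriving the buddy-allocator parameters from the usable FB region, as `Gpu::new()` does in this patch, can be sketched standalone (`GpuBuddyParams` here is a simplified stand-in with `chunk_size` as a plain `u64` instead of `Alignment`; `params_from_region` is illustrative):

```rust
// Sketch of computing buddy-allocator parameters from the usable
// framebuffer region reported by GSP.

use std::ops::Range;

struct GpuBuddyParams {
    base_offset: u64,
    size: u64,
    chunk_size: u64,
}

fn params_from_region(usable: &Range<u64>) -> GpuBuddyParams {
    GpuBuddyParams {
        base_offset: usable.start,
        size: usable.end - usable.start,
        chunk_size: 4096, // SZ_4K
    }
}

fn main() {
    let p = params_from_region(&(0x1000..0x5000));
    assert_eq!((p.base_offset, p.size, p.chunk_size), (0x1000, 0x4000, 4096));
    println!("base={:#x} size={:#x}", p.base_offset, p.size);
}
```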
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/Kconfig | 1 +
drivers/gpu/nova-core/gpu.rs | 28 +++++++++++
drivers/gpu/nova-core/gsp/commands.rs | 1 -
drivers/gpu/nova-core/mm.rs | 69 ++++++++++++++++++++++++++-
drivers/gpu/nova-core/mm/pramin.rs | 20 ++++----
5 files changed, 108 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/nova-core/Kconfig b/drivers/gpu/nova-core/Kconfig
index a4f2380654e2..6513007bf66f 100644
--- a/drivers/gpu/nova-core/Kconfig
+++ b/drivers/gpu/nova-core/Kconfig
@@ -4,6 +4,7 @@ config NOVA_CORE
depends on PCI
depends on RUST
select AUXILIARY_BUS
+ select GPU_BUDDY
select RUST_FW_LOADER_ABSTRACTIONS
default n
help
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index 675a0676f032..fce05e9d1a89 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -4,10 +4,13 @@
device,
devres::Devres,
fmt,
+ gpu::buddy::GpuBuddyParams,
io::Io,
num::Bounded,
pci,
prelude::*,
+ ptr::Alignment,
+ sizes::SZ_4K,
sync::Arc, //
};
@@ -25,6 +28,7 @@
commands::GetGspStaticInfoReply,
Gsp, //
},
+ mm::GpuMm,
regs,
};
@@ -238,6 +242,9 @@ pub(crate) struct Gpu {
gsp_falcon: Falcon<GspFalcon>,
/// SEC2 falcon instance, used for GSP boot up and cleanup.
sec2_falcon: Falcon<Sec2Falcon>,
+ /// GPU memory manager owning memory management resources.
+ #[pin]
+ mm: GpuMm,
/// GSP runtime data. Temporarily an empty placeholder.
#[pin]
gsp: Gsp,
@@ -276,6 +283,13 @@ pub(crate) fn new<'a>(
gsp_static_info: {
let info = gsp.boot(pdev, bar, spec.chipset, gsp_falcon, sec2_falcon)?;
+ dev_info!(
+ pdev.as_ref(),
+ "Using FB region: {:#x}..{:#x}\n",
+ info.usable_fb_region.start,
+ info.usable_fb_region.end
+ );
+
dev_info!(
pdev.as_ref(),
"Total physical VRAM: {} MiB\n",
@@ -285,6 +299,20 @@ pub(crate) fn new<'a>(
info
},
+ // Create GPU memory manager owning memory management resources.
+ mm <- {
+ let usable_vram = &gsp_static_info.usable_fb_region;
+
+ // PRAMIN covers all physical VRAM (including GSP-reserved areas
+ // above the usable region, e.g. the BAR1 page directory).
+ let pramin_vram_region = 0..gsp_static_info.total_fb_end;
+ GpuMm::new(devres_bar.clone(), pdev.as_ref(), GpuBuddyParams {
+ base_offset: usable_vram.start,
+ size: usable_vram.end - usable_vram.start,
+ chunk_size: Alignment::new::<SZ_4K>(),
+ }, pramin_vram_region)?
+ },
+
bar: devres_bar,
})
}
diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
index 40b4ef72169d..eeecf81a0ffd 100644
--- a/drivers/gpu/nova-core/gsp/commands.rs
+++ b/drivers/gpu/nova-core/gsp/commands.rs
@@ -194,7 +194,6 @@ fn init(&self) -> impl Init<Self::Command, Self::InitError> {
pub(crate) struct GetGspStaticInfoReply {
gpu_name: [u8; 64],
/// Usable FB (VRAM) region for driver memory allocation.
- #[expect(dead_code)]
pub(crate) usable_fb_region: Range<u64>,
/// End of VRAM.
pub(crate) total_fb_end: u64,
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index 2a8b228b1916..8b8a86980bb6 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -37,12 +37,77 @@ macro_rules! impl_pfn_bounded {
use kernel::{
bitfield,
+ device,
+ devres::Devres,
+ gpu::buddy::{
+ GpuBuddy,
+ GpuBuddyParams, //
+ },
num::Bounded,
prelude::*,
- sizes::SZ_4K, //
+ sizes::SZ_4K,
+ sync::Arc, //
};
-use crate::num::u64_as_usize;
+use crate::{
+ driver::Bar0,
+ num::u64_as_usize, //
+};
+
+pub(crate) use tlb::Tlb;
+
+/// GPU Memory Manager - owns all core MM components.
+///
+/// Provides centralized ownership of memory management resources:
+/// - [`GpuBuddy`] allocator for VRAM page table allocation.
+/// - [`pramin::Pramin`] for direct VRAM access.
+/// - [`Tlb`] manager for translation buffer flush operations.
+#[pin_data]
+pub(crate) struct GpuMm {
+ buddy: GpuBuddy,
+ #[pin]
+ pramin: pramin::Pramin,
+ #[pin]
+ tlb: Tlb,
+}
+
+impl GpuMm {
+ /// Create a pin-initializer for `GpuMm`.
+ ///
+ /// `pramin_vram_region` is the full physical VRAM range (including GSP-reserved
+ /// areas). PRAMIN window accesses are validated against this range.
+ pub(crate) fn new(
+ bar: Arc<Devres<Bar0>>,
+ dev: &device::Device<device::Bound>,
+ buddy_params: GpuBuddyParams,
+ pramin_vram_region: core::ops::Range<u64>,
+ ) -> Result<impl PinInit<Self>> {
+ let buddy = GpuBuddy::new(buddy_params)?;
+ let tlb_init = Tlb::new(bar.clone());
+ let pramin_init = pramin::Pramin::new(bar, dev, pramin_vram_region)?;
+
+ Ok(pin_init!(Self {
+ buddy,
+ pramin <- pramin_init,
+ tlb <- tlb_init,
+ }))
+ }
+
+ /// Access the [`GpuBuddy`] allocator.
+ pub(crate) fn buddy(&self) -> &GpuBuddy {
+ &self.buddy
+ }
+
+ /// Access the [`pramin::Pramin`].
+ pub(crate) fn pramin(&self) -> &pramin::Pramin {
+ &self.pramin
+ }
+
+ /// Access the [`Tlb`] manager.
+ pub(crate) fn tlb(&self) -> &Tlb {
+ &self.tlb
+ }
+}
/// Page size in bytes (4 KiB).
pub(crate) const PAGE_SIZE: usize = SZ_4K;
diff --git a/drivers/gpu/nova-core/mm/pramin.rs b/drivers/gpu/nova-core/mm/pramin.rs
index 57b560ae1e85..cc393f1509d7 100644
--- a/drivers/gpu/nova-core/mm/pramin.rs
+++ b/drivers/gpu/nova-core/mm/pramin.rs
@@ -94,12 +94,12 @@
};
use kernel::{
+ device,
devres::Devres,
io::Io,
new_mutex,
num::Bounded,
prelude::*,
- revocable::RevocableGuard,
sizes::{
SZ_1M,
SZ_64K, //
@@ -137,7 +137,7 @@ pub(crate) fn $name(&mut self, vram_offset: usize) -> Result<$ty> {
self.compute_window(vram_offset, ::core::mem::size_of::<$ty>())?;
if let Some(base) = new_base {
- Self::write_window_base(&self.bar, base)?;
+ Self::write_window_base(self.bar, base)?;
*self.state = base;
}
self.bar.$name(bar_offset)
@@ -154,7 +154,7 @@ pub(crate) fn $name(&mut self, vram_offset: usize, value: $ty) -> Result {
self.compute_window(vram_offset, ::core::mem::size_of::<$ty>())?;
if let Some(base) = new_base {
- Self::write_window_base(&self.bar, base)?;
+ Self::write_window_base(self.bar, base)?;
*self.state = base;
}
self.bar.$name(value, bar_offset)
@@ -188,10 +188,11 @@ impl Pramin {
/// `vram_region` specifies the valid VRAM address range.
pub(crate) fn new(
bar: Arc<Devres<Bar0>>,
+ dev: &device::Device<device::Bound>,
vram_region: Range<u64>,
) -> Result<impl PinInit<Self>> {
- let bar_access = bar.try_access().ok_or(ENODEV)?;
- let current_base = Self::read_window_base(&bar_access);
+ let bar_access = bar.access(dev)?;
+ let current_base = Self::read_window_base(bar_access);
Ok(pin_init!(Self {
bar,
@@ -204,8 +205,11 @@ pub(crate) fn new(
///
/// Returns a [`PraminWindow`] guard that provides VRAM read/write accessors.
/// The [`PraminWindow`] is exclusive and only one can exist at a time.
- pub(crate) fn get_window(&self) -> Result<PraminWindow<'_>> {
- let bar = self.bar.try_access().ok_or(ENODEV)?;
+ pub(crate) fn get_window<'a>(
+ &'a self,
+ dev: &'a device::Device<device::Bound>,
+ ) -> Result<PraminWindow<'a>> {
+ let bar = self.bar.access(dev)?;
let state = self.state.lock();
Ok(PraminWindow {
bar,
@@ -231,7 +235,7 @@ fn read_window_base(bar: &Bar0) -> u64 {
/// Only one [`PraminWindow`] can exist at a time per [`Pramin`] instance (enforced by the
/// internal `MutexGuard`).
pub(crate) struct PraminWindow<'a> {
- bar: RevocableGuard<'a, Bar0>,
+ bar: &'a Bar0,
vram_region: Range<u64>,
state: MutexGuard<'a, u64>,
}
--
2.34.1
^ permalink raw reply related [flat|nested] 23+ messages in thread

* [PATCH v12 09/22] gpu: nova-core: mm: Add common types for all page table formats
2026-04-25 21:14 [PATCH v12 00/22] gpu: nova-core: Add memory management support Joel Fernandes
` (7 preceding siblings ...)
2026-04-25 21:14 ` [PATCH v12 08/22] gpu: nova-core: mm: Add GpuMm centralized memory manager Joel Fernandes
@ 2026-04-25 21:14 ` Joel Fernandes
2026-04-25 21:14 ` [PATCH v12 10/22] gpu: nova-core: mm: Add MMU v2 page table types Joel Fernandes
` (12 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Bjorn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Dave Airlie, Daniel Almeida, dri-devel, rust-for-linux, nova-gpu,
Nikola Djukic, David Airlie, Boqun Feng, John Hubbard,
Alistair Popple, Timur Tabi, Edwin Peer, Alexandre Courbot,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, alexeyi, Eliot Courtney, joel, linux-doc,
Joel Fernandes
Add common page table types shared between MMU v2 and v3. These types
are hardware-agnostic and used by both MMU versions.
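The level hierarchy added by this patch can be walked via `next()`; a standalone sketch mirroring the `PageTableLevel` enum (variant names from the patch, bodies simplified):

```rust
// Standalone sketch of the page table level hierarchy: each level yields
// the next one until the deepest level (L5) returns None.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum PageTableLevel {
    Pdb,
    L1,
    L2,
    L3,
    L4,
    L5,
}

impl PageTableLevel {
    const fn next(self) -> Option<PageTableLevel> {
        use PageTableLevel::*;
        match self {
            Pdb => Some(L1),
            L1 => Some(L2),
            L2 => Some(L3),
            L3 => Some(L4),
            L4 => Some(L5),
            L5 => None,
        }
    }
}

fn main() {
    // Walk from the root (PDB) to the deepest level, counting levels.
    let mut level = Some(PageTableLevel::Pdb);
    let mut count = 0;
    while let Some(l) = level {
        count += 1;
        level = l.next();
    }
    assert_eq!(count, 6);
    println!("{count} levels");
}
```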
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm.rs | 1 +
drivers/gpu/nova-core/mm/pagetable.rs | 157 ++++++++++++++++++++++++++
2 files changed, 158 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/pagetable.rs
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index 8b8a86980bb6..045e35c92b78 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -32,6 +32,7 @@ macro_rules! impl_pfn_bounded {
};
}
+pub(super) mod pagetable;
pub(crate) mod pramin;
pub(super) mod tlb;
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
new file mode 100644
index 000000000000..637ff43ea83a
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -0,0 +1,157 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Common page table types shared between MMU v2 and v3.
+//!
+//! This module provides foundational types used by both MMU versions:
+//! - Page table level hierarchy
+//! - Memory aperture types for PDEs and PTEs
+
+#![expect(dead_code)]
+
+use kernel::num::Bounded;
+
+use crate::gpu::Architecture;
+
+/// Extracts the page table index at a given level from a virtual address.
+pub(super) trait VaLevelIndex {
+ /// Return the page table index at `level` for this virtual address.
+ fn level_index(&self, level: u64) -> u64;
+}
+
+/// MMU version enumeration.
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub(crate) enum MmuVersion {
+ /// MMU v2 for Turing/Ampere/Ada.
+ V2,
+ /// MMU v3 for Hopper and later.
+ V3,
+}
+
+impl From<Architecture> for MmuVersion {
+ fn from(arch: Architecture) -> Self {
+ match arch {
+ Architecture::Turing | Architecture::Ampere | Architecture::Ada => Self::V2,
+ // In the future, uncomment the following to support V3.
+ // _ => Self::V3,
+ }
+ }
+}
+
+/// Page Table Level hierarchy for MMU v2/v3.
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub(super) enum PageTableLevel {
+ /// Level 0 - Page Directory Base (root).
+ Pdb,
+ /// Level 1 - Intermediate page directory.
+ L1,
+ /// Level 2 - Intermediate page directory.
+ L2,
+ /// Level 3 - Intermediate page directory or dual PDE (version-dependent).
+ L3,
+ /// Level 4 - PTE level for v2, intermediate page directory for v3.
+ L4,
+ /// Level 5 - PTE level used for MMU v3 only.
+ L5,
+}
+
+impl PageTableLevel {
+ /// Number of entries per page table (512 for 4 KiB pages).
+ pub(super) const ENTRIES_PER_TABLE: usize = 512;
+
+ /// Get the next level in the hierarchy.
+ pub(super) const fn next(&self) -> Option<PageTableLevel> {
+ match self {
+ Self::Pdb => Some(Self::L1),
+ Self::L1 => Some(Self::L2),
+ Self::L2 => Some(Self::L3),
+ Self::L3 => Some(Self::L4),
+ Self::L4 => Some(Self::L5),
+ Self::L5 => None,
+ }
+ }
+
+ /// Convert level to index.
+ pub(super) const fn as_index(&self) -> u64 {
+ match self {
+ Self::Pdb => 0,
+ Self::L1 => 1,
+ Self::L2 => 2,
+ Self::L3 => 3,
+ Self::L4 => 4,
+ Self::L5 => 5,
+ }
+ }
+}
+
+/// Memory aperture for Page Table Entries (`PTE`s).
+///
+/// Determines which memory region the `PTE` points to.
+#[repr(u8)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
+pub(super) enum AperturePte {
+ /// Local video memory (VRAM).
+ #[default]
+ VideoMemory = 0,
+ /// Peer GPU's video memory.
+ PeerMemory = 1,
+ /// System memory with cache coherence.
+ SystemCoherent = 2,
+ /// System memory without cache coherence.
+ SystemNonCoherent = 3,
+}
+
+// TODO[FPRI]: Replace with `#[derive(FromPrimitive)]` when available.
+impl From<Bounded<u64, 2>> for AperturePte {
+ fn from(val: Bounded<u64, 2>) -> Self {
+ match *val {
+ 0 => Self::VideoMemory,
+ 1 => Self::PeerMemory,
+ 2 => Self::SystemCoherent,
+ 3 => Self::SystemNonCoherent,
+ _ => Self::VideoMemory,
+ }
+ }
+}
+
+// TODO[FPRI]: Replace with `#[derive(ToPrimitive)]` when available.
+impl From<AperturePte> for Bounded<u64, 2> {
+ fn from(val: AperturePte) -> Self {
+ Bounded::from_expr(val as u64 & 0x3)
+ }
+}
+
+/// Memory aperture for Page Directory Entries (`PDE`s).
+///
+/// Note: For `PDE`s, `Invalid` (0) means the entry is not valid.
+#[repr(u8)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
+pub(super) enum AperturePde {
+ /// Invalid/unused entry.
+ #[default]
+ Invalid = 0,
+ /// Page table is in video memory.
+ VideoMemory = 1,
+ /// Page table is in system memory with coherence.
+ SystemCoherent = 2,
+ /// Page table is in system memory without coherence.
+ SystemNonCoherent = 3,
+}
+
+// TODO[FPRI]: Replace with `#[derive(FromPrimitive)]` when available.
+impl From<Bounded<u64, 2>> for AperturePde {
+ fn from(val: Bounded<u64, 2>) -> Self {
+ match *val {
+ 1 => Self::VideoMemory,
+ 2 => Self::SystemCoherent,
+ 3 => Self::SystemNonCoherent,
+ _ => Self::Invalid,
+ }
+ }
+}
+
+// TODO[FPRI]: Replace with `#[derive(ToPrimitive)]` when available.
+impl From<AperturePde> for Bounded<u64, 2> {
+ fn from(val: AperturePde) -> Self {
+ Bounded::from_expr(val as u64 & 0x3)
+ }
+}
--
2.34.1
^ permalink raw reply related [flat|nested] 23+ messages in thread

* [PATCH v12 10/22] gpu: nova-core: mm: Add MMU v2 page table types
2026-04-25 21:14 [PATCH v12 00/22] gpu: nova-core: Add memory management support Joel Fernandes
` (8 preceding siblings ...)
2026-04-25 21:14 ` [PATCH v12 09/22] gpu: nova-core: mm: Add common types for all page table formats Joel Fernandes
@ 2026-04-25 21:14 ` Joel Fernandes
2026-04-25 21:14 ` [PATCH v12 11/22] gpu: nova-core: mm: Add MMU v3 " Joel Fernandes
` (11 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Bjorn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Dave Airlie, Daniel Almeida, dri-devel, rust-for-linux, nova-gpu,
Nikola Djukic, David Airlie, Boqun Feng, John Hubbard,
Alistair Popple, Timur Tabi, Edwin Peer, Alexandre Courbot,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, alexeyi, Eliot Courtney, joel, linux-doc,
Joel Fernandes
Add page table entry and directory structures for MMU version 2
used by Turing/Ampere/Ada GPUs.
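The 49-bit virtual-address decode defined by this patch's `VirtualAddressV2` bitfield can be checked standalone (the `bits` helper is illustrative; bit ranges are taken from the patch):

```rust
// Standalone sketch of the MMU v2 virtual-address decode:
// [11:0] offset, [20:12] PT index, [28:21] PDE0, [37:29] PDE1,
// [46:38] PDE2, [48:47] PDE3.

fn bits(va: u64, hi: u32, lo: u32) -> u64 {
    (va >> lo) & ((1u64 << (hi - lo + 1)) - 1)
}

fn main() {
    let va: u64 = 1 << 21; // first entry of PDE0, everything else zero
    assert_eq!(bits(va, 11, 0), 0); // page offset
    assert_eq!(bits(va, 20, 12), 0); // PT index
    assert_eq!(bits(va, 28, 21), 1); // PDE0 index
    assert_eq!(bits(va, 37, 29), 0); // PDE1 index
    println!("pde0 index = {}", bits(va, 28, 21));
}
```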
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/pagetable.rs | 2 +
drivers/gpu/nova-core/mm/pagetable/ver2.rs | 270 +++++++++++++++++++++
2 files changed, 272 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/pagetable/ver2.rs
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
index 637ff43ea83a..f6b184c9b8c8 100644
--- a/drivers/gpu/nova-core/mm/pagetable.rs
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -8,6 +8,8 @@
#![expect(dead_code)]
+pub(super) mod ver2;
+
use kernel::num::Bounded;
use crate::gpu::Architecture;
diff --git a/drivers/gpu/nova-core/mm/pagetable/ver2.rs b/drivers/gpu/nova-core/mm/pagetable/ver2.rs
new file mode 100644
index 000000000000..203dc5d9fdf6
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pagetable/ver2.rs
@@ -0,0 +1,270 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! MMU v2 page table types for Turing, Ampere and Ada GPUs.
+//!
+//! This module defines MMU version 2 specific types (Turing, Ampere and Ada GPUs).
+//!
+//! Bit field layouts derived from the NVIDIA OpenRM documentation:
+//! `open-gpu-kernel-modules/src/common/inc/swref/published/turing/tu102/dev_mmu.h`
+
+#![expect(dead_code)]
+
+use kernel::bitfield;
+use kernel::num::Bounded;
+use pin_init::Zeroable;
+
+use super::{
+ AperturePde,
+ AperturePte,
+ PageTableLevel,
+ VaLevelIndex, //
+};
+use crate::mm::{
+ Pfn,
+ VirtualAddress,
+ VramAddress, //
+};
+
+// Bounded to version 2 Pfn bitfield conversions:
+// 25 bits for video memory frame numbers (bits 32:8).
+impl_pfn_bounded!(25);
+// 46 bits for system memory frame numbers (bits 53:8).
+impl_pfn_bounded!(46);
+
+bitfield! {
+ /// MMU v2 49-bit virtual address layout.
+ pub(super) struct VirtualAddressV2(u64) {
+ /// Page offset [11:0].
+ 11:0 offset;
+ /// PT index [20:12].
+ 20:12 pt_idx;
+ /// PDE0 index [28:21].
+ 28:21 pde0_idx;
+ /// PDE1 index [37:29].
+ 37:29 pde1_idx;
+ /// PDE2 index [46:38].
+ 46:38 pde2_idx;
+ /// PDE3 index [48:47].
+ 48:47 pde3_idx;
+ }
+}
+
+impl VirtualAddressV2 {
+ /// Create a [`VirtualAddressV2`] from a [`VirtualAddress`].
+ pub(super) fn new(va: VirtualAddress) -> Self {
+ Self::from_raw(va.raw_u64())
+ }
+}
+
+impl VaLevelIndex for VirtualAddressV2 {
+ fn level_index(&self, level: u64) -> u64 {
+ match level {
+ 0 => self.pde3_idx(),
+ 1 => self.pde2_idx(),
+ 2 => self.pde1_idx(),
+ 3 => self.pde0_idx(),
+ 4 => self.pt_idx(),
+ _ => 0,
+ }
+ }
+}
+
+/// `PDE` levels for MMU v2 (5-level hierarchy: `PDB` -> `L1` -> `L2` -> `L3` -> `L4`).
+pub(super) const PDE_LEVELS: &[PageTableLevel] = &[
+ PageTableLevel::Pdb,
+ PageTableLevel::L1,
+ PageTableLevel::L2,
+ PageTableLevel::L3,
+];
+
+/// `PTE` level for MMU v2.
+pub(super) const PTE_LEVEL: PageTableLevel = PageTableLevel::L4;
+
+/// Dual `PDE` level for MMU v2 (128-bit entries).
+pub(super) const DUAL_PDE_LEVEL: PageTableLevel = PageTableLevel::L3;
+
+// Page Table Entry (PTE) for MMU v2 - 64-bit entry at level 4.
+bitfield! {
+ /// Page Table Entry for MMU v2.
+ pub(in crate::mm) struct Pte(u64) {
+ /// Entry is valid.
+ 0:0 valid;
+ /// Memory aperture type.
+ 2:1 aperture => AperturePte;
+ /// Volatile (bypass L2 cache).
+ 3:3 volatile;
+ /// Encryption enabled (Confidential Computing).
+ 4:4 encrypted;
+ /// Privileged access only.
+ 5:5 privilege;
+ /// Write protection.
+ 6:6 read_only;
+ /// Atomic operations disabled.
+ 7:7 atomic_disable;
+ /// Frame number for system memory.
+ 53:8 frame_number_sys => Pfn;
+ /// Frame number for video memory.
+ 32:8 frame_number_vid => Pfn;
+ /// Peer GPU ID for peer memory (0-7).
+ 35:33 peer_id;
+ /// Compression tag line bits.
+ 53:36 comptagline;
+ /// Surface kind/format.
+ 63:56 kind;
+ }
+}
+
+impl Pte {
+ /// Create a `PTE` from a `u64` value.
+ pub(super) fn new(val: u64) -> Self {
+ Self::from_raw(val)
+ }
+
+ /// Create a valid `PTE` for video memory.
+ pub(super) fn new_vram(pfn: Pfn, writable: bool) -> Self {
+ Self::zeroed()
+ .with_valid(true)
+ .with_aperture(AperturePte::VideoMemory)
+ .with_frame_number_vid(pfn)
+ .with_read_only(!writable)
+ }
+
+ /// Create an invalid `PTE`.
+ pub(super) fn invalid() -> Self {
+ Self::zeroed()
+ }
+
+ /// Get the frame number based on aperture type.
+ pub(super) fn frame_number(&self) -> Pfn {
+ match self.aperture() {
+ AperturePte::VideoMemory => self.frame_number_vid(),
+ _ => self.frame_number_sys(),
+ }
+ }
+
+ /// Get the raw `u64` value.
+ pub(super) fn raw_u64(&self) -> u64 {
+ self.into_raw()
+ }
+}
+
+// Page Directory Entry (PDE) for MMU v2 - 64-bit entry at levels 0-2.
+bitfield! {
+ /// Page Directory Entry for MMU v2.
+ pub(in crate::mm) struct Pde(u64) {
+ /// Valid bit (inverted logic).
+ 0:0 valid_inverted;
+ /// Memory aperture type.
+ 2:1 aperture => AperturePde;
+ /// Volatile (bypass L2 cache).
+ 3:3 volatile;
+ /// Disable Address Translation Services.
+ 5:5 no_ats;
+ /// Table frame number for system memory.
+ 53:8 table_frame_sys => Pfn;
+ /// Table frame number for video memory.
+ 32:8 table_frame_vid => Pfn;
+ /// Peer GPU ID (0-7).
+ 35:33 peer_id;
+ }
+}
+
+impl Pde {
+ /// Create a `PDE` from a `u64` value.
+ pub(super) fn new(val: u64) -> Self {
+ Self::from_raw(val)
+ }
+
+ /// Create a valid `PDE` pointing to a page table in video memory.
+ pub(super) fn new_vram(table_pfn: Pfn) -> Self {
+ Self::zeroed()
+ .with_valid_inverted(false) // 0 = valid
+ .with_aperture(AperturePde::VideoMemory)
+ .with_table_frame_vid(table_pfn)
+ }
+
+ /// Create an invalid `PDE`.
+ pub(super) fn invalid() -> Self {
+ Self::zeroed()
+ .with_valid_inverted(true)
+ .with_aperture(AperturePde::Invalid)
+ }
+
+ /// Check if this `PDE` is valid.
+ pub(super) fn is_valid(&self) -> bool {
+ !self.valid_inverted().into_bool() && self.aperture() != AperturePde::Invalid
+ }
+
+ /// Get the table frame number based on aperture type.
+ fn table_frame(&self) -> Pfn {
+ match self.aperture() {
+ AperturePde::VideoMemory => self.table_frame_vid(),
+ _ => self.table_frame_sys(),
+ }
+ }
+
+ /// Get the `VRAM` address of the page table.
+ pub(super) fn table_vram_address(&self) -> VramAddress {
+ debug_assert!(
+ self.aperture() == AperturePde::VideoMemory,
+ "table_vram_address called on non-VRAM PDE (aperture: {:?})",
+ self.aperture()
+ );
+ VramAddress::from(self.table_frame_vid())
+ }
+
+ /// Get the raw `u64` value of the `PDE`.
+ pub(super) fn raw_u64(&self) -> u64 {
+ self.into_raw()
+ }
+}
+
+/// Dual `PDE` at Level 3 - 128-bit entry of Large/Small Page Table pointers.
+///
+/// The dual `PDE` supports both large (64KB) and small (4KB) page tables.
+#[repr(C)]
+#[derive(Debug, Clone, Copy)]
+pub(in crate::mm) struct DualPde {
+ /// Large/Big Page Table pointer (lower 64 bits).
+ pub(super) big: Pde,
+ /// Small Page Table pointer (upper 64 bits).
+ pub(super) small: Pde,
+}
+
+impl DualPde {
+ /// Create a dual `PDE` from raw 128-bit value (two `u64`s).
+ pub(super) fn new(big: u64, small: u64) -> Self {
+ Self {
+ big: Pde::new(big),
+ small: Pde::new(small),
+ }
+ }
+
+ /// Create a dual `PDE` with only the small page table pointer set.
+ ///
+ /// Note: The big (LPT) portion is set to 0, not `Pde::invalid()`.
+ /// According to hardware documentation, clearing bit 0 of the 128-bit
+ /// entry makes the PDE behave as a "normal" PDE. Using `Pde::invalid()`
+ /// would set bit 0 (valid_inverted), which breaks page table walking.
+ pub(super) fn new_small(table_pfn: Pfn) -> Self {
+ Self {
+ big: Pde::new(0),
+ small: Pde::new_vram(table_pfn),
+ }
+ }
+
+ /// Check if the small page table pointer is valid.
+ pub(super) fn has_small(&self) -> bool {
+ self.small.is_valid()
+ }
+
+ /// Check if the big page table pointer is valid.
+ fn has_big(&self) -> bool {
+ self.big.is_valid()
+ }
+
+ /// Get the small page table `Pfn`.
+ fn small_pfn(&self) -> Pfn {
+ self.small.table_frame()
+ }
+}
--
2.34.1
* [PATCH v12 11/22] gpu: nova-core: mm: Add MMU v3 page table types
From: Joel Fernandes @ 2026-04-25 21:14 UTC
To: linux-kernel
Add page table entry and directory structures for MMU version 3
used by Hopper and later GPUs.
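As a standalone illustration of the 57-bit MMU v3 address layout this patch encodes in the `VirtualAddressV3` bitfield, the sketch below extracts the per-level indices with plain bit arithmetic. It is not driver code (the driver uses the `bitfield!` macro and `Bounded` types); the helper names are invented for the example, only the bit ranges come from the patch.

```rust
// Extract MMU v3 page-table indices from a 57-bit virtual address,
// using the bit ranges defined in the `VirtualAddressV3` bitfield.
fn bits(va: u64, hi: u32, lo: u32) -> u64 {
    (va >> lo) & ((1u64 << (hi - lo + 1)) - 1)
}

fn v3_indices(va: u64) -> [u64; 6] {
    [
        bits(va, 56, 56), // PDE4 index (root, 1 bit)
        bits(va, 55, 47), // PDE3 index
        bits(va, 46, 38), // PDE2 index
        bits(va, 37, 29), // PDE1 index
        bits(va, 28, 21), // PDE0 index
        bits(va, 20, 12), // PT index
    ]
}

fn main() {
    // A VA with the low bit of every index field set and a zero page offset.
    let va = (1u64 << 56) | (1 << 47) | (1 << 38) | (1 << 29) | (1 << 21) | (1 << 12);
    assert_eq!(v3_indices(va), [1; 6]);
    println!("{:?}", v3_indices(va));
}
```

The same decomposition is what `VaLevelIndex::level_index` performs, with level 0 mapping to the PDE4 root and level 5 to the PT index.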
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/pagetable.rs | 1 +
drivers/gpu/nova-core/mm/pagetable/ver2.rs | 10 +-
drivers/gpu/nova-core/mm/pagetable/ver3.rs | 391 +++++++++++++++++++++
3 files changed, 397 insertions(+), 5 deletions(-)
create mode 100644 drivers/gpu/nova-core/mm/pagetable/ver3.rs
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
index f6b184c9b8c8..9897818b3b07 100644
--- a/drivers/gpu/nova-core/mm/pagetable.rs
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -9,6 +9,7 @@
#![expect(dead_code)]
pub(super) mod ver2;
+pub(super) mod ver3;
use kernel::num::Bounded;
diff --git a/drivers/gpu/nova-core/mm/pagetable/ver2.rs b/drivers/gpu/nova-core/mm/pagetable/ver2.rs
index 203dc5d9fdf6..419ca0e11cd6 100644
--- a/drivers/gpu/nova-core/mm/pagetable/ver2.rs
+++ b/drivers/gpu/nova-core/mm/pagetable/ver2.rs
@@ -59,11 +59,11 @@ pub(super) fn new(va: VirtualAddress) -> Self {
impl VaLevelIndex for VirtualAddressV2 {
fn level_index(&self, level: u64) -> u64 {
match level {
- 0 => self.pde3_idx(),
- 1 => self.pde2_idx(),
- 2 => self.pde1_idx(),
- 3 => self.pde0_idx(),
- 4 => self.pt_idx(),
+ 0 => self.pde3_idx().get(),
+ 1 => self.pde2_idx().get(),
+ 2 => self.pde1_idx().get(),
+ 3 => self.pde0_idx().get(),
+ 4 => self.pt_idx().get(),
_ => 0,
}
}
diff --git a/drivers/gpu/nova-core/mm/pagetable/ver3.rs b/drivers/gpu/nova-core/mm/pagetable/ver3.rs
new file mode 100644
index 000000000000..2f9e762c4667
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pagetable/ver3.rs
@@ -0,0 +1,391 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! MMU v3 page table types for Hopper and later GPUs.
+//!
+//! This module defines MMU version 3 specific types (Hopper and later GPUs).
+//!
+//! Key differences from MMU v2:
+//! - Unified 40-bit address field for all apertures (v2 had separate sys/vid fields).
+//! - PCF (Page Classification Field) replaces separate privilege/RO/atomic/cache bits.
+//! - KIND field is 4 bits (not 8).
+//! - IS_PTE bit in PDE to support large pages directly.
+//! - No COMPTAGLINE field (compression handled differently in v3).
+//! - No separate ENCRYPTED bit.
+//!
+//! Bit field layouts derived from the NVIDIA OpenRM documentation:
+//! `open-gpu-kernel-modules/src/common/inc/swref/published/hopper/gh100/dev_mmu.h`
+
+#![expect(dead_code)]
+
+use kernel::bitfield;
+use kernel::num::Bounded;
+use kernel::prelude::*;
+use pin_init::Zeroable;
+
+use super::{
+ AperturePde,
+ AperturePte,
+ PageTableLevel,
+ VaLevelIndex, //
+};
+use crate::mm::{
+ Pfn,
+ VirtualAddress,
+ VramAddress, //
+};
+
+// Bounded to version 3 Pfn conversion.
+impl_pfn_bounded!(40);
+
+bitfield! {
+ /// MMU v3 57-bit virtual address layout.
+ pub(super) struct VirtualAddressV3(u64) {
+ /// Page offset [11:0].
+ 11:0 offset;
+ /// PT index [20:12].
+ 20:12 pt_idx;
+ /// PDE0 index [28:21].
+ 28:21 pde0_idx;
+ /// PDE1 index [37:29].
+ 37:29 pde1_idx;
+ /// PDE2 index [46:38].
+ 46:38 pde2_idx;
+ /// PDE3 index [55:47].
+ 55:47 pde3_idx;
+ /// PDE4 index [56].
+ 56:56 pde4_idx;
+ }
+}
+
+impl VirtualAddressV3 {
+ /// Create a [`VirtualAddressV3`] from a [`VirtualAddress`].
+ pub(super) fn new(va: VirtualAddress) -> Self {
+ Self::from_raw(va.raw_u64())
+ }
+}
+
+impl VaLevelIndex for VirtualAddressV3 {
+ fn level_index(&self, level: u64) -> u64 {
+ match level {
+ 0 => self.pde4_idx().get(),
+ 1 => self.pde3_idx().get(),
+ 2 => self.pde2_idx().get(),
+ 3 => self.pde1_idx().get(),
+ 4 => self.pde0_idx().get(),
+ 5 => self.pt_idx().get(),
+ _ => 0,
+ }
+ }
+}
+
+/// PDE levels for MMU v3 (6-level hierarchy).
+pub(super) const PDE_LEVELS: &[PageTableLevel] = &[
+ PageTableLevel::Pdb,
+ PageTableLevel::L1,
+ PageTableLevel::L2,
+ PageTableLevel::L3,
+ PageTableLevel::L4,
+];
+
+/// PTE level for MMU v3.
+pub(super) const PTE_LEVEL: PageTableLevel = PageTableLevel::L5;
+
+/// Dual PDE level for MMU v3 (128-bit entries).
+pub(super) const DUAL_PDE_LEVEL: PageTableLevel = PageTableLevel::L4;
+
+bitfield! {
+ /// Page Classification Field for PTEs (5 bits) in MMU v3.
+ pub(in crate::mm) struct PtePcf(u8) {
+ /// Bypass L2 cache (0=cached, 1=bypass).
+ 0:0 uncached;
+ /// Access counting disabled (0=enabled, 1=disabled).
+ 1:1 acd;
+ /// Read-only access (0=read-write, 1=read-only).
+ 2:2 read_only;
+ /// Atomics disabled (0=enabled, 1=disabled).
+ 3:3 no_atomic;
+ /// Privileged access only (0=regular, 1=privileged).
+ 4:4 privileged;
+ }
+}
+
+impl PtePcf {
+ /// Create PCF for read-write mapping (cached, no atomics, regular mode).
+ fn rw() -> Self {
+ Self::zeroed().with_no_atomic(true)
+ }
+
+ /// Create PCF for read-only mapping (cached, no atomics, regular mode).
+ fn ro() -> Self {
+ Self::zeroed().with_read_only(true).with_no_atomic(true)
+ }
+
+ /// Get the raw `u8` value.
+ fn raw_u8(&self) -> u8 {
+ self.into_raw()
+ }
+}
+
+impl From<Bounded<u64, 5>> for PtePcf {
+ fn from(val: Bounded<u64, 5>) -> Self {
+ Self::from_raw(u8::from(val))
+ }
+}
+
+impl From<PtePcf> for Bounded<u64, 5> {
+ fn from(pcf: PtePcf) -> Self {
+ Bounded::from_expr(u64::from(pcf.into_raw()) & 0x1F)
+ }
+}
+
+bitfield! {
+ /// Page Classification Field for PDEs (3 bits) in MMU v3.
+ ///
+ /// Controls Address Translation Services (ATS) and caching.
+ pub(in crate::mm) struct PdePcf(u8) {
+ /// Bypass L2 cache (0=cached, 1=bypass).
+ 0:0 uncached;
+ /// ATS disabled (0=enabled, 1=disabled).
+ 1:1 no_ats;
+ }
+}
+
+impl PdePcf {
+ /// Create PCF for cached mapping with ATS enabled (default).
+ fn cached() -> Self {
+ Self::zeroed()
+ }
+
+ /// Get the raw `u8` value.
+ fn raw_u8(&self) -> u8 {
+ self.into_raw()
+ }
+}
+
+impl From<Bounded<u64, 3>> for PdePcf {
+ fn from(val: Bounded<u64, 3>) -> Self {
+ Self::from_raw(u8::from(val))
+ }
+}
+
+impl From<PdePcf> for Bounded<u64, 3> {
+ fn from(pcf: PdePcf) -> Self {
+ Bounded::from_expr(u64::from(pcf.into_raw()) & 0x7)
+ }
+}
+
+bitfield! {
+ /// Page Table Entry for MMU v3.
+ pub(in crate::mm) struct Pte(u64) {
+ /// Entry is valid.
+ 0:0 valid;
+ /// Memory aperture type.
+ 2:1 aperture => AperturePte;
+ /// Page Classification Field.
+ 7:3 pcf => PtePcf;
+ /// Surface kind (4 bits, 0x0=pitch, 0xF=invalid).
+ 11:8 kind;
+ /// Physical frame number (for all apertures).
+ 51:12 frame_number => Pfn;
+ /// Peer GPU ID for peer memory (0-7).
+ 63:61 peer_id;
+ }
+}
+
+impl Pte {
+ /// Create a PTE from a `u64` value.
+ pub(super) fn new(val: u64) -> Self {
+ Self::from_raw(val)
+ }
+
+ /// Create a valid PTE for video memory.
+ pub(super) fn new_vram(frame: Pfn, writable: bool) -> Self {
+ let pcf = if writable { PtePcf::rw() } else { PtePcf::ro() };
+ Self::zeroed()
+ .with_valid(true)
+ .with_aperture(AperturePte::VideoMemory)
+ .with_pcf(pcf)
+ .with_frame_number(frame)
+ }
+
+ /// Create an invalid PTE.
+ pub(super) fn invalid() -> Self {
+ Self::zeroed()
+ }
+
+ /// Get the raw `u64` value.
+ pub(super) fn raw_u64(&self) -> u64 {
+ self.into_raw()
+ }
+}
+
+bitfield! {
+ /// Page Directory Entry for MMU v3 (Hopper+).
+ ///
+ /// Note: v3 uses a unified 40-bit address field (v2 had separate sys/vid address fields).
+ pub(in crate::mm) struct Pde(u64) {
+ /// Entry is a PTE (0=PDE, 1=large page PTE).
+ 0:0 is_pte;
+ /// Memory aperture type.
+ 2:1 aperture => AperturePde;
+ /// Page Classification Field (3 bits for PDE).
+ 5:3 pcf => PdePcf;
+ /// Table frame number (40-bit unified address).
+ 51:12 table_frame => Pfn;
+ }
+}
+
+impl Pde {
+ /// Create a PDE from a `u64` value.
+ pub(super) fn new(val: u64) -> Self {
+ Self::from_raw(val)
+ }
+
+ /// Create a valid PDE pointing to a page table in video memory.
+ pub(super) fn new_vram(table_pfn: Pfn) -> Self {
+ Self::zeroed()
+ .with_is_pte(false)
+ .with_aperture(AperturePde::VideoMemory)
+ .with_table_frame(table_pfn)
+ }
+
+ /// Create an invalid PDE.
+ pub(super) fn invalid() -> Self {
+ Self::zeroed().with_aperture(AperturePde::Invalid)
+ }
+
+ /// Check if this PDE is valid.
+ pub(super) fn is_valid(&self) -> bool {
+ self.aperture() != AperturePde::Invalid
+ }
+
+ /// Get the VRAM address of the page table.
+ pub(super) fn table_vram_address(&self) -> VramAddress {
+ debug_assert!(
+ self.aperture() == AperturePde::VideoMemory,
+ "table_vram_address called on non-VRAM PDE (aperture: {:?})",
+ self.aperture()
+ );
+ VramAddress::from(self.table_frame())
+ }
+
+ /// Get the raw `u64` value.
+ pub(super) fn raw_u64(&self) -> u64 {
+ self.into_raw()
+ }
+}
+
+bitfield! {
+ /// Big Page Table pointer in Dual PDE (MMU v3).
+ ///
+ /// 64-bit lower word of the 128-bit Dual PDE.
+ pub(super) struct DualPdeBig(u64) {
+ /// Entry is a PTE (for large pages).
+ 0:0 is_pte;
+ /// Memory aperture type.
+ 2:1 aperture => AperturePde;
+ /// Page Classification Field.
+ 5:3 pcf => PdePcf;
+ /// Table frame (table address 256-byte aligned).
+ 51:8 table_frame;
+ }
+}
+
+impl DualPdeBig {
+ /// Create a big page table pointer from a `u64` value.
+ fn new(val: u64) -> Self {
+ Self::from_raw(val)
+ }
+
+ /// Create an invalid big page table pointer.
+ fn invalid() -> Self {
+ Self::zeroed().with_aperture(AperturePde::Invalid)
+ }
+
+ /// Create a valid big PDE pointing to a page table in video memory.
+ fn new_vram(table_addr: VramAddress) -> Result<Self> {
+ // Big page table addresses must be 256-byte aligned (shift 8).
+ if table_addr.raw_u64() & 0xFF != 0 {
+ return Err(EINVAL);
+ }
+
+ let table_frame = Bounded::from_expr(table_addr.raw_u64() >> 8);
+ Ok(Self::zeroed()
+ .with_is_pte(false)
+ .with_aperture(AperturePde::VideoMemory)
+ .with_table_frame(table_frame))
+ }
+
+ /// Check if this big PDE is valid.
+ fn is_valid(&self) -> bool {
+ self.aperture() != AperturePde::Invalid
+ }
+
+ /// Get the VRAM address of the big page table.
+ fn table_vram_address(&self) -> VramAddress {
+ debug_assert!(
+ self.aperture() == AperturePde::VideoMemory,
+ "table_vram_address called on non-VRAM DualPdeBig (aperture: {:?})",
+ self.aperture()
+ );
+ VramAddress::new(self.table_frame().get() << 8)
+ }
+
+ /// Get the raw `u64` value.
+ pub(super) fn raw_u64(&self) -> u64 {
+ self.into_raw()
+ }
+}
+
+/// Dual PDE at Level 4 for MMU v3 - 128-bit entry.
+///
+/// Contains both big (64KB) and small (4KB) page table pointers:
+/// - Lower 64 bits: Big Page Table pointer.
+/// - Upper 64 bits: Small Page Table pointer.
+///
+/// ## Note
+///
+/// The big and small page table pointers have different address layouts:
+/// - Big address = field value << 8 (256-byte alignment).
+/// - Small address = field value << 12 (4KB alignment).
+///
+/// This is why `DualPdeBig` is a separate type from `Pde`.
+#[repr(C)]
+#[derive(Debug, Clone, Copy)]
+pub(in crate::mm) struct DualPde {
+ /// Big Page Table pointer.
+ pub(super) big: DualPdeBig,
+ /// Small Page Table pointer.
+ pub(super) small: Pde,
+}
+
+// SAFETY: Both `DualPdeBig` and `Pde` fields are `Zeroable` (bitfield types are Zeroable).
+unsafe impl Zeroable for DualPde {}
+
+impl DualPde {
+ /// Create a dual PDE from raw 128-bit value (two `u64`s).
+ pub(super) fn new(big: u64, small: u64) -> Self {
+ Self {
+ big: DualPdeBig::new(big),
+ small: Pde::new(small),
+ }
+ }
+
+ /// Create a dual PDE with only the small page table pointer set.
+ pub(super) fn new_small(table_pfn: Pfn) -> Self {
+ Self {
+ big: DualPdeBig::invalid(),
+ small: Pde::new_vram(table_pfn),
+ }
+ }
+
+ /// Check if the small page table pointer is valid.
+ pub(super) fn has_small(&self) -> bool {
+ self.small.is_valid()
+ }
+
+ /// Check if the big page table pointer is valid.
+ fn has_big(&self) -> bool {
+ self.big.is_valid()
+ }
+}
--
2.34.1
* [PATCH v12 12/22] gpu: nova-core: mm: Add page table entry operation traits
From: Joel Fernandes @ 2026-04-25 21:14 UTC
To: linux-kernel
Introduce trait-based abstractions for GPU page table entries: PteOps,
PdeOps, and DualPdeOps, along with the MmuConfig trait that ties them
together with version-specific constants.
Refactor the ver2 and ver3 page-table modules to implement these traits
and expose the shared entry/PDE/PTE operations uniformly.
Dispatch happens at compile time through the MmuV2 and MmuV3 marker
structs, so version-specific code is selected without runtime overhead
and without wrapper enums.
This enables version-agnostic page table operations while keeping
version-specific implementation details encapsulated in the ver2 and
ver3 modules.
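The dispatch pattern can be sketched outside the driver as follows. This is a simplified standalone model, not the patch's actual types: the trait carries only an entries-per-level table, and the default method mirrors the bottom-up accumulation of `pt_pages_upper_bound` from the patch. Monomorphization over the marker structs selects the version-specific constants at compile time, with no runtime branching or wrapper enum.

```rust
// Simplified model of compile-time MMU dispatch via a config trait
// and per-version marker structs.
trait MmuConfig {
    // Entries per page-table page, root level first, PTE level last.
    const LEVELS: &'static [usize];

    // Upper bound on page-table pages needed to map `num_virt_pages`,
    // accumulated from the PTE level up to the root.
    fn pt_pages_upper_bound(num_virt_pages: usize) -> usize {
        let mut total = 0;
        let mut pages = num_virt_pages;
        for &epp in Self::LEVELS.iter().rev() {
            pages = pages.div_ceil(epp);
            total += pages;
        }
        total
    }
}

struct MmuV2;
impl MmuConfig for MmuV2 {
    // PD3 root (2 bits), PD2, PD1, dual PD0 (8 bits), PT.
    const LEVELS: &'static [usize] = &[4, 512, 512, 256, 512];
}

struct MmuV3;
impl MmuConfig for MmuV3 {
    // PDE4 root (1 bit), PDE3, PDE2, PDE1, dual PDE0, PT.
    const LEVELS: &'static [usize] = &[2, 512, 512, 512, 256, 512];
}

fn main() {
    // Mapping a single 4K page still requires one table page per level.
    assert_eq!(MmuV2::pt_pages_upper_bound(1), 5);
    assert_eq!(MmuV3::pt_pages_upper_bound(1), 6);
}
```

Callers generic over `C: MmuConfig` get the right constants and entry types resolved statically, which is the property the commit message describes.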
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/pagetable.rs | 223 +++++++++++++++++++++
drivers/gpu/nova-core/mm/pagetable/ver2.rs | 150 ++++++++------
drivers/gpu/nova-core/mm/pagetable/ver3.rs | 120 +++++++----
3 files changed, 396 insertions(+), 97 deletions(-)
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
index 9897818b3b07..764b9e71ae41 100644
--- a/drivers/gpu/nova-core/mm/pagetable.rs
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -14,6 +14,13 @@
use kernel::num::Bounded;
use crate::gpu::Architecture;
+use crate::mm::{
+ pramin,
+ Pfn,
+ VirtualAddress,
+ VramAddress, //
+};
+use kernel::prelude::*;
/// Extracts the page table index at a given level from a virtual address.
pub(super) trait VaLevelIndex {
@@ -86,6 +93,222 @@ pub(super) const fn as_index(&self) -> u64 {
}
}
+// Trait abstractions for page table operations.
+
+/// Operations on Page Table Entries (`PTE`s).
+pub(super) trait PteOps: Copy + core::fmt::Debug {
+ /// Create a `PTE` from a raw `u64` value.
+ fn new(val: u64) -> Self;
+
+ /// Create an invalid `PTE`.
+ fn invalid() -> Self;
+
+ /// Create a valid `PTE` for video memory.
+ fn new_vram(pfn: Pfn, writable: bool) -> Self;
+
+ /// Check if this `PTE` is valid.
+ fn is_valid(&self) -> bool;
+
+ /// Get the physical frame number.
+ fn frame_number(&self) -> Pfn;
+
+ /// Get the raw `u64` value.
+ fn raw_u64(&self) -> u64;
+
+ /// Read a `PTE` from VRAM.
+ fn read(window: &mut pramin::PraminWindow<'_>, addr: VramAddress) -> Result<Self> {
+ let val = window.try_read64(addr.raw())?;
+ Ok(Self::new(val))
+ }
+
+ /// Write this `PTE` to VRAM.
+ fn write(&self, window: &mut pramin::PraminWindow<'_>, addr: VramAddress) -> Result {
+ window.try_write64(addr.raw(), self.raw_u64())
+ }
+}
+
+/// Operations on Page Directory Entries (`PDE`s).
+pub(super) trait PdeOps: Copy + core::fmt::Debug {
+ /// Create a `PDE` from a raw `u64` value.
+ fn new(val: u64) -> Self;
+
+ /// Create a valid `PDE` pointing to a page table in video memory.
+ fn new_vram(table_pfn: Pfn) -> Self;
+
+ /// Create an invalid `PDE`.
+ fn invalid() -> Self;
+
+ /// Check if this `PDE` is valid.
+ fn is_valid(&self) -> bool;
+
+ /// Get the memory aperture of this `PDE`.
+ fn aperture(&self) -> AperturePde;
+
+ /// Get the VRAM address of the page table.
+ fn table_vram_address(&self) -> VramAddress;
+
+ /// Get the raw `u64` value.
+ fn raw_u64(&self) -> u64;
+
+ /// Read a `PDE` from VRAM.
+ fn read(window: &mut pramin::PraminWindow<'_>, addr: VramAddress) -> Result<Self> {
+ let val = window.try_read64(addr.raw())?;
+ Ok(Self::new(val))
+ }
+
+ /// Write this `PDE` to VRAM.
+ fn write(&self, window: &mut pramin::PraminWindow<'_>, addr: VramAddress) -> Result {
+ window.try_write64(addr.raw(), self.raw_u64())
+ }
+
+ /// Check if this `PDE` is valid and points to video memory.
+ fn is_valid_vram(&self) -> bool {
+ self.is_valid() && self.aperture() == AperturePde::VideoMemory
+ }
+}
+
+/// Operations on Dual Page Directory Entries (128-bit `DualPde`s).
+pub(super) trait DualPdeOps: Copy + core::fmt::Debug {
+ /// Create a `DualPde` from raw 128-bit value (two `u64`s).
+ fn new(big: u64, small: u64) -> Self;
+
+ /// Create a `DualPde` with only the small page table pointer set.
+ fn new_small(table_pfn: Pfn) -> Self;
+
+ /// Check if the small page table pointer is valid.
+ fn has_small(&self) -> bool;
+
+ /// Get the small page table VRAM address.
+ fn small_vram_address(&self) -> VramAddress;
+
+ /// Get the raw `u64` value of the big PDE.
+ fn big_raw_u64(&self) -> u64;
+
+ /// Get the raw `u64` value of the small PDE.
+ fn small_raw_u64(&self) -> u64;
+
+ /// Read a dual PDE (128-bit) from VRAM.
+ fn read(window: &mut pramin::PraminWindow<'_>, addr: VramAddress) -> Result<Self> {
+ let lo = window.try_read64(addr.raw())?;
+ let hi = window.try_read64(addr.raw() + 8)?;
+ Ok(Self::new(lo, hi))
+ }
+
+ /// Write this dual PDE (128-bit) to VRAM.
+ fn write(&self, window: &mut pramin::PraminWindow<'_>, addr: VramAddress) -> Result {
+ window.try_write64(addr.raw(), self.big_raw_u64())?;
+ window.try_write64(addr.raw() + 8, self.small_raw_u64())
+ }
+}
+
+/// MMU configuration trait -- encodes version-specific constants and types.
+pub(super) trait MmuConfig: 'static {
+ /// Page Table Entry type.
+ type Pte: PteOps;
+ /// Page Directory Entry type.
+ type Pde: PdeOps;
+ /// Dual Page Directory Entry type (128-bit).
+ type DualPde: DualPdeOps;
+
+ /// PDE levels (excluding PTE level) for page table walking.
+ const PDE_LEVELS: &'static [PageTableLevel];
+ /// PTE level for this MMU version.
+ const PTE_LEVEL: PageTableLevel;
+ /// Dual PDE level (128-bit entries) for this MMU version.
+ const DUAL_PDE_LEVEL: PageTableLevel;
+
+ /// Get the number of entries per page table page for a given level.
+ fn entries_per_page(level: PageTableLevel) -> usize;
+
+ /// Extract the page table index at `level` from `va`.
+ fn level_index(va: VirtualAddress, level: u64) -> u64;
+
+ /// Get the entry size in bytes for a given level.
+ fn entry_size(level: PageTableLevel) -> usize {
+ if level == Self::DUAL_PDE_LEVEL {
+ 16 // 128-bit dual PDE
+ } else {
+ 8 // 64-bit PDE/PTE
+ }
+ }
+
+ /// Compute upper bound on page table pages needed for `num_virt_pages`.
+ ///
+ /// Walks from PTE level up through PDE levels, accumulating the tree.
+ fn pt_pages_upper_bound(num_virt_pages: usize) -> usize {
+ let mut total = 0;
+
+ // PTE pages at the leaf level.
+ let pte_epp = Self::entries_per_page(Self::PTE_LEVEL);
+ let mut pages_at_level = num_virt_pages.div_ceil(pte_epp);
+ total += pages_at_level;
+
+ // Walk PDE levels bottom-up (reverse of PDE_LEVELS).
+ for &level in Self::PDE_LEVELS.iter().rev() {
+ let epp = Self::entries_per_page(level);
+
+ // How many pages at this level do we need to point to
+ // the previous pages_at_level?
+ pages_at_level = pages_at_level.div_ceil(epp);
+ total += pages_at_level;
+ }
+
+ total
+ }
+}
+
+/// Marker struct for MMU v2 (Turing/Ampere/Ada).
+pub(super) struct MmuV2;
+
+impl MmuConfig for MmuV2 {
+ type Pte = ver2::Pte;
+ type Pde = ver2::Pde;
+ type DualPde = ver2::DualPde;
+
+ const PDE_LEVELS: &'static [PageTableLevel] = ver2::PDE_LEVELS;
+ const PTE_LEVEL: PageTableLevel = ver2::PTE_LEVEL;
+ const DUAL_PDE_LEVEL: PageTableLevel = ver2::DUAL_PDE_LEVEL;
+
+ fn entries_per_page(level: PageTableLevel) -> usize {
+ // TODO: Calculate these values from the bitfield dynamically
+ // instead of hardcoding them.
+ match level {
+ PageTableLevel::Pdb => 4, // PD3 root: bits [48:47] = 2 bits
+ PageTableLevel::L3 => 256, // PD0 dual: bits [28:21] = 8 bits
+ _ => 512, // PD2, PD1, PT: 9 bits each
+ }
+ }
+
+ fn level_index(va: VirtualAddress, level: u64) -> u64 {
+ ver2::VirtualAddressV2::new(va).level_index(level)
+ }
+}
+
+/// Marker struct for MMU v3 (Hopper and later).
+pub(super) struct MmuV3;
+
+impl MmuConfig for MmuV3 {
+ type Pte = ver3::Pte;
+ type Pde = ver3::Pde;
+ type DualPde = ver3::DualPde;
+
+ const PDE_LEVELS: &'static [PageTableLevel] = ver3::PDE_LEVELS;
+ const PTE_LEVEL: PageTableLevel = ver3::PTE_LEVEL;
+ const DUAL_PDE_LEVEL: PageTableLevel = ver3::DUAL_PDE_LEVEL;
+
+ fn entries_per_page(level: PageTableLevel) -> usize {
+ match level {
+ PageTableLevel::Pdb => 2, // PDE4 root: bit [56] = 1 bit, 2 entries
+ PageTableLevel::L4 => 256, // PDE0 dual: bits [28:21] = 8 bits
+ _ => 512, // PDE3, PDE2, PDE1, PT: 9 bits each
+ }
+ }
+
+ fn level_index(va: VirtualAddress, level: u64) -> u64 {
+ ver3::VirtualAddressV3::new(va).level_index(level)
+ }
+}
+
/// Memory aperture for Page Table Entries (`PTE`s).
///
/// Determines which memory region the `PTE` points to.
diff --git a/drivers/gpu/nova-core/mm/pagetable/ver2.rs b/drivers/gpu/nova-core/mm/pagetable/ver2.rs
index 419ca0e11cd6..66fd8c763146 100644
--- a/drivers/gpu/nova-core/mm/pagetable/ver2.rs
+++ b/drivers/gpu/nova-core/mm/pagetable/ver2.rs
@@ -16,7 +16,10 @@
use super::{
AperturePde,
AperturePte,
+ DualPdeOps,
PageTableLevel,
+ PdeOps,
+ PteOps,
VaLevelIndex, //
};
use crate::mm::{
@@ -116,12 +119,12 @@ pub(in crate::mm) struct Pte(u64) {
impl Pte {
/// Create a `PTE` from a `u64` value.
- pub(super) fn new(val: u64) -> Self {
+ pub(super) fn new_raw(val: u64) -> Self {
Self::from_raw(val)
}
/// Create a valid `PTE` for video memory.
- pub(super) fn new_vram(pfn: Pfn, writable: bool) -> Self {
+ fn new_vram_inner(pfn: Pfn, writable: bool) -> Self {
Self::zeroed()
.with_valid(true)
.with_aperture(AperturePte::VideoMemory)
@@ -129,21 +132,37 @@ pub(super) fn new_vram(pfn: Pfn, writable: bool) -> Self {
.with_read_only(!writable)
}
- /// Create an invalid `PTE`.
- pub(super) fn invalid() -> Self {
- Self::zeroed()
- }
-
/// Get the frame number based on aperture type.
- pub(super) fn frame_number(&self) -> Pfn {
+ fn frame_number_by_aperture(&self) -> Pfn {
match self.aperture() {
AperturePte::VideoMemory => self.frame_number_vid(),
_ => self.frame_number_sys(),
}
}
+}
- /// Get the raw `u64` value.
- pub(super) fn raw_u64(&self) -> u64 {
+impl PteOps for Pte {
+ fn new(val: u64) -> Self {
+ Self::from_raw(val)
+ }
+
+ fn invalid() -> Self {
+ Self::zeroed()
+ }
+
+ fn new_vram(pfn: Pfn, writable: bool) -> Self {
+ Self::new_vram_inner(pfn, writable)
+ }
+
+ fn is_valid(&self) -> bool {
+ self.valid().into_bool()
+ }
+
+ fn frame_number(&self) -> Pfn {
+ self.frame_number_by_aperture()
+ }
+
+ fn raw_u64(&self) -> u64 {
self.into_raw()
}
}
@@ -171,30 +190,18 @@ pub(in crate::mm) struct Pde(u64) {
impl Pde {
/// Create a `PDE` from a `u64` value.
- pub(super) fn new(val: u64) -> Self {
+ pub(super) fn new_raw(val: u64) -> Self {
Self::from_raw(val)
}
/// Create a valid `PDE` pointing to a page table in video memory.
- pub(super) fn new_vram(table_pfn: Pfn) -> Self {
+ fn new_vram_inner(table_pfn: Pfn) -> Self {
Self::zeroed()
.with_valid_inverted(false) // 0 = valid
.with_aperture(AperturePde::VideoMemory)
.with_table_frame_vid(table_pfn)
}
- /// Create an invalid `PDE`.
- pub(super) fn invalid() -> Self {
- Self::zeroed()
- .with_valid_inverted(true)
- .with_aperture(AperturePde::Invalid)
- }
-
- /// Check if this `PDE` is valid.
- pub(super) fn is_valid(&self) -> bool {
- !self.valid_inverted().into_bool() && self.aperture() != AperturePde::Invalid
- }
-
/// Get the table frame number based on aperture type.
fn table_frame(&self) -> Pfn {
match self.aperture() {
@@ -202,19 +209,42 @@ fn table_frame(&self) -> Pfn {
_ => self.table_frame_sys(),
}
}
+}
- /// Get the `VRAM` address of the page table.
- pub(super) fn table_vram_address(&self) -> VramAddress {
+impl PdeOps for Pde {
+ fn new(val: u64) -> Self {
+ Self::from_raw(val)
+ }
+
+ fn new_vram(table_pfn: Pfn) -> Self {
+ Self::new_vram_inner(table_pfn)
+ }
+
+ fn invalid() -> Self {
+ Self::zeroed()
+ .with_valid_inverted(true)
+ .with_aperture(AperturePde::Invalid)
+ }
+
+ fn is_valid(&self) -> bool {
+ !self.valid_inverted().into_bool() && self.aperture() != AperturePde::Invalid
+ }
+
+ fn aperture(&self) -> AperturePde {
+ // Delegate to bitfield getter (takes self by value, Copy).
+ Pde::aperture(*self)
+ }
+
+ fn table_vram_address(&self) -> VramAddress {
debug_assert!(
- self.aperture() == AperturePde::VideoMemory,
+ Pde::aperture(*self) == AperturePde::VideoMemory,
"table_vram_address called on non-VRAM PDE (aperture: {:?})",
- self.aperture()
+ Pde::aperture(*self)
);
VramAddress::from(self.table_frame_vid())
}
- /// Get the raw `u64` value of the `PDE`.
- pub(super) fn raw_u64(&self) -> u64 {
+ fn raw_u64(&self) -> u64 {
self.into_raw()
}
}
@@ -232,35 +262,9 @@ pub(in crate::mm) struct DualPde {
}
impl DualPde {
- /// Create a dual `PDE` from raw 128-bit value (two `u64`s).
- pub(super) fn new(big: u64, small: u64) -> Self {
- Self {
- big: Pde::new(big),
- small: Pde::new(small),
- }
- }
-
- /// Create a dual `PDE` with only the small page table pointer set.
- ///
- /// Note: The big (LPT) portion is set to 0, not `Pde::invalid()`.
- /// According to hardware documentation, clearing bit 0 of the 128-bit
- /// entry makes the PDE behave as a "normal" PDE. Using `Pde::invalid()`
- /// would set bit 0 (valid_inverted), which breaks page table walking.
- pub(super) fn new_small(table_pfn: Pfn) -> Self {
- Self {
- big: Pde::new(0),
- small: Pde::new_vram(table_pfn),
- }
- }
-
- /// Check if the small page table pointer is valid.
- pub(super) fn has_small(&self) -> bool {
- self.small.is_valid()
- }
-
/// Check if the big page table pointer is valid.
fn has_big(&self) -> bool {
- self.big.is_valid()
+ PdeOps::is_valid(&self.big)
}
/// Get the small page table `Pfn`.
@@ -268,3 +272,35 @@ fn small_pfn(&self) -> Pfn {
self.small.table_frame()
}
}
+
+impl DualPdeOps for DualPde {
+ fn new(big: u64, small: u64) -> Self {
+ Self {
+ big: PdeOps::new(big),
+ small: PdeOps::new(small),
+ }
+ }
+
+ fn new_small(table_pfn: Pfn) -> Self {
+ Self {
+ big: PdeOps::new(0),
+ small: PdeOps::new_vram(table_pfn),
+ }
+ }
+
+ fn has_small(&self) -> bool {
+ PdeOps::is_valid(&self.small)
+ }
+
+ fn small_vram_address(&self) -> VramAddress {
+ PdeOps::table_vram_address(&self.small)
+ }
+
+ fn big_raw_u64(&self) -> u64 {
+ PdeOps::raw_u64(&self.big)
+ }
+
+ fn small_raw_u64(&self) -> u64 {
+ PdeOps::raw_u64(&self.small)
+ }
+}
diff --git a/drivers/gpu/nova-core/mm/pagetable/ver3.rs b/drivers/gpu/nova-core/mm/pagetable/ver3.rs
index 2f9e762c4667..1c52013e498d 100644
--- a/drivers/gpu/nova-core/mm/pagetable/ver3.rs
+++ b/drivers/gpu/nova-core/mm/pagetable/ver3.rs
@@ -25,7 +25,10 @@
use super::{
AperturePde,
AperturePte,
+ DualPdeOps,
PageTableLevel,
+ PdeOps,
+ PteOps,
VaLevelIndex, //
};
use crate::mm::{
@@ -194,12 +197,12 @@ pub(in crate::mm) struct Pte(u64) {
impl Pte {
/// Create a PTE from a `u64` value.
- pub(super) fn new(val: u64) -> Self {
+ pub(super) fn new_raw(val: u64) -> Self {
Self::from_raw(val)
}
/// Create a valid PTE for video memory.
- pub(super) fn new_vram(frame: Pfn, writable: bool) -> Self {
+ fn new_vram_inner(frame: Pfn, writable: bool) -> Self {
let pcf = if writable { PtePcf::rw() } else { PtePcf::ro() };
Self::zeroed()
.with_valid(true)
@@ -207,14 +210,30 @@ pub(super) fn new_vram(frame: Pfn, writable: bool) -> Self {
.with_pcf(pcf)
.with_frame_number(frame)
}
+}
- /// Create an invalid PTE.
- pub(super) fn invalid() -> Self {
+impl PteOps for Pte {
+ fn new(val: u64) -> Self {
+ Self::from_raw(val)
+ }
+
+ fn invalid() -> Self {
Self::zeroed()
}
- /// Get the raw `u64` value.
- pub(super) fn raw_u64(&self) -> u64 {
+ fn new_vram(pfn: Pfn, writable: bool) -> Self {
+ Self::new_vram_inner(pfn, writable)
+ }
+
+ fn is_valid(&self) -> bool {
+ self.valid().into_bool()
+ }
+
+ fn frame_number(&self) -> Pfn {
+ Pte::frame_number(*self)
+ }
+
+ fn raw_u64(&self) -> u64 {
self.into_raw()
}
}
@@ -237,40 +256,50 @@ pub(in crate::mm) struct Pde(u64) {
impl Pde {
/// Create a PDE from a `u64` value.
- pub(super) fn new(val: u64) -> Self {
+ pub(super) fn new_raw(val: u64) -> Self {
Self::from_raw(val)
}
/// Create a valid PDE pointing to a page table in video memory.
- pub(super) fn new_vram(table_pfn: Pfn) -> Self {
+ fn new_vram_inner(table_pfn: Pfn) -> Self {
Self::zeroed()
.with_is_pte(false)
.with_aperture(AperturePde::VideoMemory)
.with_table_frame(table_pfn)
}
+}
- /// Create an invalid PDE.
- pub(super) fn invalid() -> Self {
+impl PdeOps for Pde {
+ fn new(val: u64) -> Self {
+ Self::from_raw(val)
+ }
+
+ fn new_vram(table_pfn: Pfn) -> Self {
+ Self::new_vram_inner(table_pfn)
+ }
+
+ fn invalid() -> Self {
Self::zeroed().with_aperture(AperturePde::Invalid)
}
- /// Check if this PDE is valid.
- pub(super) fn is_valid(&self) -> bool {
- self.aperture() != AperturePde::Invalid
+ fn is_valid(&self) -> bool {
+ Pde::aperture(*self) != AperturePde::Invalid
}
- /// Get the VRAM address of the page table.
- pub(super) fn table_vram_address(&self) -> VramAddress {
+ fn aperture(&self) -> AperturePde {
+ Pde::aperture(*self)
+ }
+
+ fn table_vram_address(&self) -> VramAddress {
debug_assert!(
- self.aperture() == AperturePde::VideoMemory,
+ Pde::aperture(*self) == AperturePde::VideoMemory,
"table_vram_address called on non-VRAM PDE (aperture: {:?})",
- self.aperture()
+ Pde::aperture(*self)
);
VramAddress::from(self.table_frame())
}
- /// Get the raw `u64` value.
- pub(super) fn raw_u64(&self) -> u64 {
+ fn raw_u64(&self) -> u64 {
self.into_raw()
}
}
@@ -363,29 +392,40 @@ pub(in crate::mm) struct DualPde {
unsafe impl Zeroable for DualPde {}
impl DualPde {
- /// Create a dual PDE from raw 128-bit value (two `u64`s).
- pub(super) fn new(big: u64, small: u64) -> Self {
- Self {
- big: DualPdeBig::new(big),
- small: Pde::new(small),
- }
- }
-
- /// Create a dual PDE with only the small page table pointer set.
- pub(super) fn new_small(table_pfn: Pfn) -> Self {
- Self {
- big: DualPdeBig::invalid(),
- small: Pde::new_vram(table_pfn),
- }
- }
-
- /// Check if the small page table pointer is valid.
- pub(super) fn has_small(&self) -> bool {
- self.small.is_valid()
- }
-
/// Check if the big page table pointer is valid.
fn has_big(&self) -> bool {
self.big.is_valid()
}
}
+
+impl DualPdeOps for DualPde {
+ fn new(big: u64, small: u64) -> Self {
+ Self {
+ big: DualPdeBig::new(big),
+ small: PdeOps::new(small),
+ }
+ }
+
+ fn new_small(table_pfn: Pfn) -> Self {
+ Self {
+ big: DualPdeBig::invalid(),
+ small: PdeOps::new_vram(table_pfn),
+ }
+ }
+
+ fn has_small(&self) -> bool {
+ PdeOps::is_valid(&self.small)
+ }
+
+ fn small_vram_address(&self) -> VramAddress {
+ PdeOps::table_vram_address(&self.small)
+ }
+
+ fn big_raw_u64(&self) -> u64 {
+ self.big.raw_u64()
+ }
+
+ fn small_raw_u64(&self) -> u64 {
+ PdeOps::raw_u64(&self.small)
+ }
+}
--
2.34.1
* [PATCH v12 13/22] gpu: nova-core: mm: Add page table walker for MMU v2/v3
2026-04-25 21:14 [PATCH v12 00/22] gpu: nova-core: Add memory management support Joel Fernandes
` (11 preceding siblings ...)
2026-04-25 21:14 ` [PATCH v12 12/22] gpu: nova-core: mm: Add page table entry operation traits Joel Fernandes
@ 2026-04-25 21:14 ` Joel Fernandes
2026-04-25 21:14 ` [PATCH v12 14/22] gpu: nova-core: mm: Add Virtual Memory Manager Joel Fernandes
` (8 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Bjorn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Dave Airlie, Daniel Almeida, dri-devel, rust-for-linux, nova-gpu,
Nikola Djukic, David Airlie, Boqun Feng, John Hubbard,
Alistair Popple, Timur Tabi, Edwin Peer, Alexandre Courbot,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, alexeyi, Eliot Courtney, joel, linux-doc,
Joel Fernandes
Add the page table walker implementation that traverses the page table
hierarchy for both MMU v2 (5-level) and MMU v3 (6-level) to resolve
virtual addresses to physical addresses or find PTE locations.
Only v2 has been tested so far, since nova-core currently boots
pre-Hopper GPUs; some initial preparatory work for v3 is included.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/pagetable.rs | 1 +
drivers/gpu/nova-core/mm/pagetable/walk.rs | 258 +++++++++++++++++++++
2 files changed, 259 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/pagetable/walk.rs
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
index 764b9e71ae41..b7e0e8e02905 100644
--- a/drivers/gpu/nova-core/mm/pagetable.rs
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -10,6 +10,7 @@
pub(super) mod ver2;
pub(super) mod ver3;
+pub(super) mod walk;
use kernel::num::Bounded;
diff --git a/drivers/gpu/nova-core/mm/pagetable/walk.rs b/drivers/gpu/nova-core/mm/pagetable/walk.rs
new file mode 100644
index 000000000000..e400b8a203be
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pagetable/walk.rs
@@ -0,0 +1,258 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Page table walker implementation for NVIDIA GPUs.
+//!
+//! This module provides page table walking functionality for MMU v2 and v3.
+//! The walker traverses the page table hierarchy to resolve virtual addresses
+//! to physical addresses or to find PTE locations.
+//!
+//! # Page Table Hierarchy
+//!
+//! ## MMU v2 (Turing/Ampere/Ada) - 5 levels
+//!
+//! ```text
+//! +-------+ +-------+ +-------+ +---------+ +-------+
+//! | PDB |---->| L1 |---->| L2 |---->| L3 Dual |---->| L4 |
+//! | (L0) | | | | | | PDE | | (PTE) |
+//! +-------+ +-------+ +-------+ +---------+ +-------+
+//! 64-bit 64-bit 64-bit 128-bit 64-bit
+//! PDE PDE PDE (big+small) PTE
+//! ```
+//!
+//! ## MMU v3 (Hopper+) - 6 levels
+//!
+//! ```text
+//! +-------+ +-------+ +-------+ +-------+ +---------+ +-------+
+//! | PDB |---->| L1 |---->| L2 |---->| L3 |---->| L4 Dual |---->| L5 |
+//! | (L0) | | | | | | | | PDE | | (PTE) |
+//! +-------+ +-------+ +-------+ +-------+ +---------+ +-------+
+//! 64-bit 64-bit 64-bit 64-bit 128-bit 64-bit
+//! PDE PDE PDE PDE (big+small) PTE
+//! ```
+//!
+//! # Result of a page table walk
+//!
+//! The walker returns a [`WalkResult`] indicating the outcome.
+
+use core::marker::PhantomData;
+
+use kernel::{
+ device,
+ prelude::*, //
+};
+
+use super::{
+ DualPdeOps,
+ MmuConfig,
+ MmuV2,
+ MmuV3,
+ MmuVersion,
+ PageTableLevel,
+ PdeOps,
+ PteOps, //
+};
+use crate::{
+ mm::{
+ pramin,
+ GpuMm,
+ Pfn,
+ Vfn,
+ VirtualAddress,
+ VramAddress, //
+ },
+ num::{
+ IntoSafeCast, //
+ },
+};
+
+/// Result of walking to a PTE.
+#[derive(Debug, Clone, Copy)]
+pub(in crate::mm) enum WalkResult {
+ /// Intermediate page tables are missing (only returned in lookup mode).
+ PageTableMissing,
+ /// PTE exists but is invalid (page not mapped).
+ Unmapped { pte_addr: VramAddress },
+ /// PTE exists and is valid (page is mapped).
+ Mapped { pte_addr: VramAddress, pfn: Pfn },
+}
+
+/// Result of walking PDE levels only.
+///
+/// Returned by [`PtWalkInner::walk_pde_levels()`] to indicate whether all PDE
+/// levels resolved or a PDE is missing.
+#[derive(Debug, Clone, Copy)]
+pub(in crate::mm) enum WalkPdeResult {
+ /// All PDE levels resolved -- returns PTE page table address.
+ Complete {
+ /// VRAM address of the PTE-level page table.
+ pte_table: VramAddress,
+ },
+ /// A PDE is missing and no prepared page was provided by the closure.
+ Missing {
+ /// PDE slot address in the parent page table (where to install).
+ install_addr: VramAddress,
+ /// The page table level that is missing.
+ level: PageTableLevel,
+ },
+}
+
+/// Page table walker.
+pub(in crate::mm) struct PtWalkInner<M: MmuConfig> {
+ pdb_addr: VramAddress,
+ _phantom: PhantomData<M>,
+}
+
+impl<M: MmuConfig> PtWalkInner<M> {
+ /// Calculate the VRAM address of an entry within a page table.
+ fn entry_addr(table: VramAddress, level: PageTableLevel, index: u64) -> VramAddress {
+ let entry_size: u64 = M::entry_size(level).into_safe_cast();
+ VramAddress::new(table.raw_u64() + index * entry_size)
+ }
+
+ /// Create a new page table walker.
+ pub(super) fn new(pdb_addr: VramAddress) -> Self {
+ Self {
+ pdb_addr,
+ _phantom: PhantomData,
+ }
+ }
+
+ /// Walk PDE levels with closure-based resolution for missing PDEs.
+ ///
+ /// Traverses all PDE levels for the MMU version. At each level, reads the PDE.
+ /// If valid, extracts the child table address and continues. If missing, calls
+ /// `resolve_prepared(install_addr)` to resolve the missing PDE.
+ pub(super) fn walk_pde_levels(
+ &self,
+ window: &mut pramin::PraminWindow<'_>,
+ vfn: Vfn,
+ resolve_prepared: impl Fn(VramAddress) -> Option<VramAddress>,
+ ) -> Result<WalkPdeResult> {
+ let va = VirtualAddress::from(vfn);
+ let mut cur_table = self.pdb_addr;
+
+ for &level in M::PDE_LEVELS {
+ let idx = M::level_index(va, level.as_index());
+ let install_addr = Self::entry_addr(cur_table, level, idx);
+
+ if level == M::DUAL_PDE_LEVEL {
+ // 128-bit dual PDE with big+small page table pointers.
+ let dpde = M::DualPde::read(window, install_addr)?;
+ if dpde.has_small() {
+ cur_table = dpde.small_vram_address();
+ continue;
+ }
+ } else {
+ // Regular 64-bit PDE. Use `is_valid_vram()` because
+ // `table_vram_address()` only reads the VRAM frame-number
+ // bitfield; system-memory PDEs store the address in a
+ // different (wider) field and would be silently truncated.
+ let pde = M::Pde::read(window, install_addr)?;
+ if pde.is_valid_vram() {
+ cur_table = pde.table_vram_address();
+ continue;
+ }
+ }
+
+ // PDE missing in HW. Ask caller for resolution.
+ if let Some(prepared_addr) = resolve_prepared(install_addr) {
+ cur_table = prepared_addr;
+ continue;
+ }
+
+ return Ok(WalkPdeResult::Missing {
+ install_addr,
+ level,
+ });
+ }
+
+ Ok(WalkPdeResult::Complete {
+ pte_table: cur_table,
+ })
+ }
+
+ /// Walk to PTE for lookup only (no allocation).
+ ///
+ /// Returns [`WalkResult::PageTableMissing`] if intermediate tables don't exist.
+ pub(super) fn walk_to_pte_lookup(
+ &self,
+ dev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ vfn: Vfn,
+ ) -> Result<WalkResult> {
+ let mut window = mm.pramin().get_window(dev)?;
+ self.walk_to_pte_lookup_with_window(&mut window, vfn)
+ }
+
+ /// Walk to PTE using a caller-provided PRAMIN window (lookup only).
+ pub(super) fn walk_to_pte_lookup_with_window(
+ &self,
+ window: &mut pramin::PraminWindow<'_>,
+ vfn: Vfn,
+ ) -> Result<WalkResult> {
+ match self.walk_pde_levels(window, vfn, |_| None)? {
+ WalkPdeResult::Complete { pte_table } => {
+ Self::read_pte_at_level(window, vfn, pte_table)
+ }
+ WalkPdeResult::Missing { .. } => Ok(WalkResult::PageTableMissing),
+ }
+ }
+
+ /// Read the PTE at the PTE level given the PTE table address.
+ fn read_pte_at_level(
+ window: &mut pramin::PraminWindow<'_>,
+ vfn: Vfn,
+ pte_table: VramAddress,
+ ) -> Result<WalkResult> {
+ let va = VirtualAddress::from(vfn);
+ let pte_level = M::PTE_LEVEL;
+ let pte_idx = M::level_index(va, pte_level.as_index());
+ let pte_addr = Self::entry_addr(pte_table, pte_level, pte_idx);
+ let pte = M::Pte::read(window, pte_addr)?;
+
+ if pte.is_valid() {
+ return Ok(WalkResult::Mapped {
+ pte_addr,
+ pfn: pte.frame_number(),
+ });
+ }
+ Ok(WalkResult::Unmapped { pte_addr })
+ }
+}
+
+macro_rules! pt_walk_dispatch {
+ ($self:expr, $method:ident ( $($arg:expr),* $(,)? )) => {
+ match $self {
+ PtWalk::V2(inner) => inner.$method($($arg),*),
+ PtWalk::V3(inner) => inner.$method($($arg),*),
+ }
+ };
+}
+
+/// Page table walker dispatch.
+pub(in crate::mm) enum PtWalk {
+ /// MMU v2 (Turing/Ampere/Ada).
+ V2(PtWalkInner<MmuV2>),
+ /// MMU v3 (Hopper+).
+ V3(PtWalkInner<MmuV3>),
+}
+
+impl PtWalk {
+ /// Create a new page table walker for the given MMU version.
+ pub(in crate::mm) fn new(pdb_addr: VramAddress, version: MmuVersion) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(PtWalkInner::<MmuV2>::new(pdb_addr)),
+ MmuVersion::V3 => Self::V3(PtWalkInner::<MmuV3>::new(pdb_addr)),
+ }
+ }
+
+ /// Walk to PTE for lookup.
+ pub(in crate::mm) fn walk_to_pte(
+ &self,
+ dev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ vfn: Vfn,
+ ) -> Result<WalkResult> {
+ pt_walk_dispatch!(self, walk_to_pte_lookup(dev, mm, vfn))
+ }
+}
--
2.34.1
* [PATCH v12 14/22] gpu: nova-core: mm: Add Virtual Memory Manager
2026-04-25 21:14 [PATCH v12 00/22] gpu: nova-core: Add memory management support Joel Fernandes
` (12 preceding siblings ...)
2026-04-25 21:14 ` [PATCH v12 13/22] gpu: nova-core: mm: Add page table walker for MMU v2/v3 Joel Fernandes
@ 2026-04-25 21:14 ` Joel Fernandes
2026-04-25 21:14 ` [PATCH v12 15/22] gpu: nova-core: mm: Add virtual address range tracking to VMM Joel Fernandes
` (7 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Bjorn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Dave Airlie, Daniel Almeida, dri-devel, rust-for-linux, nova-gpu,
Nikola Djukic, David Airlie, Boqun Feng, John Hubbard,
Alistair Popple, Timur Tabi, Edwin Peer, Alexandre Courbot,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, alexeyi, Eliot Courtney, joel, linux-doc,
Joel Fernandes
Add the Virtual Memory Manager (VMM) infrastructure for GPU address
space management. Each Vmm instance manages a single address space
identified by its Page Directory Base (PDB) address, used for Channel,
BAR1 and BAR2 mappings.
Mapping APIs and virtual address range tracking are added in later
commits.
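The walker the Vmm wraps selects its MMU version through an enum around a generic inner type, so callers stay free of generic parameters while each variant keeps static dispatch. A minimal sketch of that pattern, with simplified stand-in types and level counts:

```rust
// Sketch of the enum-dispatch pattern: a generic inner type parameterized
// by an MMU-config trait, wrapped in an enum so callers need no generics.
// Types and level counts here are simplified stand-ins for the driver's.

use core::marker::PhantomData;

trait MmuConfig {
    const PDE_LEVELS: usize;
}

struct MmuV2;
impl MmuConfig for MmuV2 { const PDE_LEVELS: usize = 4; }

struct MmuV3;
impl MmuConfig for MmuV3 { const PDE_LEVELS: usize = 5; }

struct PtWalkInner<M: MmuConfig> {
    pdb_addr: u64,
    _phantom: PhantomData<M>,
}

impl<M: MmuConfig> PtWalkInner<M> {
    fn new(pdb_addr: u64) -> Self {
        Self { pdb_addr, _phantom: PhantomData }
    }
    /// Number of PDE levels, taken from the MMU configuration.
    fn pde_levels(&self) -> usize {
        let _ = self.pdb_addr; // the real walker reads tables from here
        M::PDE_LEVELS
    }
}

enum PtWalk {
    V2(PtWalkInner<MmuV2>),
    V3(PtWalkInner<MmuV3>),
}

impl PtWalk {
    fn pde_levels(&self) -> usize {
        match self {
            PtWalk::V2(i) => i.pde_levels(),
            PtWalk::V3(i) => i.pde_levels(),
        }
    }
}

fn main() {
    let w = PtWalk::V2(PtWalkInner::new(0x1000));
    println!("{}", w.pde_levels()); // 4
}
```

The driver's `pt_walk_dispatch!` macro generates the same `match` for each forwarded method.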
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm.rs | 1 +
drivers/gpu/nova-core/mm/vmm.rs | 69 +++++++++++++++++++++++++++++++++
2 files changed, 70 insertions(+)
create mode 100644 drivers/gpu/nova-core/mm/vmm.rs
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index 045e35c92b78..88d0b436de4e 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -35,6 +35,7 @@ macro_rules! impl_pfn_bounded {
pub(super) mod pagetable;
pub(crate) mod pramin;
pub(super) mod tlb;
+pub(super) mod vmm;
use kernel::{
bitfield,
diff --git a/drivers/gpu/nova-core/mm/vmm.rs b/drivers/gpu/nova-core/mm/vmm.rs
new file mode 100644
index 000000000000..7a2ab2acaa9b
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/vmm.rs
@@ -0,0 +1,69 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Virtual Memory Manager for NVIDIA GPU page table management.
+//!
+//! The [`Vmm`] provides high-level page mapping and unmapping operations for GPU
+//! virtual address spaces (Channels, BAR1, BAR2). It wraps the page table walker
+//! and handles TLB flushing after modifications.
+
+use kernel::{
+ device,
+ gpu::buddy::AllocatedBlocks,
+ prelude::*, //
+};
+
+use crate::mm::{
+ pagetable::{
+ walk::{PtWalk, WalkResult},
+ MmuVersion, //
+ },
+ GpuMm,
+ Pfn,
+ Vfn,
+ VramAddress, //
+};
+
+/// Virtual Memory Manager for a GPU address space.
+///
+/// Each [`Vmm`] instance manages a single address space identified by its Page
+/// Directory Base (`PDB`) address. The [`Vmm`] is used for Channel, BAR1 and
+/// BAR2 mappings.
+pub(crate) struct Vmm {
+ /// Page Directory Base address for this address space.
+ pdb_addr: VramAddress,
+ /// MMU version used for page table layout.
+ mmu_version: MmuVersion,
+ /// Page table allocations required for mappings.
+ page_table_allocs: KVec<Pin<KBox<AllocatedBlocks>>>,
+}
+
+impl Vmm {
+ /// Create a new [`Vmm`] for the given Page Directory Base address.
+ pub(crate) fn new(pdb_addr: VramAddress, mmu_version: MmuVersion) -> Result<Self> {
+ // Only MMU v2 is supported for now.
+ if mmu_version != MmuVersion::V2 {
+ return Err(ENOTSUPP);
+ }
+
+ Ok(Self {
+ pdb_addr,
+ mmu_version,
+ page_table_allocs: KVec::new(),
+ })
+ }
+
+ /// Read the [`Pfn`] for a mapped [`Vfn`] if one is mapped.
+ pub(super) fn read_mapping(
+ &self,
+ dev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ vfn: Vfn,
+ ) -> Result<Option<Pfn>> {
+ let walker = PtWalk::new(self.pdb_addr, self.mmu_version);
+
+ match walker.walk_to_pte(dev, mm, vfn)? {
+ WalkResult::Mapped { pfn, .. } => Ok(Some(pfn)),
+ WalkResult::Unmapped { .. } | WalkResult::PageTableMissing => Ok(None),
+ }
+ }
+}
--
2.34.1
* [PATCH v12 15/22] gpu: nova-core: mm: Add virtual address range tracking to VMM
2026-04-25 21:14 [PATCH v12 00/22] gpu: nova-core: Add memory management support Joel Fernandes
` (13 preceding siblings ...)
2026-04-25 21:14 ` [PATCH v12 14/22] gpu: nova-core: mm: Add Virtual Memory Manager Joel Fernandes
@ 2026-04-25 21:14 ` Joel Fernandes
2026-04-25 21:14 ` [PATCH v12 16/22] gpu: nova-core: mm: Add multi-page mapping API " Joel Fernandes
` (6 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Bjorn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Dave Airlie, Daniel Almeida, dri-devel, rust-for-linux, nova-gpu,
Nikola Djukic, David Airlie, Boqun Feng, John Hubbard,
Alistair Popple, Timur Tabi, Edwin Peer, Alexandre Courbot,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, alexeyi, Eliot Courtney, joel, linux-doc,
Joel Fernandes
Add virtual address range tracking to the VMM using a maple tree
allocator. This enables allocation of contiguous virtual address ranges
for mappings.
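The tracking amounts to range bookkeeping over virtual frame numbers: allocate a free contiguous run, or reserve a caller-specified range, and free by start VFN. A minimal sketch under the assumption that a `BTreeMap` of `start -> len` stands in for the kernel's maple tree, with first-fit search standing in for `alloc_range()`:

```rust
// Sketch of VFN range bookkeeping. A BTreeMap (start VFN -> page count)
// stands in for the kernel maple tree; first-fit replaces alloc_range().

use std::collections::BTreeMap;

struct VfnAllocator {
    va_pages: u64,
    used: BTreeMap<u64, u64>, // start VFN -> number of pages
}

impl VfnAllocator {
    fn new(va_pages: u64) -> Self {
        Self { va_pages, used: BTreeMap::new() }
    }

    /// First-fit allocation of `num_pages` contiguous VFNs.
    fn alloc(&mut self, num_pages: u64) -> Option<u64> {
        let mut cursor = 0;
        for (&start, &len) in &self.used {
            if start - cursor >= num_pages {
                break; // hole before `start` is big enough
            }
            cursor = start + len;
        }
        if cursor + num_pages > self.va_pages {
            return None;
        }
        self.used.insert(cursor, num_pages);
        Some(cursor)
    }

    /// Free a previously allocated range by its start VFN.
    fn free(&mut self, start_vfn: u64) -> bool {
        self.used.remove(&start_vfn).is_some()
    }
}

fn main() {
    let mut a = VfnAllocator::new(16);
    let x = a.alloc(4).unwrap(); // 0
    let y = a.alloc(4).unwrap(); // 4
    a.free(x);
    let z = a.alloc(2).unwrap(); // 0, the freed hole is reused
    println!("{x} {y} {z}");
}
```

In the driver the same roles are filled by `MapleTreeAlloc::alloc_range()` (allocate anywhere), `insert_range()` (reserve a fixed range), and `erase()` (free).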
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/vmm.rs | 89 +++++++++++++++++++++++++++++----
1 file changed, 79 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/nova-core/mm/vmm.rs b/drivers/gpu/nova-core/mm/vmm.rs
index 7a2ab2acaa9b..81ae08ce1cd7 100644
--- a/drivers/gpu/nova-core/mm/vmm.rs
+++ b/drivers/gpu/nova-core/mm/vmm.rs
@@ -9,18 +9,27 @@
use kernel::{
device,
gpu::buddy::AllocatedBlocks,
- prelude::*, //
+ maple_tree::MapleTreeAlloc,
+ prelude::*,
};
-use crate::mm::{
- pagetable::{
- walk::{PtWalk, WalkResult},
- MmuVersion, //
+use core::ops::Range;
+
+use crate::{
+ mm::{
+ pagetable::{
+ walk::{PtWalk, WalkResult},
+ MmuVersion, //
+ },
+ GpuMm,
+ Pfn,
+ Vfn,
+ VramAddress,
+ PAGE_SIZE, //
+ },
+ num::{
+ IntoSafeCast, //
},
- GpuMm,
- Pfn,
- Vfn,
- VramAddress, //
};
/// Virtual Memory Manager for a GPU address space.
@@ -35,23 +44,83 @@ pub(crate) struct Vmm {
mmu_version: MmuVersion,
/// Page table allocations required for mappings.
page_table_allocs: KVec<Pin<KBox<AllocatedBlocks>>>,
+ /// Maple tree allocator for virtual address range tracking.
+ virt_alloc: Pin<KBox<MapleTreeAlloc<()>>>,
+ /// Total number of pages in the virtual address space.
+ va_pages: usize,
}
impl Vmm {
/// Create a new [`Vmm`] for the given Page Directory Base address.
- pub(crate) fn new(pdb_addr: VramAddress, mmu_version: MmuVersion) -> Result<Self> {
+ ///
+ /// The [`Vmm`] will manage a virtual address space of `va_size` bytes.
+ pub(crate) fn new(
+ pdb_addr: VramAddress,
+ mmu_version: MmuVersion,
+ va_size: u64,
+ ) -> Result<Self> {
// Only MMU v2 is supported for now.
if mmu_version != MmuVersion::V2 {
return Err(ENOTSUPP);
}
+ let page_size: u64 = PAGE_SIZE.into_safe_cast();
+ let va_pages: usize = (va_size / page_size).into_safe_cast();
+ let virt_alloc = KBox::pin_init(MapleTreeAlloc::<()>::new(), GFP_KERNEL)?;
+
Ok(Self {
pdb_addr,
mmu_version,
page_table_allocs: KVec::new(),
+ virt_alloc,
+ va_pages,
})
}
+ /// Allocate a contiguous virtual frame number range.
+ ///
+ /// # Arguments
+ ///
+ /// - `num_pages`: Number of pages to allocate.
+ /// - `va_range`: `None` = allocate anywhere, `Some(range)` = constrain allocation to the given
+ /// range.
+ fn alloc_vfn_range(
+ &self,
+ num_pages: usize,
+ va_range: Option<Range<u64>>,
+ ) -> Result<Vfn> {
+ let page_size: u64 = PAGE_SIZE.into_safe_cast();
+
+ let start_vfn = match va_range {
+ Some(r) => {
+ let num_pages_u64: u64 = num_pages.into_safe_cast();
+ let size = num_pages_u64.checked_mul(page_size).ok_or(EOVERFLOW)?;
+ let range_size = r.end.checked_sub(r.start).ok_or(EOVERFLOW)?;
+ if range_size != size {
+ return Err(EINVAL);
+ }
+ let start_vfn: usize = (r.start / page_size).into_safe_cast();
+ let end_vfn: usize = (r.end / page_size).into_safe_cast();
+ self.virt_alloc
+ .insert_range(start_vfn..end_vfn, (), GFP_KERNEL)?;
+ start_vfn
+ }
+ None => self
+ .virt_alloc
+ .alloc_range(num_pages, (), ..self.va_pages, GFP_KERNEL)?,
+ };
+
+ Ok(Vfn::new(start_vfn.into_safe_cast()))
+ }
+
+ /// Free a virtual frame number range back to the maple tree.
+ fn free_vfn(&self, vfn: Vfn) {
+ let vfn_index: usize = vfn.raw().into_safe_cast();
+ if self.virt_alloc.erase(vfn_index).is_none() {
+ kernel::pr_warn!("free_vfn: VFN {} not found in maple tree\n", vfn_index);
+ }
+ }
+
/// Read the [`Pfn`] for a mapped [`Vfn`] if one is mapped.
pub(super) fn read_mapping(
&self,
--
2.34.1
* [PATCH v12 16/22] gpu: nova-core: mm: Add multi-page mapping API to VMM
2026-04-25 21:14 [PATCH v12 00/22] gpu: nova-core: Add memory management support Joel Fernandes
` (14 preceding siblings ...)
2026-04-25 21:14 ` [PATCH v12 15/22] gpu: nova-core: mm: Add virtual address range tracking to VMM Joel Fernandes
@ 2026-04-25 21:14 ` Joel Fernandes
2026-04-25 21:14 ` [PATCH v12 17/22] gpu: nova-core: Add BAR1 aperture type and size constant Joel Fernandes
` (5 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Bjorn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Dave Airlie, Daniel Almeida, dri-devel, rust-for-linux, nova-gpu,
Nikola Djukic, David Airlie, Boqun Feng, John Hubbard,
Alistair Popple, Timur Tabi, Edwin Peer, Alexandre Courbot,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, alexeyi, Eliot Courtney, joel, linux-doc,
Joel Fernandes
Add the page table mapping and unmapping API to the Virtual Memory
Manager, implementing a two-phase prepare/execute model suitable for
use both inside and outside the DMA fence signalling critical path.
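The two-phase split means every allocation happens in the prepare step, so the execute step can run in a context that must not allocate. A minimal sketch of that contract, using simplified stand-in types (not the driver's actual API):

```rust
// Sketch of the two-phase prepare/execute model: prepare() may allocate
// and pre-reserves everything; execute() only drains what was prepared,
// so it is safe on a no-allocation path (e.g. fence signalling).
// Names are simplified stand-ins for the driver's types.

struct PreparedPage(u64); // stands in for a pre-allocated PT page

struct Mapper {
    prepared: Vec<PreparedPage>,
    installed: Vec<u64>,
}

impl Mapper {
    fn new() -> Self {
        Self { prepared: Vec::new(), installed: Vec::new() }
    }

    /// Phase 1: may allocate. Pre-reserves all capacity execute() needs.
    fn prepare(&mut self, num_pages: usize) {
        self.prepared = (0..num_pages as u64).map(PreparedPage).collect();
        self.installed.reserve(num_pages); // execute() never reallocates
    }

    /// Phase 2: must not allocate. Only consumes prepared resources.
    fn execute(&mut self) -> usize {
        let before = self.installed.capacity();
        for page in self.prepared.drain(..) {
            self.installed.push(page.0); // within reserved capacity
        }
        assert_eq!(self.installed.capacity(), before, "execute() allocated");
        self.installed.len()
    }
}

fn main() {
    let mut m = Mapper::new();
    m.prepare(3);
    println!("{}", m.execute()); // 3
}
```

The patch follows the same shape: `prepare_map()` allocates and zeroes page-table pages into an `RBTree` and calls `reserve()` on `page_table_allocs`, while `install_mappings()` drains that tree using `push_within_capacity()`.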
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/pagetable.rs | 1 +
drivers/gpu/nova-core/mm/pagetable/map.rs | 355 ++++++++++++++++++++++
drivers/gpu/nova-core/mm/vmm.rs | 268 ++++++++++++++--
3 files changed, 606 insertions(+), 18 deletions(-)
create mode 100644 drivers/gpu/nova-core/mm/pagetable/map.rs
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
index b7e0e8e02905..4070070922a4 100644
--- a/drivers/gpu/nova-core/mm/pagetable.rs
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -8,6 +8,7 @@
#![expect(dead_code)]
+pub(super) mod map;
pub(super) mod ver2;
pub(super) mod ver3;
pub(super) mod walk;
diff --git a/drivers/gpu/nova-core/mm/pagetable/map.rs b/drivers/gpu/nova-core/mm/pagetable/map.rs
new file mode 100644
index 000000000000..16af491472db
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pagetable/map.rs
@@ -0,0 +1,355 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Page table mapping operations for NVIDIA GPUs.
+
+use core::marker::PhantomData;
+
+use kernel::{
+ device,
+ gpu::buddy::{
+ AllocatedBlocks,
+ GpuBuddyAllocFlags,
+ GpuBuddyAllocMode, //
+ },
+ prelude::*,
+ ptr::Alignment,
+ rbtree::{RBTree, RBTreeNode},
+ sizes::SZ_4K, //
+};
+
+use super::{
+ walk::{
+ PtWalkInner,
+ WalkPdeResult,
+ WalkResult, //
+ },
+ DualPdeOps,
+ MmuConfig,
+ MmuV2,
+ MmuV3,
+ MmuVersion,
+ PageTableLevel,
+ PdeOps,
+ PteOps, //
+};
+use crate::{
+ mm::{
+ GpuMm,
+ Pfn,
+ Vfn,
+ VramAddress,
+ PAGE_SIZE, //
+ },
+ num::{
+ IntoSafeCast, //
+ },
+};
+
+/// A pre-allocated and zeroed page table page.
+///
+/// Created during the mapping prepare phase and consumed during the execute phase.
+/// Stored in an [`RBTree`] keyed by the PDE slot address (`install_addr`).
+pub(in crate::mm) struct PreparedPtPage {
+ /// The allocated and zeroed page table page.
+ pub(in crate::mm) alloc: Pin<KBox<AllocatedBlocks>>,
+ /// Page table level -- needed to determine if this PT page is for a dual PDE.
+ pub(in crate::mm) level: PageTableLevel,
+}
+
+/// Page table mapper.
+pub(in crate::mm) struct PtMapInner<M: MmuConfig> {
+ walker: PtWalkInner<M>,
+ pdb_addr: VramAddress,
+ _phantom: PhantomData<M>,
+}
+
+impl<M: MmuConfig> PtMapInner<M> {
+ /// Create a new [`PtMapInner`].
+ pub(super) fn new(pdb_addr: VramAddress) -> Self {
+ Self {
+ walker: PtWalkInner::<M>::new(pdb_addr),
+ pdb_addr,
+ _phantom: PhantomData,
+ }
+ }
+
+ /// Allocate and zero a physical page table page.
+ fn alloc_and_zero_page(
+ dev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ level: PageTableLevel,
+ ) -> Result<PreparedPtPage> {
+ let blocks = KBox::pin_init(
+ mm.buddy().alloc_blocks(
+ GpuBuddyAllocMode::Simple,
+ SZ_4K.into_safe_cast(),
+ Alignment::new::<SZ_4K>(),
+ GpuBuddyAllocFlags::default(),
+ ),
+ GFP_KERNEL,
+ )?;
+
+ let page_vram = VramAddress::new(blocks.iter().next().ok_or(ENOMEM)?.offset());
+
+ // Zero via PRAMIN.
+ let mut window = mm.pramin().get_window(dev)?;
+ let base = page_vram.raw();
+ for off in (0..PAGE_SIZE).step_by(8) {
+ window.try_write64(base + off, 0)?;
+ }
+
+ Ok(PreparedPtPage {
+ alloc: blocks,
+ level,
+ })
+ }
+
+ /// Ensure all intermediate page table pages exist for a single VFN.
+ ///
+ /// PRAMIN is released before each allocation and re-acquired after. Memory
+ /// allocations are done outside of holding this lock to prevent deadlocks with
+ /// the fence signalling critical path.
+ fn ensure_single_pte_path(
+ &self,
+ dev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ vfn: Vfn,
+ pt_pages: &mut RBTree<VramAddress, PreparedPtPage>,
+ ) -> Result {
+ let max_iter = 2 * M::PDE_LEVELS.len();
+
+ for _ in 0..max_iter {
+ let mut window = mm.pramin().get_window(dev)?;
+
+ let result = self
+ .walker
+ .walk_pde_levels(&mut window, vfn, |install_addr| {
+ pt_pages
+ .get(&install_addr)
+ .and_then(|p| p.alloc.iter().next().map(|b| VramAddress::new(b.offset())))
+ })?;
+
+ match result {
+ WalkPdeResult::Complete { .. } => {
+ return Ok(());
+ }
+ WalkPdeResult::Missing {
+ install_addr,
+ level,
+ } => {
+ // Drop PRAMIN before allocation.
+ drop(window);
+ let page = Self::alloc_and_zero_page(dev, mm, level)?;
+ let node = RBTreeNode::new(install_addr, page, GFP_KERNEL)?;
+ let old = pt_pages.insert(node);
+ if old.is_some() {
+ kernel::pr_warn_once!(
+ "VMM: duplicate install_addr in pt_pages (internal consistency error)\n"
+ );
+ return Err(EIO);
+ }
+ }
+ }
+ }
+
+ kernel::pr_warn!(
+ "VMM: ensure_pte_path: loop exhausted after {} iters (VFN {:?})\n",
+ max_iter,
+ vfn
+ );
+ Err(EIO)
+ }
+
+ /// Prepare page table resources for mapping `num_pages` pages starting at `vfn_start`.
+ ///
+ /// Reserves capacity in `page_table_allocs`, then walks the hierarchy
+ /// per-VFN to prepare pages for all missing PDEs.
+ pub(super) fn prepare_map(
+ &self,
+ dev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ vfn_start: Vfn,
+ num_pages: usize,
+ page_table_allocs: &mut KVec<Pin<KBox<AllocatedBlocks>>>,
+ pt_pages: &mut RBTree<VramAddress, PreparedPtPage>,
+ ) -> Result {
+ // Pre-reserve so install_mappings() can use push_within_capacity (no alloc
+ // in fence signalling critical path).
+ let pt_upper_bound = M::pt_pages_upper_bound(num_pages);
+ page_table_allocs.reserve(pt_upper_bound, GFP_KERNEL)?;
+
+ // Walk the hierarchy per-VFN to prepare pages for all missing PDEs.
+ for i in 0..num_pages {
+ let i_u64: u64 = i.into_safe_cast();
+ let vfn = Vfn::new(vfn_start.raw() + i_u64);
+ self.ensure_single_pte_path(dev, mm, vfn, pt_pages)?;
+ }
+ Ok(())
+ }
+
+ /// Install prepared PDEs and write PTEs, then flush TLB.
+ ///
+ /// Drains `pt_pages` and moves allocations into `page_table_allocs`.
+ pub(super) fn install_mappings(
+ &self,
+ dev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ pt_pages: &mut RBTree<VramAddress, PreparedPtPage>,
+ page_table_allocs: &mut KVec<Pin<KBox<AllocatedBlocks>>>,
+ vfn_start: Vfn,
+ pfns: &[Pfn],
+ writable: bool,
+ ) -> Result {
+ let mut window = mm.pramin().get_window(dev)?;
+
+ // Drain prepared PT pages, install all pending PDEs.
+ let mut cursor = pt_pages.cursor_front_mut();
+ while let Some(c) = cursor {
+ let (next, node) = c.remove_current();
+ let (install_addr, page) = node.to_key_value();
+ let page_vram = VramAddress::new(page.alloc.iter().next().ok_or(ENOMEM)?.offset());
+
+ if page.level == M::DUAL_PDE_LEVEL {
+ let new_dpde = M::DualPde::new_small(Pfn::from(page_vram));
+ new_dpde.write(&mut window, install_addr)?;
+ } else {
+ let new_pde = M::Pde::new_vram(Pfn::from(page_vram));
+ new_pde.write(&mut window, install_addr)?;
+ }
+
+ page_table_allocs
+ .push_within_capacity(page.alloc)
+ .map_err(|_| ENOMEM)?;
+
+ cursor = next;
+ }
+
+ // Write PTEs (all PDEs now installed in HW).
+ for (i, &pfn) in pfns.iter().enumerate() {
+ let i_u64: u64 = i.into_safe_cast();
+ let vfn = Vfn::new(vfn_start.raw() + i_u64);
+ let result = self
+ .walker
+ .walk_to_pte_lookup_with_window(&mut window, vfn)?;
+
+ match result {
+ WalkResult::Unmapped { pte_addr } | WalkResult::Mapped { pte_addr, .. } => {
+ let pte = M::Pte::new_vram(pfn, writable);
+ pte.write(&mut window, pte_addr)?;
+ }
+ WalkResult::PageTableMissing => {
+ kernel::pr_warn_once!("VMM: page table missing for VFN {vfn:?}\n");
+ return Err(EIO);
+ }
+ }
+ }
+
+ drop(window);
+
+ // Flush TLB.
+ mm.tlb().flush(dev, self.pdb_addr)
+ }
+
+ /// Invalidate PTEs for a range and flush TLB.
+ pub(super) fn invalidate_ptes(
+ &self,
+ dev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ vfn_start: Vfn,
+ num_pages: usize,
+ ) -> Result {
+ let invalid_pte = M::Pte::invalid();
+
+ let mut window = mm.pramin().get_window(dev)?;
+ for i in 0..num_pages {
+ let i_u64: u64 = i.into_safe_cast();
+ let vfn = Vfn::new(vfn_start.raw() + i_u64);
+ let result = self
+ .walker
+ .walk_to_pte_lookup_with_window(&mut window, vfn)?;
+
+ match result {
+ WalkResult::Mapped { pte_addr, .. } | WalkResult::Unmapped { pte_addr } => {
+ invalid_pte.write(&mut window, pte_addr)?;
+ }
+ WalkResult::PageTableMissing => {
+ continue;
+ }
+ }
+ }
+ drop(window);
+
+ mm.tlb().flush(dev, self.pdb_addr)
+ }
+}
+
+macro_rules! pt_map_dispatch {
+ ($self:expr, $method:ident ( $($arg:expr),* $(,)? )) => {
+ match $self {
+ PtMap::V2(inner) => inner.$method($($arg),*),
+ PtMap::V3(inner) => inner.$method($($arg),*),
+ }
+ };
+}
+
+/// Page table mapper dispatch.
+pub(in crate::mm) enum PtMap {
+ /// MMU v2 (Turing/Ampere/Ada).
+ V2(PtMapInner<MmuV2>),
+ /// MMU v3 (Hopper+).
+ V3(PtMapInner<MmuV3>),
+}
+
+impl PtMap {
+ /// Create a new page table mapper for the given MMU version.
+ pub(in crate::mm) fn new(pdb_addr: VramAddress, version: MmuVersion) -> Self {
+ match version {
+ MmuVersion::V2 => Self::V2(PtMapInner::<MmuV2>::new(pdb_addr)),
+ MmuVersion::V3 => Self::V3(PtMapInner::<MmuV3>::new(pdb_addr)),
+ }
+ }
+
+ /// Prepare page table resources for a mapping.
+ pub(in crate::mm) fn prepare_map(
+ &self,
+ dev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ vfn_start: Vfn,
+ num_pages: usize,
+ page_table_allocs: &mut KVec<Pin<KBox<AllocatedBlocks>>>,
+ pt_pages: &mut RBTree<VramAddress, PreparedPtPage>,
+ ) -> Result {
+ pt_map_dispatch!(
+ self,
+ prepare_map(dev, mm, vfn_start, num_pages, page_table_allocs, pt_pages)
+ )
+ }
+
+ /// Install prepared PDEs and write PTEs, then flush TLB.
+ pub(in crate::mm) fn install_mappings(
+ &self,
+ dev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ pt_pages: &mut RBTree<VramAddress, PreparedPtPage>,
+ page_table_allocs: &mut KVec<Pin<KBox<AllocatedBlocks>>>,
+ vfn_start: Vfn,
+ pfns: &[Pfn],
+ writable: bool,
+ ) -> Result {
+ pt_map_dispatch!(
+ self,
+ install_mappings(dev, mm, pt_pages, page_table_allocs, vfn_start, pfns, writable)
+ )
+ }
+
+ /// Invalidate PTEs for a range and flush TLB.
+ pub(in crate::mm) fn invalidate_ptes(
+ &self,
+ dev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ vfn_start: Vfn,
+ num_pages: usize,
+ ) -> Result {
+ pt_map_dispatch!(self, invalidate_ptes(dev, mm, vfn_start, num_pages))
+ }
+}
diff --git a/drivers/gpu/nova-core/mm/vmm.rs b/drivers/gpu/nova-core/mm/vmm.rs
index 81ae08ce1cd7..57269409147d 100644
--- a/drivers/gpu/nova-core/mm/vmm.rs
+++ b/drivers/gpu/nova-core/mm/vmm.rs
@@ -3,22 +3,31 @@
//! Virtual Memory Manager for NVIDIA GPU page table management.
//!
//! The [`Vmm`] provides high-level page mapping and unmapping operations for GPU
-//! virtual address spaces (Channels, BAR1, BAR2). It wraps the page table walker
-//! and handles TLB flushing after modifications.
+//! virtual address spaces (Channels, BAR1, BAR2).
use kernel::{
device,
gpu::buddy::AllocatedBlocks,
maple_tree::MapleTreeAlloc,
prelude::*,
+ rbtree::RBTree, //
};
-use core::ops::Range;
+use core::{
+ cell::Cell,
+ ops::Range, //
+};
use crate::{
mm::{
pagetable::{
- walk::{PtWalk, WalkResult},
+ map::{
+ PtMap, //
+ },
+ walk::{
+ PtWalk,
+ WalkResult, //
+ },
MmuVersion, //
},
GpuMm,
@@ -32,22 +41,108 @@
},
};
+/// Multi-page prepared mapping -- VA range allocated, ready for execute.
+///
+/// Produced by [`Vmm::prepare_map()`], consumed by [`Vmm::execute_map()`].
+/// The VA space allocation is tracked in the [`Vmm`]'s maple tree and freed
+/// on error or via [`Vmm::unmap_pages()`].
+///
+/// Dropping without calling [`Vmm::execute_map()`] logs a warning and leaks
+/// the VA range in the maple tree.
+pub(crate) struct PreparedMapping {
+ vfn_start: Vfn,
+ num_pages: usize,
+ /// Logs a warning if dropped without executing.
+ _drop_guard: MustExecuteGuard,
+}
+
+/// Result of a mapping operation -- tracks the active mapped range.
+///
+/// Returned by [`Vmm::execute_map()`] and [`Vmm::map_pages()`].
+/// Callers must call [`Vmm::unmap_pages()`] before dropping to invalidate
+/// PTEs and free the VA range. Dropping without unmapping logs a warning
+/// and leaks the VA range in the maple tree.
+pub(crate) struct MappedRange {
+ pub(super) vfn_start: Vfn,
+ pub(super) num_pages: usize,
+ /// Logs a warning if dropped without unmapping.
+ _drop_guard: MustUnmapGuard,
+}
+
+/// Guard that logs a warning if a [`PreparedMapping`] is dropped without
+/// being consumed by [`Vmm::execute_map()`].
+struct MustExecuteGuard {
+ armed: Cell<bool>,
+}
+
+impl MustExecuteGuard {
+ const fn new() -> Self {
+ Self {
+ armed: Cell::new(true),
+ }
+ }
+
+ fn disarm(&self) {
+ self.armed.set(false);
+ }
+}
+
+impl Drop for MustExecuteGuard {
+ fn drop(&mut self) {
+ if self.armed.get() {
+ kernel::pr_warn!("PreparedMapping dropped without calling execute_map()\n");
+ }
+ }
+}
+
+/// Guard that logs a warning if a [`MappedRange`] is dropped without
+/// calling [`Vmm::unmap_pages()`].
+struct MustUnmapGuard {
+ armed: Cell<bool>,
+}
+
+impl MustUnmapGuard {
+ const fn new() -> Self {
+ Self {
+ armed: Cell::new(true),
+ }
+ }
+
+ fn disarm(&self) {
+ self.armed.set(false);
+ }
+}
+
+impl Drop for MustUnmapGuard {
+ fn drop(&mut self) {
+ if self.armed.get() {
+ kernel::pr_warn!("MappedRange dropped without calling unmap_pages()\n");
+ }
+ }
+}
+
/// Virtual Memory Manager for a GPU address space.
///
/// Each [`Vmm`] instance manages a single address space identified by its Page
-/// Directory Base (`PDB`) address. The [`Vmm`] is used for Channel, BAR1 and
-/// BAR2 mappings.
+/// Directory Base (`PDB`) address. Used for Channel, BAR1 and BAR2 mappings.
pub(crate) struct Vmm {
/// Page Directory Base address for this address space.
pdb_addr: VramAddress,
- /// MMU version used for page table layout.
- mmu_version: MmuVersion,
+ /// Page table walker for reading existing mappings.
+ pt_walk: PtWalk,
+ /// Page table mapper for prepare/execute operations.
+ pt_map: PtMap,
/// Page table allocations required for mappings.
page_table_allocs: KVec<Pin<KBox<AllocatedBlocks>>>,
/// Maple tree allocator for virtual address range tracking.
virt_alloc: Pin<KBox<MapleTreeAlloc<()>>>,
/// Total number of pages in the virtual address space.
va_pages: usize,
+ /// Prepared PT pages pending PDE installation, keyed by `install_addr`.
+ ///
+ /// Populated during prepare phase and drained in execute phase. Shared by all
+ /// pending maps, preventing races on the same PDE slot.
+ pt_pages: RBTree<VramAddress, super::pagetable::map::PreparedPtPage>,
}
impl Vmm {
@@ -70,20 +165,16 @@ pub(crate) fn new(
Ok(Self {
pdb_addr,
- mmu_version,
+ pt_walk: PtWalk::new(pdb_addr, mmu_version),
+ pt_map: PtMap::new(pdb_addr, mmu_version),
page_table_allocs: KVec::new(),
virt_alloc,
va_pages,
+ pt_pages: RBTree::new(),
})
}
/// Allocate a contiguous virtual frame number range.
- ///
- /// # Arguments
- ///
- /// - `num_pages`: Number of pages to allocate.
- /// - `va_range`: `None` = allocate anywhere, `Some(range)` = constrain allocation to the given
- /// range.
fn alloc_vfn_range(
&self,
num_pages: usize,
@@ -128,11 +219,152 @@ pub(super) fn read_mapping(
mm: &GpuMm,
vfn: Vfn,
) -> Result<Option<Pfn>> {
- let walker = PtWalk::new(self.pdb_addr, self.mmu_version);
-
- match walker.walk_to_pte(dev, mm, vfn)? {
+ match self.pt_walk.walk_to_pte(dev, mm, vfn)? {
WalkResult::Mapped { pfn, .. } => Ok(Some(pfn)),
WalkResult::Unmapped { .. } | WalkResult::PageTableMissing => Ok(None),
}
}
+
+ /// Prepare resources for mapping `num_pages` pages.
+ ///
+ /// Allocates a contiguous VA range, then walks the hierarchy per-VFN to prepare pages
+ /// for all missing PDEs. Returns a [`PreparedMapping`] with the VA allocation.
+ ///
+ /// If `va_range` is not `None`, the allocation is constrained to the given range.
+ /// Since this function allocates memory, it must be called outside the fence
+ /// signalling critical path.
+ pub(crate) fn prepare_map(
+ &mut self,
+ dev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ num_pages: usize,
+ va_range: Option<Range<u64>>,
+ ) -> Result<PreparedMapping> {
+ if num_pages == 0 {
+ return Err(EINVAL);
+ }
+
+ // Allocate contiguous VA range.
+ let vfn_start = self.alloc_vfn_range(num_pages, va_range)?;
+
+ if let Err(e) = self.pt_map.prepare_map(
+ dev,
+ mm,
+ vfn_start,
+ num_pages,
+ &mut self.page_table_allocs,
+ &mut self.pt_pages,
+ ) {
+ self.free_vfn(vfn_start);
+ return Err(e);
+ }
+
+ Ok(PreparedMapping {
+ vfn_start,
+ num_pages,
+ _drop_guard: MustExecuteGuard::new(),
+ })
+ }
+
+ /// Execute a prepared multi-page mapping.
+ ///
+ /// Installs all prepared PDEs and writes PTEs into the page table, then flushes TLB.
+ pub(crate) fn execute_map(
+ &mut self,
+ dev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ prepared: PreparedMapping,
+ pfns: &[Pfn],
+ writable: bool,
+ ) -> Result<MappedRange> {
+ if pfns.len() != prepared.num_pages {
+ // The VA range is freed here, so disarm the guard to avoid a
+ // spurious "dropped without calling execute_map()" warning.
+ prepared._drop_guard.disarm();
+ self.free_vfn(prepared.vfn_start);
+ return Err(EINVAL);
+ }
+
+ let PreparedMapping {
+ vfn_start,
+ num_pages,
+ _drop_guard,
+ } = prepared;
+ _drop_guard.disarm();
+
+ if let Err(e) = self.pt_map.install_mappings(
+ dev,
+ mm,
+ &mut self.pt_pages,
+ &mut self.page_table_allocs,
+ vfn_start,
+ pfns,
+ writable,
+ ) {
+ self.free_vfn(vfn_start);
+ return Err(e);
+ }
+
+ Ok(MappedRange {
+ vfn_start,
+ num_pages,
+ _drop_guard: MustUnmapGuard::new(),
+ })
+ }
+
+ /// Map pages, performing the prepare and execute phases in a single call.
+ ///
+ /// This is a convenience wrapper for callers outside the fence signalling critical
+ /// path (e.g., BAR mappings). For DRM use cases, [`Vmm::prepare_map()`] and
+ /// [`Vmm::execute_map()`] are called separately.
+ pub(crate) fn map_pages(
+ &mut self,
+ dev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ pfns: &[Pfn],
+ va_range: Option<Range<u64>>,
+ writable: bool,
+ ) -> Result<MappedRange> {
+ if pfns.is_empty() {
+ return Err(EINVAL);
+ }
+
+ // Check if provided VA range is sufficient (if provided).
+ if let Some(ref range) = va_range {
+ let required: u64 = pfns
+ .len()
+ .checked_mul(PAGE_SIZE)
+ .ok_or(EOVERFLOW)?
+ .into_safe_cast();
+ let available = range.end.checked_sub(range.start).ok_or(EINVAL)?;
+ if available < required {
+ return Err(EINVAL);
+ }
+ }
+
+ let prepared = self.prepare_map(dev, mm, pfns.len(), va_range)?;
+ self.execute_map(dev, mm, prepared, pfns, writable)
+ }
+
+ /// Unmap all pages in a [`MappedRange`] with a single TLB flush.
+ pub(crate) fn unmap_pages(
+ &mut self,
+ dev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ range: MappedRange,
+ ) -> Result {
+ let result = self
+ .pt_map
+ .invalidate_ptes(dev, mm, range.vfn_start, range.num_pages);
+
+ // TODO: Internal page table pages (PDE, PTE pages) are still kept around.
+ // This is by design as repeated maps/unmaps will be fast. As a future TODO,
+ // we can add a reclaimer here to reclaim if VRAM is short. For now, the PT
+ // pages are dropped once the `Vmm` is dropped.
+
+ // Free the VA range regardless of PTE invalidation success, so that the VA
+ // range is recovered even on failure (PTEs may be stale, but that is better
+ // than leaking both PTEs and VA range).
+ self.free_vfn(range.vfn_start);
+
+ // Unmap complete, safe to drop `MappedRange`.
+ range._drop_guard.disarm();
+ result
+ }
}
--
2.34.1
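The prepare/execute split above moves all fallible allocation into `prepare_map()` so that `execute_map()` can run without allocating (e.g., in the fence signalling critical path). A minimal userspace sketch of that flow is below; the types and function bodies are illustrative stand-ins, not the driver's actual `Vmm` code.

```rust
// Two-phase mapping sketch: prepare reserves the VA range (fallible work),
// execute only consumes prepared state and writes PTE-like entries.

#[derive(Debug, PartialEq)]
enum MapError {
    LenMismatch,
}

// Token returned by the prepare phase; consumed by the execute phase.
struct Prepared {
    vfn_start: u64,
    num_pages: usize,
}

// Prepare: reserve a contiguous VFN range (stands in for the maple-tree
// allocation and page-table pre-allocation done by the real code).
fn prepare_map(next_free_vfn: &mut u64, num_pages: usize) -> Prepared {
    let vfn_start = *next_free_vfn;
    *next_free_vfn += num_pages as u64;
    Prepared { vfn_start, num_pages }
}

// Execute: emit one (VFN, PFN) pair per page; no allocation of VA space here.
fn execute_map(prepared: Prepared, pfns: &[u64]) -> Result<Vec<(u64, u64)>, MapError> {
    if pfns.len() != prepared.num_pages {
        return Err(MapError::LenMismatch);
    }
    Ok(pfns
        .iter()
        .enumerate()
        .map(|(i, &pfn)| (prepared.vfn_start + i as u64, pfn))
        .collect())
}

fn main() {
    let mut next_free = 0x100u64;
    let prepared = prepare_map(&mut next_free, 2);
    let ptes = execute_map(prepared, &[0xAA, 0xBB]).unwrap();
    println!("{ptes:?}");
}
```

The same shape shows why a length mismatch between the prepared range and the supplied PFNs must fail: the VA reservation was sized at prepare time and cannot be grown during execute.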
* [PATCH v12 17/22] gpu: nova-core: Add BAR1 aperture type and size constant
From: Joel Fernandes @ 2026-04-25 21:14 UTC
To: linux-kernel
Add BAR1_SIZE constant and Bar1 type alias for the 256MB BAR1 aperture.
These are prerequisites for BAR1 memory access functionality.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Co-developed-by: Zhi Wang <zhiw@nvidia.com>
Signed-off-by: Zhi Wang <zhiw@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/driver.rs | 2 ++
drivers/gpu/nova-core/gsp/commands.rs | 4 ++++
drivers/gpu/nova-core/gsp/fw/commands.rs | 8 ++++++++
3 files changed, 14 insertions(+)
diff --git a/drivers/gpu/nova-core/driver.rs b/drivers/gpu/nova-core/driver.rs
index 84b0e1703150..597343d5da54 100644
--- a/drivers/gpu/nova-core/driver.rs
+++ b/drivers/gpu/nova-core/driver.rs
@@ -47,6 +47,8 @@ pub(crate) struct NovaCore {
const GPU_DMA_BITS: u32 = 47;
pub(crate) type Bar0 = pci::Bar<BAR0_SIZE>;
+#[expect(dead_code)]
+pub(crate) type Bar1 = pci::Bar;
kernel::pci_device_table!(
PCI_TABLE,
diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
index eeecf81a0ffd..9bf0d32c6a7f 100644
--- a/drivers/gpu/nova-core/gsp/commands.rs
+++ b/drivers/gpu/nova-core/gsp/commands.rs
@@ -193,6 +193,9 @@ fn init(&self) -> impl Init<Self::Command, Self::InitError> {
/// The reply from the GSP to the [`GetGspStaticInfo`] command.
pub(crate) struct GetGspStaticInfoReply {
gpu_name: [u8; 64],
+ /// BAR1 Page Directory Entry base address.
+ #[expect(dead_code)]
+ pub(crate) bar1_pde_base: u64,
/// Usable FB (VRAM) region for driver memory allocation.
pub(crate) usable_fb_region: Range<u64>,
/// End of VRAM.
@@ -212,6 +215,7 @@ fn read(
Ok(GetGspStaticInfoReply {
gpu_name: msg.gpu_name_str(),
+ bar1_pde_base: msg.bar1_pde_base(),
usable_fb_region: msg.first_usable_fb_region().ok_or(ENODEV)?,
total_fb_end,
})
diff --git a/drivers/gpu/nova-core/gsp/fw/commands.rs b/drivers/gpu/nova-core/gsp/fw/commands.rs
index 8619f08a689d..a90ff9802304 100644
--- a/drivers/gpu/nova-core/gsp/fw/commands.rs
+++ b/drivers/gpu/nova-core/gsp/fw/commands.rs
@@ -127,6 +127,14 @@ impl GspStaticConfigInfo {
self.0.gpuNameString
}
+ /// Returns the BAR1 Page Directory Entry base address.
+ ///
+ /// This is the root page table address for BAR1 virtual memory,
+ /// set up by GSP-RM firmware.
+ pub(crate) fn bar1_pde_base(&self) -> u64 {
+ self.0.bar1PdeBase
+ }
+
/// Returns an iterator over valid FB regions from GSP firmware data.
fn fb_regions(
&self,
--
2.34.1
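The `bar1_pde_base()` accessor above follows the pattern of wrapping a raw, firmware-layout struct in a newtype and exposing typed, documented getters rather than public fields. A standalone sketch of that pattern follows; the struct layout and field names here are invented for the example, not the GSP firmware ABI.

```rust
// Newtype-accessor sketch: the raw #[repr(C)] struct mirrors the firmware
// layout, while the wrapper exposes only safe, documented getters.

#[repr(C)]
struct RawStaticInfo {
    gpu_name: [u8; 8],
    bar1_pde_base: u64,
}

struct StaticConfigInfo(RawStaticInfo);

impl StaticConfigInfo {
    /// Root page table address for BAR1 virtual memory (firmware-provided).
    fn bar1_pde_base(&self) -> u64 {
        self.0.bar1_pde_base
    }
}

fn main() {
    let info = StaticConfigInfo(RawStaticInfo {
        gpu_name: *b"GPU-XXXX",
        bar1_pde_base: 0x1_0000,
    });
    println!("{:#x}", info.bar1_pde_base());
}
```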
* [PATCH v12 18/22] gpu: nova-core: mm: Add BAR1 user interface
From: Joel Fernandes @ 2026-04-25 21:14 UTC
To: linux-kernel
Add the BAR1 user interface for CPU access to GPU virtual memory through
the BAR1 aperture.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/driver.rs | 1 -
drivers/gpu/nova-core/gpu.rs | 21 +++-
drivers/gpu/nova-core/gsp/commands.rs | 1 -
drivers/gpu/nova-core/mm.rs | 1 +
drivers/gpu/nova-core/mm/bar_user.rs | 156 ++++++++++++++++++++++++++
5 files changed, 177 insertions(+), 3 deletions(-)
create mode 100644 drivers/gpu/nova-core/mm/bar_user.rs
diff --git a/drivers/gpu/nova-core/driver.rs b/drivers/gpu/nova-core/driver.rs
index 597343d5da54..e78a682a7f2a 100644
--- a/drivers/gpu/nova-core/driver.rs
+++ b/drivers/gpu/nova-core/driver.rs
@@ -47,7 +47,6 @@ pub(crate) struct NovaCore {
const GPU_DMA_BITS: u32 = 47;
pub(crate) type Bar0 = pci::Bar<BAR0_SIZE>;
-#[expect(dead_code)]
pub(crate) type Bar1 = pci::Bar;
kernel::pci_device_table!(
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index fce05e9d1a89..73a48dba391d 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -28,7 +28,12 @@
commands::GetGspStaticInfoReply,
Gsp, //
},
- mm::GpuMm,
+ mm::{
+ bar_user::BarUser,
+ pagetable::MmuVersion,
+ GpuMm,
+ VramAddress, //
+ },
regs,
};
@@ -122,6 +127,11 @@ pub(crate) const fn arch(self) -> Architecture {
pub(crate) const fn needs_fwsec_bootloader(self) -> bool {
matches!(self.arch(), Architecture::Turing) || matches!(self, Self::GA100)
}
+
+ /// Returns the MMU version for this chipset.
+ pub(crate) fn mmu_version(self) -> MmuVersion {
+ MmuVersion::from(self.arch())
+ }
}
// TODO
@@ -249,6 +259,8 @@ pub(crate) struct Gpu {
#[pin]
gsp: Gsp,
gsp_static_info: GetGspStaticInfoReply,
+ /// BAR1 user interface for CPU access to GPU virtual memory.
+ bar_user: BarUser,
}
impl Gpu {
@@ -313,6 +325,13 @@ pub(crate) fn new<'a>(
}, pramin_vram_region)?
},
+ // Create BAR1 user interface for CPU access to GPU virtual memory.
+ bar_user: {
+ let pdb_addr = VramAddress::new(gsp_static_info.bar1_pde_base);
+ let bar1_size = pdev.resource_len(1)?;
+ BarUser::new(pdb_addr, spec.chipset, bar1_size)?
+ },
+
bar: devres_bar,
})
}
diff --git a/drivers/gpu/nova-core/gsp/commands.rs b/drivers/gpu/nova-core/gsp/commands.rs
index 9bf0d32c6a7f..32df0fe4b9c2 100644
--- a/drivers/gpu/nova-core/gsp/commands.rs
+++ b/drivers/gpu/nova-core/gsp/commands.rs
@@ -194,7 +194,6 @@ fn init(&self) -> impl Init<Self::Command, Self::InitError> {
pub(crate) struct GetGspStaticInfoReply {
gpu_name: [u8; 64],
/// BAR1 Page Directory Entry base address.
- #[expect(dead_code)]
pub(crate) bar1_pde_base: u64,
/// Usable FB (VRAM) region for driver memory allocation.
pub(crate) usable_fb_region: Range<u64>,
diff --git a/drivers/gpu/nova-core/mm.rs b/drivers/gpu/nova-core/mm.rs
index 88d0b436de4e..7b68028f1dad 100644
--- a/drivers/gpu/nova-core/mm.rs
+++ b/drivers/gpu/nova-core/mm.rs
@@ -32,6 +32,7 @@ macro_rules! impl_pfn_bounded {
};
}
+pub(crate) mod bar_user;
pub(super) mod pagetable;
pub(crate) mod pramin;
pub(super) mod tlb;
diff --git a/drivers/gpu/nova-core/mm/bar_user.rs b/drivers/gpu/nova-core/mm/bar_user.rs
new file mode 100644
index 000000000000..ab56ecd66fb9
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/bar_user.rs
@@ -0,0 +1,156 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! BAR1 user interface for CPU access to GPU virtual memory. Used for USERD
+//! during GPU work submission, and by applications to access GPU buffers via
+//! mmap().
+
+use kernel::{
+ device,
+ io::Io,
+ prelude::*, //
+};
+
+use crate::{
+ driver::Bar1,
+ gpu::Chipset,
+ mm::{
+ vmm::{
+ MappedRange,
+ Vmm, //
+ },
+ GpuMm,
+ Pfn,
+ Vfn,
+ VirtualAddress,
+ VramAddress,
+ PAGE_SIZE, //
+ },
+ num::IntoSafeCast,
+};
+
+/// BAR1 user interface for virtual memory mappings.
+///
+/// Owns a [`Vmm`] instance with virtual address tracking and provides
+/// BAR1-specific mapping and cleanup operations.
+pub(crate) struct BarUser {
+ vmm: Vmm,
+}
+
+impl BarUser {
+ /// Create a new [`BarUser`] with virtual address tracking.
+ pub(crate) fn new(pdb_addr: VramAddress, chipset: Chipset, va_size: u64) -> Result<Self> {
+ Ok(Self {
+ vmm: Vmm::new(pdb_addr, chipset.mmu_version(), va_size)?,
+ })
+ }
+
+ /// Map physical pages to a contiguous BAR1 virtual range.
+ pub(crate) fn map<'a>(
+ &'a mut self,
+ dev: &'a device::Device<device::Bound>,
+ mm: &'a GpuMm,
+ bar: &'a Bar1,
+ pfns: &[Pfn],
+ writable: bool,
+ ) -> Result<BarUserAccess<'a>> {
+ if pfns.is_empty() {
+ return Err(EINVAL);
+ }
+
+ let mapped = self.vmm.map_pages(dev, mm, pfns, None, writable)?;
+
+ Ok(BarUserAccess {
+ vmm: &mut self.vmm,
+ dev,
+ mm,
+ bar,
+ mapped: Some(mapped),
+ })
+ }
+}
+
+/// Access object for a mapped BAR1 region.
+///
+/// Wraps a [`MappedRange`] and provides BAR1 access. When dropped,
+/// unmaps pages and releases the VA range (by passing the range to
+/// [`Vmm::unmap_pages()`], which consumes it).
+pub(crate) struct BarUserAccess<'a> {
+ vmm: &'a mut Vmm,
+ dev: &'a device::Device<device::Bound>,
+ mm: &'a GpuMm,
+ bar: &'a Bar1,
+ /// Needs to be an `Option` so that `Drop` can `take()` it and pass it
+ /// by value to [`Vmm::unmap_pages()`], which consumes it.
+ mapped: Option<MappedRange>,
+}
+
+impl<'a> BarUserAccess<'a> {
+ /// Returns the active mapping.
+ fn mapped(&self) -> &MappedRange {
+ // `mapped` is only `None` after `take()` in `Drop`; accessors are
+ // never called from within `Drop`, so `unwrap()` never panics.
+ self.mapped.as_ref().unwrap()
+ }
+
+ /// Get the base virtual address of this mapping.
+ pub(crate) fn base(&self) -> VirtualAddress {
+ VirtualAddress::from(self.mapped().vfn_start)
+ }
+
+ /// Get the total size of the mapped region in bytes.
+ pub(crate) fn size(&self) -> usize {
+ self.mapped().num_pages * PAGE_SIZE
+ }
+
+ /// Get the starting virtual frame number.
+ pub(crate) fn vfn_start(&self) -> Vfn {
+ self.mapped().vfn_start
+ }
+
+ /// Get the number of pages in this mapping.
+ pub(crate) fn num_pages(&self) -> usize {
+ self.mapped().num_pages
+ }
+
+ /// Translate an offset within this mapping to a BAR1 aperture offset.
+ fn bar_offset(&self, offset: usize) -> Result<usize> {
+ if offset >= self.size() {
+ return Err(EINVAL);
+ }
+
+ let base_vfn: usize = self.mapped().vfn_start.raw().into_safe_cast();
+ let base = base_vfn.checked_mul(PAGE_SIZE).ok_or(EOVERFLOW)?;
+ base.checked_add(offset).ok_or(EOVERFLOW)
+ }
+
+ // Fallible accessors with runtime bounds checking.
+
+ /// Read a 32-bit value at the given offset.
+ pub(crate) fn try_read32(&self, offset: usize) -> Result<u32> {
+ self.bar.try_read32(self.bar_offset(offset)?)
+ }
+
+ /// Write a 32-bit value at the given offset.
+ pub(crate) fn try_write32(&self, value: u32, offset: usize) -> Result {
+ self.bar.try_write32(value, self.bar_offset(offset)?)
+ }
+
+ /// Read a 64-bit value at the given offset.
+ pub(crate) fn try_read64(&self, offset: usize) -> Result<u64> {
+ self.bar.try_read64(self.bar_offset(offset)?)
+ }
+
+ /// Write a 64-bit value at the given offset.
+ pub(crate) fn try_write64(&self, value: u64, offset: usize) -> Result {
+ self.bar.try_write64(value, self.bar_offset(offset)?)
+ }
+}
+
+impl Drop for BarUserAccess<'_> {
+ fn drop(&mut self) {
+ if let Some(mapped) = self.mapped.take() {
+ if self.vmm.unmap_pages(self.dev, self.mm, mapped).is_err() {
+ kernel::pr_warn_once!("BarUserAccess: unmap_pages failed.\n");
+ }
+ }
+ }
+}
--
2.34.1
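`BarUserAccess` above stores its mapping as an `Option` precisely so that `Drop` can move it out with `take()` and hand it to a consuming cleanup function. Here is a standalone sketch of that idiom with simplified stand-in types (not the driver's actual `Vmm`/`MappedRange`):

```rust
// Option::take()-in-Drop sketch: Drop gets &mut self, so a field that must
// be consumed by value is wrapped in Option and moved out with take().

struct Mapping(u64);

struct Access<'a> {
    // Records which mappings were cleaned up (stands in for unmap_pages()).
    unmapped: &'a mut Vec<u64>,
    // Option so Drop can move the mapping out of &mut self.
    mapped: Option<Mapping>,
}

impl Drop for Access<'_> {
    fn drop(&mut self) {
        // take() leaves None behind, guaranteeing cleanup runs at most once.
        if let Some(m) = self.mapped.take() {
            self.unmapped.push(m.0);
        }
    }
}

fn main() {
    let mut unmapped = Vec::new();
    {
        let _a = Access {
            unmapped: &mut unmapped,
            mapped: Some(Mapping(42)),
        };
    } // `_a` dropped here; the mapping is consumed exactly once.
    println!("{unmapped:?}");
}
```

Because `take()` leaves `None` in place, accessors that `unwrap()` the field remain sound as long as they are never called from within `Drop`, which is exactly the invariant the `mapped()` helper above documents.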
* [PATCH v12 19/22] gpu: nova-core: mm: Add BAR1 memory management self-tests
From: Joel Fernandes @ 2026-04-25 21:14 UTC
To: linux-kernel
Add self-tests for BAR1 access, run during driver probe when
CONFIG_NOVA_MM_SELFTESTS is enabled (default disabled). The tests exercise
the Vmm, the GPU buddy allocator, and the BAR1 region, all of which must
function correctly for the tests to pass.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/Kconfig | 10 ++
drivers/gpu/nova-core/driver.rs | 2 +
drivers/gpu/nova-core/gpu.rs | 31 ++++
drivers/gpu/nova-core/mm/bar_user.rs | 244 ++++++++++++++++++++++++++
drivers/gpu/nova-core/mm/pagetable.rs | 35 +++-
5 files changed, 321 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/nova-core/Kconfig b/drivers/gpu/nova-core/Kconfig
index 6513007bf66f..35de55aabcfc 100644
--- a/drivers/gpu/nova-core/Kconfig
+++ b/drivers/gpu/nova-core/Kconfig
@@ -15,3 +15,13 @@ config NOVA_CORE
This driver is work in progress and may not be functional.
If M is selected, the module will be called nova_core.
+
+config NOVA_MM_SELFTESTS
+ bool "Memory management self-tests"
+ depends on NOVA_CORE
+ help
+ Enable self-tests for the memory management subsystem. When enabled,
+ tests are run during GPU probe to verify PRAMIN aperture access,
+ page table walking, and BAR1 virtual memory mapping functionality.
+
+ This is a testing option and is default-disabled.
diff --git a/drivers/gpu/nova-core/driver.rs b/drivers/gpu/nova-core/driver.rs
index e78a682a7f2a..6f95f8672158 100644
--- a/drivers/gpu/nova-core/driver.rs
+++ b/drivers/gpu/nova-core/driver.rs
@@ -97,6 +97,8 @@ fn probe(pdev: &pci::Device<Core>, _info: &Self::IdInfo) -> impl PinInit<Self, E
Ok(try_pin_init!(Self {
gpu <- Gpu::new(pdev, bar.clone(), bar.access(pdev.as_ref())?),
+ // Run optional GPU selftests.
+ _: { gpu.run_selftests(pdev)? },
_reg <- auxiliary::Registration::new(
pdev.as_ref(),
c"nova-drm",
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index 73a48dba391d..e94f5374eb67 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -346,4 +346,35 @@ pub(crate) fn unbind(&self, dev: &device::Device<device::Core>) {
.inspect(|bar| self.sysmem_flush.unregister(bar))
.is_err());
}
+
+ /// Run selftests on the constructed [`Gpu`].
+ pub(crate) fn run_selftests(
+ mut self: Pin<&mut Self>,
+ pdev: &pci::Device<device::Bound>,
+ ) -> Result {
+ self.as_mut().run_mm_selftests(pdev)?;
+ Ok(())
+ }
+
+ #[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+ fn run_mm_selftests(self: Pin<&mut Self>, pdev: &pci::Device<device::Bound>) -> Result {
+ // BAR1 self-tests.
+ let bar1 = Arc::pin_init(pdev.iomap_region(1, c"nova-core/bar1"), GFP_KERNEL)?;
+ let bar1_access = bar1.access(pdev.as_ref())?;
+
+ crate::mm::bar_user::run_self_test(
+ pdev.as_ref(),
+ &self.mm,
+ bar1_access,
+ self.gsp_static_info.bar1_pde_base,
+ self.spec.chipset,
+ )?;
+
+ Ok(())
+ }
+
+ #[cfg(not(CONFIG_NOVA_MM_SELFTESTS))]
+ fn run_mm_selftests(self: Pin<&mut Self>, _pdev: &pci::Device<device::Bound>) -> Result {
+ Ok(())
+ }
}
diff --git a/drivers/gpu/nova-core/mm/bar_user.rs b/drivers/gpu/nova-core/mm/bar_user.rs
index ab56ecd66fb9..8d996e7eaebe 100644
--- a/drivers/gpu/nova-core/mm/bar_user.rs
+++ b/drivers/gpu/nova-core/mm/bar_user.rs
@@ -154,3 +154,247 @@ fn drop(&mut self) {
}
}
}
+
+/// Run MM subsystem self-tests during probe.
+///
+/// Tests page table infrastructure and `BAR1` MMIO access using the `BAR1`
+/// address space. Uses the `GpuMm`'s buddy allocator to allocate page tables
+/// and test pages as needed.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+pub(crate) fn run_self_test(
+ pdev: &device::Device<device::Bound>,
+ mm: &GpuMm,
+ bar1: &Bar1,
+ bar1_pdb: u64,
+ chipset: Chipset,
+) -> Result {
+ use kernel::gpu::buddy::{
+ GpuBuddyAllocFlags,
+ GpuBuddyAllocMode, //
+ };
+ use kernel::ptr::Alignment;
+ use kernel::sizes::{
+ SZ_16K,
+ SZ_32K,
+ SZ_4K,
+ SZ_64K, //
+ };
+
+ // Test patterns.
+ const PATTERN_PRAMIN: u32 = 0xDEAD_BEEF;
+ const PATTERN_BAR1: u32 = 0xCAFE_BABE;
+
+ let dev = pdev;
+ dev_info!(dev, "MM: Starting self-test...\n");
+
+ let pdb_addr = VramAddress::new(bar1_pdb);
+
+ // Check if initial page tables are in VRAM.
+ if crate::mm::pagetable::check_pdb_valid(pdev, mm.pramin(), pdb_addr, chipset).is_err() {
+ dev_info!(dev, "MM: Self-test SKIPPED - no valid VRAM page tables\n");
+ return Ok(());
+ }
+
+ // Set up a test page from the buddy allocator.
+ let test_page_blocks = KBox::pin_init(
+ mm.buddy().alloc_blocks(
+ GpuBuddyAllocMode::Simple,
+ SZ_4K.into_safe_cast(),
+ Alignment::new::<SZ_4K>(),
+ GpuBuddyAllocFlags::default(),
+ ),
+ GFP_KERNEL,
+ )?;
+ let test_vram_offset = test_page_blocks.iter().next().ok_or(ENOMEM)?.offset();
+ let test_vram = VramAddress::new(test_vram_offset);
+ let test_pfn = Pfn::from(test_vram);
+
+ // Create a VMM of size 64K to track virtual memory mappings.
+ let mut vmm = Vmm::new(pdb_addr, chipset.mmu_version(), SZ_64K.into_safe_cast())?;
+
+ // Create a test mapping.
+ let mapped = vmm.map_pages(pdev, mm, &[test_pfn], None, true)?;
+ let test_vfn = mapped.vfn_start;
+
+ // Pre-compute test addresses for the PRAMIN to BAR1 read test.
+ let vfn_offset: usize = test_vfn.raw().into_safe_cast();
+ let bar1_base_offset = vfn_offset.checked_mul(PAGE_SIZE).ok_or(EOVERFLOW)?;
+ let bar1_read_offset: usize = bar1_base_offset + 0x100;
+ let vram_read_addr: usize = test_vram.raw() + 0x100;
+
+ // Test 1: Write via PRAMIN, read via BAR1.
+ {
+ let mut window = mm.pramin().get_window(pdev)?;
+ window.try_write32(vram_read_addr, PATTERN_PRAMIN)?;
+ }
+
+ // Read back via BAR1 aperture.
+ let bar1_value = bar1.try_read32(bar1_read_offset)?;
+
+ let test1_passed = if bar1_value == PATTERN_PRAMIN {
+ true
+ } else {
+ dev_err!(
+ dev,
+ "MM: Test 1 FAILED - Expected {:#010x}, got {:#010x}\n",
+ PATTERN_PRAMIN,
+ bar1_value
+ );
+ false
+ };
+
+ // Cleanup - invalidate PTE.
+ vmm.unmap_pages(pdev, mm, mapped)?;
+
+ // Test 2: Two-phase prepare/execute API.
+ let prepared = vmm.prepare_map(pdev, mm, 1, None)?;
+ let mapped2 = vmm.execute_map(pdev, mm, prepared, &[test_pfn], true)?;
+ let readback = vmm.read_mapping(pdev, mm, mapped2.vfn_start)?;
+ let test2_passed = if readback == Some(test_pfn) {
+ true
+ } else {
+ dev_err!(dev, "MM: Test 2 FAILED - Two-phase map readback mismatch\n");
+ false
+ };
+ vmm.unmap_pages(pdev, mm, mapped2)?;
+
+ // Test 3: Range-constrained allocation with a hole — exercises block.size()-driven
+ // BAR1 mapping. A 4K hole is punched at base+16K, then a single 32K allocation
+ // is requested within [base, base+36K). The buddy allocator must split around the
+ // hole, returning multiple blocks (expected: {16K, 4K, 8K, 4K} = 32K total).
+ // Each block is mapped into BAR1 and verified via PRAMIN read-back.
+ //
+ // Address layout (base = 0x10000):
+ // [ 16K ] [HOLE 4K] [4K] [ 8K ] [4K]
+ // 0x10000 0x14000 0x15000 0x16000 0x18000 0x19000
+ let range_base: u64 = SZ_64K.into_safe_cast();
+ let sz_4k: u64 = SZ_4K.into_safe_cast();
+ let sz_16k: u64 = SZ_16K.into_safe_cast();
+ let sz_32k_4k: u64 = (SZ_32K + SZ_4K).into_safe_cast();
+
+ // Punch a 4K hole at base+16K so the subsequent 32K allocation must split.
+ let _hole = KBox::pin_init(
+ mm.buddy().alloc_blocks(
+ GpuBuddyAllocMode::Range(range_base + sz_16k..range_base + sz_16k + sz_4k),
+ SZ_4K.into_safe_cast(),
+ Alignment::new::<SZ_4K>(),
+ GpuBuddyAllocFlags::default(),
+ ),
+ GFP_KERNEL,
+ )?;
+
+ // Allocate 32K within [base, base+36K). The hole forces the allocator to return
+ // split blocks whose sizes are determined by buddy alignment.
+ let blocks = KBox::pin_init(
+ mm.buddy().alloc_blocks(
+ GpuBuddyAllocMode::Range(range_base..range_base + sz_32k_4k),
+ SZ_32K.into_safe_cast(),
+ Alignment::new::<SZ_4K>(),
+ GpuBuddyAllocFlags::default(),
+ ),
+ GFP_KERNEL,
+ )?;
+
+ let mut test3_passed = true;
+ let mut total_size = 0usize;
+
+ for block in blocks.iter() {
+ total_size += IntoSafeCast::<usize>::into_safe_cast(block.size());
+
+ // Map all pages of this block.
+ let page_size: u64 = PAGE_SIZE.into_safe_cast();
+ let num_pages: usize = (block.size() / page_size).into_safe_cast();
+
+ let mut pfns = KVec::new();
+ for j in 0..num_pages {
+ let j_u64: u64 = j.into_safe_cast();
+ pfns.push(
+ Pfn::from(VramAddress::new(
+ block.offset() + j_u64.checked_mul(page_size).ok_or(EOVERFLOW)?,
+ )),
+ GFP_KERNEL,
+ )?;
+ }
+
+ let mapped = vmm.map_pages(pdev, mm, &pfns, None, true)?;
+ let bar1_base_vfn: usize = mapped.vfn_start.raw().into_safe_cast();
+ let bar1_base = bar1_base_vfn.checked_mul(PAGE_SIZE).ok_or(EOVERFLOW)?;
+
+ for j in 0..num_pages {
+ let page_bar1_off = bar1_base + j * PAGE_SIZE;
+ let j_u64: u64 = j.into_safe_cast();
+ let page_phys = block.offset()
+ + j_u64
+ .checked_mul(PAGE_SIZE.into_safe_cast())
+ .ok_or(EOVERFLOW)?;
+
+ bar1.try_write32(PATTERN_BAR1, page_bar1_off)?;
+
+ let pramin_val = {
+ let mut window = mm.pramin().get_window(pdev)?;
+ window.try_read32(page_phys.into_safe_cast())?
+ };
+
+ if pramin_val != PATTERN_BAR1 {
+ dev_err!(
+ dev,
+ "MM: Test 3 FAILED block offset {:#x} page {} (val={:#x})\n",
+ block.offset(),
+ j,
+ pramin_val
+ );
+ test3_passed = false;
+ }
+ }
+
+ vmm.unmap_pages(pdev, mm, mapped)?;
+ }
+
+ // Verify aggregate: all returned block sizes must sum to allocation size.
+ if total_size != SZ_32K {
+ dev_err!(
+ dev,
+ "MM: Test 3 FAILED - total size {} != expected {}\n",
+ total_size,
+ SZ_32K
+ );
+ test3_passed = false;
+ }
+
+ // Release Tests 1-3's Vmm before Test 4 constructs a fresh BarUser on
+ // the same PDB.
+ drop(vmm);
+
+ // Test 4: Exercise `BarUser::map()` end-to-end.
+ let mut bar_user = BarUser::new(pdb_addr, chipset, SZ_64K.into_safe_cast())?;
+ let test4_passed = {
+ let access = bar_user.map(pdev, mm, bar1, &[test_pfn], true)?;
+
+ // Write pattern via PRAMIN, read via BarUserAccess.
+ {
+ let mut window = mm.pramin().get_window(pdev)?;
+ window.try_write32(test_vram.raw(), PATTERN_BAR1)?;
+ }
+
+ let readback = access.try_read32(0)?;
+ if readback == PATTERN_BAR1 {
+ true
+ } else {
+ dev_err!(
+ dev,
+ "MM: Test 4 FAILED - Expected {:#010x}, got {:#010x}\n",
+ PATTERN_BAR1,
+ readback
+ );
+ false
+ }
+ };
+
+ if test1_passed && test2_passed && test3_passed && test4_passed {
+ dev_info!(dev, "MM: All self-tests PASSED\n");
+ Ok(())
+ } else {
+ dev_err!(dev, "MM: Self-tests FAILED\n");
+ Err(EIO)
+ }
+}
diff --git a/drivers/gpu/nova-core/mm/pagetable.rs b/drivers/gpu/nova-core/mm/pagetable.rs
index 4070070922a4..8fd179ce7476 100644
--- a/drivers/gpu/nova-core/mm/pagetable.rs
+++ b/drivers/gpu/nova-core/mm/pagetable.rs
@@ -22,7 +22,10 @@
VirtualAddress,
VramAddress, //
};
-use kernel::prelude::*;
+use kernel::{
+ device,
+ prelude::*, //
+};
/// Extracts the page table index at a given level from a virtual address.
pub(super) trait VaLevelIndex {
@@ -383,3 +386,33 @@ fn from(val: AperturePde) -> Self {
Bounded::from_expr(val as u64 & 0x3)
}
}
+
+/// Check if the PDB has valid, VRAM-backed page tables.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+fn check_pdb_inner<M: MmuConfig>(
+ dev: &device::Device<device::Bound>,
+ pramin: &pramin::Pramin,
+ pdb_addr: VramAddress,
+) -> Result {
+ let mut window = pramin.get_window(dev)?;
+ let raw = window.try_read64(pdb_addr.raw())?;
+
+ if !M::Pde::new(raw).is_valid_vram() {
+ return Err(ENOENT);
+ }
+ Ok(())
+}
+
+/// Check if the PDB has valid, VRAM-backed page tables, dispatching by MMU version.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+pub(super) fn check_pdb_valid(
+ dev: &device::Device<device::Bound>,
+ pramin: &pramin::Pramin,
+ pdb_addr: VramAddress,
+ chipset: crate::gpu::Chipset,
+) -> Result {
+ match MmuVersion::from(chipset.arch()) {
+ MmuVersion::V2 => check_pdb_inner::<MmuV2>(dev, pramin, pdb_addr),
+ MmuVersion::V3 => check_pdb_inner::<MmuV3>(dev, pramin, pdb_addr),
+ }
+}
--
2.34.1
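[Archive note: the buddy split exercised by Test 3 in the patch above can be illustrated in plain userspace Rust. This is a sketch of the generic "natural alignment capped by remaining space" rule, not nova-core's actual allocator; the function name `split_segment` is invented for illustration.]

```rust
/// Greedily split a free segment [addr, end) into buddy-style blocks:
/// each block has a power-of-two size and is naturally aligned to it.
fn split_segment(mut addr: u64, end: u64) -> Vec<u64> {
    let mut sizes = Vec::new();
    while addr < end {
        // Largest power of two dividing `addr` (natural alignment)...
        let align = 1u64 << addr.trailing_zeros().min(63);
        // ...capped by the largest power of two fitting in the remainder.
        let remain = end - addr;
        let fit = 1u64 << (63 - remain.leading_zeros());
        let size = align.min(fit);
        sizes.push(size);
        addr += size;
    }
    sizes
}

fn main() {
    // Free segments around the 4K hole at base + 16K (base = 0x10000):
    let mut sizes = split_segment(0x10000, 0x14000); // below the hole
    sizes.extend(split_segment(0x15000, 0x19000)); // above the hole
    assert_eq!(sizes, [0x4000, 0x1000, 0x2000, 0x1000]); // {16K, 4K, 8K, 4K}
    assert_eq!(sizes.iter().sum::<u64>(), 0x8000); // 32K total
}
```

The second segment starts at 0x15000, which is only 4K-aligned, so a 4K block comes first even though 16K remain; that is why the hole forces the {16K, 4K, 8K, 4K} shape the test expects.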
* [PATCH v12 20/22] gpu: nova-core: mm: Add PRAMIN aperture self-tests
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Bjorn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Dave Airlie, Daniel Almeida, dri-devel, rust-for-linux, nova-gpu,
Nikola Djukic, David Airlie, Boqun Feng, John Hubbard,
Alistair Popple, Timur Tabi, Edwin Peer, Alexandre Courbot,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, alexeyi, Eliot Courtney, joel, linux-doc,
Joel Fernandes
Add self-tests for the PRAMIN aperture mechanism to verify correct
operation during GPU probe. The tests validate various alignment
requirements and corner cases.
The tests are disabled by default and gated behind CONFIG_NOVA_MM_SELFTESTS.
When enabled, they run after GSP boot during probe.
Cc: Nikola Djukic <ndjukic@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
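[Archive note: the misaligned-access rejections tested below boil down to one rule: a width-N access must be N-byte aligned. A minimal userspace sketch of that check, not the driver's accessor code (the helper `check_aligned` is invented for illustration):]

```rust
/// Hypothetical sketch of the alignment check a PRAMIN-style accessor
/// would perform before a multi-byte access: a width-N read or write
/// must be N-byte aligned, otherwise it is rejected.
fn check_aligned(offset: usize, width: usize) -> Result<(), &'static str> {
    if !width.is_power_of_two() {
        return Err("width must be a power of two");
    }
    if offset % width != 0 {
        return Err("misaligned access");
    }
    Ok(())
}

fn main() {
    assert!(check_aligned(0x20, 4).is_ok()); // 4-byte-aligned u32: accepted
    assert!(check_aligned(0x21, 2).is_err()); // u16 at odd offset: rejected
    assert!(check_aligned(0x32, 4).is_err()); // u32 only 2-byte aligned: rejected
    assert!(check_aligned(0x44, 8).is_err()); // u64 only 4-byte aligned: rejected
}
```

The offsets mirror the ones used by test_misaligned_access in the diff below.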
drivers/gpu/nova-core/gpu.rs | 3 +
drivers/gpu/nova-core/mm/pramin.rs | 211 +++++++++++++++++++++++++++++
2 files changed, 214 insertions(+)
diff --git a/drivers/gpu/nova-core/gpu.rs b/drivers/gpu/nova-core/gpu.rs
index e94f5374eb67..b3c33bebd8d6 100644
--- a/drivers/gpu/nova-core/gpu.rs
+++ b/drivers/gpu/nova-core/gpu.rs
@@ -358,6 +358,9 @@ pub(crate) fn run_selftests(
#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
fn run_mm_selftests(self: Pin<&mut Self>, pdev: &pci::Device<device::Bound>) -> Result {
+ // PRAMIN aperture self-tests.
+ crate::mm::pramin::run_self_test(pdev.as_ref(), self.mm.pramin(), self.spec.chipset)?;
+
// BAR1 self-tests.
let bar1 = Arc::pin_init(pdev.iomap_region(1, c"nova-core/bar1"), GFP_KERNEL)?;
let bar1_access = bar1.access(pdev.as_ref())?;
diff --git a/drivers/gpu/nova-core/mm/pramin.rs b/drivers/gpu/nova-core/mm/pramin.rs
index cc393f1509d7..6a62cb5aaf53 100644
--- a/drivers/gpu/nova-core/mm/pramin.rs
+++ b/drivers/gpu/nova-core/mm/pramin.rs
@@ -201,6 +201,11 @@ pub(crate) fn new(
}))
}
+ /// Returns the valid VRAM region for this PRAMIN instance.
+ fn vram_region(&self) -> &Range<u64> {
+ &self.vram_region
+ }
+
/// Acquire exclusive PRAMIN access.
///
/// Returns a [`PraminWindow`] guard that provides VRAM read/write accessors.
@@ -302,3 +307,209 @@ fn compute_window(
define_pramin_write!(try_write32, u32);
define_pramin_write!(try_write64, u64);
}
+
+/// Offset within the VRAM region to use as the self-test area.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+const SELFTEST_REGION_OFFSET: usize = 0x1000;
+
+/// Test read/write at byte-aligned locations.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+fn test_byte_readwrite(
+ dev: &kernel::device::Device,
+ win: &mut PraminWindow<'_>,
+ base: usize,
+) -> Result {
+ for i in 0u8..4 {
+ let offset = base + 1 + usize::from(i);
+ let val = 0xA0 + i;
+ win.try_write8(offset, val)?;
+ let read_val = win.try_read8(offset)?;
+ if read_val != val {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - offset {:#x}: wrote {:#x}, read {:#x}\n",
+ offset,
+ val,
+ read_val
+ );
+ return Err(EIO);
+ }
+ }
+ Ok(())
+}
+
+/// Test writing a `u32` and reading back as individual `u8`s.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+fn test_u32_as_bytes(
+ dev: &kernel::device::Device,
+ win: &mut PraminWindow<'_>,
+ base: usize,
+) -> Result {
+ let offset = base + 0x10;
+ let val: u32 = 0xDEADBEEF;
+ win.try_write32(offset, val)?;
+
+ // Read back as individual bytes (little-endian: EF BE AD DE).
+ let expected_bytes: [u8; 4] = [0xEF, 0xBE, 0xAD, 0xDE];
+ for (i, &expected) in expected_bytes.iter().enumerate() {
+ let read_val = win.try_read8(offset + i)?;
+ if read_val != expected {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - offset {:#x}: expected {:#x}, read {:#x}\n",
+ offset + i,
+ expected,
+ read_val
+ );
+ return Err(EIO);
+ }
+ }
+ Ok(())
+}
+
+/// Test window repositioning across 1MB boundaries.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+fn test_window_reposition(
+ dev: &kernel::device::Device,
+ win: &mut PraminWindow<'_>,
+ base: usize,
+) -> Result {
+ let offset_a: usize = base;
+ let offset_b: usize = base + 0x200000; // base + 2MB (different 1MB region).
+ let val_a: u32 = 0x11111111;
+ let val_b: u32 = 0x22222222;
+
+ win.try_write32(offset_a, val_a)?;
+ win.try_write32(offset_b, val_b)?;
+
+ let read_b = win.try_read32(offset_b)?;
+ if read_b != val_b {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - offset {:#x}: expected {:#x}, read {:#x}\n",
+ offset_b,
+ val_b,
+ read_b
+ );
+ return Err(EIO);
+ }
+
+ let read_a = win.try_read32(offset_a)?;
+ if read_a != val_a {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - offset {:#x}: expected {:#x}, read {:#x}\n",
+ offset_a,
+ val_a,
+ read_a
+ );
+ return Err(EIO);
+ }
+ Ok(())
+}
+
+/// Test that offsets outside the VRAM region are rejected.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+fn test_invalid_offset(
+ dev: &kernel::device::Device,
+ win: &mut PraminWindow<'_>,
+ vram_end: u64,
+) -> Result {
+ let invalid_offset: usize = vram_end.into_safe_cast();
+ let result = win.try_read32(invalid_offset);
+ if result.is_ok() {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - read at invalid offset {:#x} should have failed\n",
+ invalid_offset
+ );
+ return Err(EIO);
+ }
+ Ok(())
+}
+
+/// Test that misaligned multi-byte accesses are rejected.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+fn test_misaligned_access(
+ dev: &kernel::device::Device,
+ win: &mut PraminWindow<'_>,
+ base: usize,
+) -> Result {
+ // `u16` at odd offset (not 2-byte aligned).
+ let offset_u16 = base + 0x21;
+ if win.try_write16(offset_u16, 0xABCD).is_ok() {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - misaligned u16 write at {:#x} should have failed\n",
+ offset_u16
+ );
+ return Err(EIO);
+ }
+
+ // `u32` at 2-byte-aligned (not 4-byte-aligned) offset.
+ let offset_u32 = base + 0x32;
+ if win.try_write32(offset_u32, 0x12345678).is_ok() {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - misaligned u32 write at {:#x} should have failed\n",
+ offset_u32
+ );
+ return Err(EIO);
+ }
+
+ // `u64` read at 4-byte-aligned (not 8-byte-aligned) offset.
+ let offset_u64 = base + 0x44;
+ if win.try_read64(offset_u64).is_ok() {
+ dev_err!(
+ dev,
+ "PRAMIN: FAIL - misaligned u64 read at {:#x} should have failed\n",
+ offset_u64
+ );
+ return Err(EIO);
+ }
+ Ok(())
+}
+
+/// Run PRAMIN self-tests during boot if self-tests are enabled.
+#[cfg(CONFIG_NOVA_MM_SELFTESTS)]
+pub(crate) fn run_self_test(
+ pdev: &device::Device<device::Bound>,
+ pramin: &Pramin,
+ chipset: crate::gpu::Chipset,
+) -> Result {
+ use crate::gpu::Architecture;
+
+ let dev = pdev.as_ref();
+
+ // PRAMIN uses NV_PBUS_BAR0_WINDOW which is only available on pre-Hopper GPUs.
+ // Hopper+ uses NV_XAL_EP_BAR0_WINDOW instead, requiring a separate HAL that
+ // has not been implemented yet.
+ if !matches!(
+ chipset.arch(),
+ Architecture::Turing | Architecture::Ampere | Architecture::Ada
+ ) {
+ dev_info!(
+ dev,
+ "PRAMIN: Skipping self-tests for {:?} (only pre-Hopper supported)\n",
+ chipset
+ );
+ return Ok(());
+ }
+
+ dev_info!(dev, "PRAMIN: Starting self-test...\n");
+
+ let vram_region = pramin.vram_region();
+ let base: usize = vram_region.start.into_safe_cast();
+ let base = base + SELFTEST_REGION_OFFSET;
+ let vram_end = vram_region.end;
+ let mut win = pramin.get_window(pdev)?;
+
+ test_byte_readwrite(dev, &mut win, base)?;
+ test_u32_as_bytes(dev, &mut win, base)?;
+ test_window_reposition(dev, &mut win, base)?;
+ test_invalid_offset(dev, &mut win, vram_end)?;
+ test_misaligned_access(dev, &mut win, base)?;
+
+ dev_info!(dev, "PRAMIN: All self-tests PASSED\n");
+ Ok(())
+}
--
2.34.1
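[Archive note: the byte sequence expected by test_u32_as_bytes in the patch above follows directly from little-endian layout; a plain-Rust check, independent of the driver:]

```rust
fn main() {
    // 0xDEADBEEF stored little-endian reads back byte-wise as EF BE AD DE,
    // matching the expected_bytes array in test_u32_as_bytes.
    let val: u32 = 0xDEADBEEF;
    assert_eq!(val.to_le_bytes(), [0xEF, 0xBE, 0xAD, 0xDE]);
}
```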
* [PATCH v12 21/22] gpu: nova-core: mm: pramin: drop useless as_ref() in run_self_test
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Bjorn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Dave Airlie, Daniel Almeida, dri-devel, rust-for-linux, nova-gpu,
Nikola Djukic, David Airlie, Boqun Feng, John Hubbard,
Alistair Popple, Timur Tabi, Edwin Peer, Alexandre Courbot,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, alexeyi, Eliot Courtney, joel, linux-doc,
Joel Fernandes
`pdev` is already `&device::Device<device::Bound>`, so calling `.as_ref()`
on it just returns the same reference. Bind `dev` to `pdev` directly to
silence the `clippy::useless_asref` warning.
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
drivers/gpu/nova-core/mm/pramin.rs | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/nova-core/mm/pramin.rs b/drivers/gpu/nova-core/mm/pramin.rs
index 6a62cb5aaf53..e967cababac2 100644
--- a/drivers/gpu/nova-core/mm/pramin.rs
+++ b/drivers/gpu/nova-core/mm/pramin.rs
@@ -479,7 +479,7 @@ pub(crate) fn run_self_test(
) -> Result {
use crate::gpu::Architecture;
- let dev = pdev.as_ref();
+ let dev = pdev;
// PRAMIN uses NV_PBUS_BAR0_WINDOW which is only available on pre-Hopper GPUs.
// Hopper+ uses NV_XAL_EP_BAR0_WINDOW instead, requiring a separate HAL that
--
2.34.1
* [PATCH v12 22/22] rust: maple_tree: implement Send and Sync for MapleTree
From: Joel Fernandes @ 2026-04-25 21:14 UTC (permalink / raw)
To: linux-kernel
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Bjorn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Dave Airlie, Daniel Almeida, dri-devel, rust-for-linux, nova-gpu,
Nikola Djukic, David Airlie, Boqun Feng, John Hubbard,
Alistair Popple, Timur Tabi, Edwin Peer, Alexandre Courbot,
Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh,
Philipp Stanner, alexeyi, Eliot Courtney, joel, linux-doc,
Joel Fernandes
The C maple_tree struct contains a *mut c_void, which prevents Rust from
auto-deriving Send/Sync. When MapleTree is used in nova-core's Vmm, this
propagates up through MapleTreeAlloc to Vmm, BarUser, Gpu, and NovaCore,
causing NovaCore to fail the Send bound required by pci::Driver:
error[E0277]: `*mut c_void` cannot be sent between threads safely
--> drivers/gpu/nova-core/driver.rs:77:22
|
77 | impl pci::Driver for NovaCore {
| ^^^^^^^^ `*mut c_void` cannot be sent between threads safely
|
= help: within `MapleTreeAlloc<()>`, the trait `Send` is not implemented for `*mut c_void`
note: required because it appears within the type `kernel::bindings::maple_tree`
note: required because it appears within the type `Opaque<kernel::bindings::maple_tree>`
note: required because it appears within the type `MapleTree<()>`
note: required because it appears within the type `MapleTreeAlloc<()>`
= note: required for `Box<MapleTreeAlloc<()>, Kmalloc>` to implement `Send`
note: required because it appears within the type `core::pin::Pin<Box<MapleTreeAlloc<()>, Kmalloc>>`
note: required because it appears within the type `Vmm`
note: required because it appears within the type `BarUser`
note: required because it appears within the type `Gpu`
note: required because it appears within the type `NovaCore`
note: required by a bound in `kernel::pci::Driver`
--> rust/kernel/pci.rs:294:19
Implement Send and Sync for MapleTree. The tree contains no thread-local
state, and all shared access goes through the internal ma_lock spinlock.
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
rust/kernel/maple_tree.rs | 29 +++++++++++++++++++++++------
1 file changed, 23 insertions(+), 6 deletions(-)
diff --git a/rust/kernel/maple_tree.rs b/rust/kernel/maple_tree.rs
index 265d6396a78a..2400c905270d 100644
--- a/rust/kernel/maple_tree.rs
+++ b/rust/kernel/maple_tree.rs
@@ -16,7 +16,11 @@
alloc::Flags,
error::to_result,
prelude::*,
- types::{ForeignOwnable, Opaque},
+ types::{
+ ForeignOwnable,
+ NotThreadSafe,
+ Opaque, //
+ },
};
/// A maple tree optimized for storing non-overlapping ranges.
@@ -240,7 +244,10 @@ pub fn lock(&self) -> MapleGuard<'_, T> {
unsafe { bindings::spin_lock(self.ma_lock()) };
// INVARIANT: We just took the spinlock.
- MapleGuard(self)
+ MapleGuard {
+ tree: self,
+ _not_send: NotThreadSafe,
+ }
}
#[inline]
@@ -302,19 +309,29 @@ fn drop(mut self: Pin<&mut Self>) {
}
}
+// SAFETY: `MapleTree<T>` is `Send` if `T` is `Send` because `MapleTree` owns its elements.
+unsafe impl<T: ForeignOwnable + Send> Send for MapleTree<T> {}
+// SAFETY: `&MapleTree<T>` never hands out `&T`; all entry access is serialized
+// by `ma_lock` or `&mut Guard`, so `T: Send` suffices (`T: Sync` not required).
+unsafe impl<T: ForeignOwnable + Send> Sync for MapleTree<T> {}
+
/// A reference to a [`MapleTree`] that owns the inner lock.
///
/// # Invariants
///
/// This guard owns the inner spinlock.
#[must_use = "if unused, the lock will be immediately unlocked"]
-pub struct MapleGuard<'tree, T: ForeignOwnable>(&'tree MapleTree<T>);
+pub struct MapleGuard<'tree, T: ForeignOwnable> {
+ tree: &'tree MapleTree<T>,
+ // A held spinlock must be released on the same CPU that acquired it.
+ _not_send: NotThreadSafe,
+}
impl<'tree, T: ForeignOwnable> Drop for MapleGuard<'tree, T> {
#[inline]
fn drop(&mut self) {
// SAFETY: By the type invariants, we hold this spinlock.
- unsafe { bindings::spin_unlock(self.0.ma_lock()) };
+ unsafe { bindings::spin_unlock(self.tree.ma_lock()) };
}
}
@@ -323,7 +340,7 @@ impl<'tree, T: ForeignOwnable> MapleGuard<'tree, T> {
pub fn ma_state(&mut self, first: usize, end: usize) -> MaState<'_, T> {
// SAFETY: The `MaState` borrows this `MapleGuard`, so it can also borrow the `MapleGuard`s
// read/write permissions to the maple tree.
- unsafe { MaState::new_raw(self.0, first, end) }
+ unsafe { MaState::new_raw(self.tree, first, end) }
}
/// Load the value at the given index.
@@ -375,7 +392,7 @@ pub fn ma_state(&mut self, first: usize, end: usize) -> MaState<'_, T> {
#[inline]
pub fn load(&mut self, index: usize) -> Option<T::BorrowedMut<'_>> {
// SAFETY: `self.tree` contains a valid maple tree.
- let ret = unsafe { bindings::mtree_load(self.0.tree.get(), index) };
+ let ret = unsafe { bindings::mtree_load(self.tree.tree.get(), index) };
if ret.is_null() {
return None;
}
--
2.34.1