public inbox for linux-fbdev@vger.kernel.org
From: Joel Fernandes <joelagnelf@nvidia.com>
To: linux-kernel@vger.kernel.org
Cc: "Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
	"Maxime Ripard" <mripard@kernel.org>,
	"Thomas Zimmermann" <tzimmermann@suse.de>,
	"David Airlie" <airlied@gmail.com>,
	"Simona Vetter" <simona@ffwll.ch>,
	"Jonathan Corbet" <corbet@lwn.net>,
	"Alex Deucher" <alexander.deucher@amd.com>,
	"Christian König" <christian.koenig@amd.com>,
	"Jani Nikula" <jani.nikula@linux.intel.com>,
	"Joonas Lahtinen" <joonas.lahtinen@linux.intel.com>,
	"Rodrigo Vivi" <rodrigo.vivi@intel.com>,
	"Tvrtko Ursulin" <tursulin@ursulin.net>,
	"Huang Rui" <ray.huang@amd.com>,
	"Matthew Auld" <matthew.auld@intel.com>,
	"Matthew Brost" <matthew.brost@intel.com>,
	"Lucas De Marchi" <lucas.demarchi@intel.com>,
	"Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
	"Helge Deller" <deller@gmx.de>,
	"Danilo Krummrich" <dakr@kernel.org>,
	"Alice Ryhl" <aliceryhl@google.com>,
	"Miguel Ojeda" <ojeda@kernel.org>,
	"Alex Gaynor" <alex.gaynor@gmail.com>,
	"Boqun Feng" <boqun.feng@gmail.com>,
	"Gary Guo" <gary@garyguo.net>,
	"Björn Roy Baron" <bjorn3_gh@protonmail.com>,
	"Benno Lossin" <lossin@kernel.org>,
	"Andreas Hindborg" <a.hindborg@kernel.org>,
	"Trevor Gross" <tmgross@umich.edu>,
	"John Hubbard" <jhubbard@nvidia.com>,
	"Alistair Popple" <apopple@nvidia.com>,
	"Timur Tabi" <ttabi@nvidia.com>, "Edwin Peer" <epeer@nvidia.com>,
	"Alexandre Courbot" <acourbot@nvidia.com>,
	"Andrea Righi" <arighi@nvidia.com>,
	"Andy Ritger" <aritger@nvidia.com>, "Zhi Wang" <zhiw@nvidia.com>,
	"Alexey Ivanov" <alexeyi@nvidia.com>,
	"Balbir Singh" <balbirs@nvidia.com>,
	"Philipp Stanner" <phasta@kernel.org>,
	"Elle Rhumsaa" <elle@weathered-steel.dev>,
	"Daniel Almeida" <daniel.almeida@collabora.com>,
	joel@joelfernandes.org, nouveau@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, rust-for-linux@vger.kernel.org,
	linux-doc@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org,
	linux-fbdev@vger.kernel.org,
	"Joel Fernandes" <joelagnelf@nvidia.com>
Subject: [PATCH RFC v6 16/26] nova-core: mm: Add page table walker for MMU v2
Date: Tue, 20 Jan 2026 15:42:53 -0500
Message-ID: <20260120204303.3229303-17-joelagnelf@nvidia.com>
In-Reply-To: <20260120204303.3229303-1-joelagnelf@nvidia.com>

Add the page table walker implementation that traverses the 5-level
page table hierarchy (PDB -> L1 -> L2 -> L3 -> L4) to resolve virtual
addresses to physical addresses or find PTE locations.

The walker provides:
- walk_to_pte_lookup(): Walk existing page tables only (no allocation)
- walk_to_pte_allocate(): Walk page tables, allocating missing
  intermediate tables through a PageTableAllocator
- Helper functions for reading/writing PDEs and PTEs via PRAMIN

The walker uses the GpuMm API for centralized access to the PRAMIN
window.

Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
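Note for reviewers (not part of the commit): the per-level index
extraction that walk_to_pte_inner() relies on (va.level_index() across
the five levels) can be sketched in plain Rust as below. The shift and
width values here are illustrative assumptions for the sketch only; the
authoritative field layout is whatever the MMU v2 page table format in
ver2.rs defines.

```rust
// Sketch of 5-level index extraction (PDB -> L1 -> L2 -> L3 -> L4).
// NOTE: these shift/width values are illustrative assumptions, not
// the authoritative MMU v2 layout.
const LEVEL_SHIFTS: [u32; 5] = [47, 38, 29, 21, 12];
const LEVEL_BITS: [u32; 5] = [2, 9, 9, 8, 9];

/// Extract the page table index for `level` (0 = PDB .. 4 = PTE level).
fn level_index(va: u64, level: usize) -> u64 {
    (va >> LEVEL_SHIFTS[level]) & ((1u64 << LEVEL_BITS[level]) - 1)
}

fn main() {
    // A 4 KiB-aligned VA: only the PTE-level index is non-zero.
    let va = 0x3000u64;
    assert_eq!(level_index(va, 4), 3); // 0x3000 >> 12 == 3
    assert_eq!(level_index(va, 3), 0); // all higher-level indices are 0
    println!("PTE index for {va:#x}: {}", level_index(va, 4));
}
```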
 drivers/gpu/nova-core/mm/pagetable/mod.rs  |  13 +
 drivers/gpu/nova-core/mm/pagetable/walk.rs | 285 +++++++++++++++++++++
 2 files changed, 298 insertions(+)
 create mode 100644 drivers/gpu/nova-core/mm/pagetable/walk.rs

diff --git a/drivers/gpu/nova-core/mm/pagetable/mod.rs b/drivers/gpu/nova-core/mm/pagetable/mod.rs
index 72bc7cda8df6..4c77d4953fbd 100644
--- a/drivers/gpu/nova-core/mm/pagetable/mod.rs
+++ b/drivers/gpu/nova-core/mm/pagetable/mod.rs
@@ -9,12 +9,25 @@
 #![expect(dead_code)]
 pub(crate) mod ver2;
 pub(crate) mod ver3;
+pub(crate) mod walk;
 
 use super::{
+    GpuMm,
     Pfn,
     VramAddress, //
 };
 use crate::gpu::Architecture;
+use kernel::prelude::*;
+
+/// Trait for allocating page tables during page table walks.
+///
+/// Implementors must allocate a zeroed 4KB page table in VRAM and
+/// ensure the allocation persists for the lifetime of the address
+/// space and the lifetime of the implementor.
+pub(crate) trait PageTableAllocator {
+    /// Allocate a zeroed page table and return its VRAM address.
+    fn alloc_page_table(&mut self, mm: &mut GpuMm) -> Result<VramAddress>;
+}
 
 /// MMU version enumeration.
 #[derive(Debug, Clone, Copy, PartialEq, Eq)]
diff --git a/drivers/gpu/nova-core/mm/pagetable/walk.rs b/drivers/gpu/nova-core/mm/pagetable/walk.rs
new file mode 100644
index 000000000000..7a2660a30d80
--- /dev/null
+++ b/drivers/gpu/nova-core/mm/pagetable/walk.rs
@@ -0,0 +1,285 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Page table walker implementation for NVIDIA GPUs.
+//!
+//! This module provides page table walking functionality for MMU v2 (Turing/Ampere/Ada).
+//! The walker traverses the 5-level page table hierarchy (PDB -> L1 -> L2 -> L3 -> L4)
+//! to resolve virtual addresses to physical addresses or to find PTE locations.
+//!
+//! # Page Table Hierarchy
+//!
+//! ```text
+//!     +-------+     +-------+     +-------+     +---------+     +-------+
+//!     | PDB   |---->|  L1   |---->|  L2   |---->| L3 Dual |---->|  L4   |
+//!     | (L0)  |     |       |     |       |     | PDE     |     | (PTE) |
+//!     +-------+     +-------+     +-------+     +---------+     +-------+
+//!       64-bit        64-bit        64-bit        128-bit         64-bit
+//!        PDE           PDE           PDE        (big+small)        PTE
+//! ```
+//!
+//! # Result of a page table walk
+//!
+//! The walker returns a [`WalkResult`] indicating the outcome:
+//! - [`WalkResult::PageTableMissing`]: Intermediate page tables don't exist (lookup mode).
+//! - [`WalkResult::Unmapped`]: PTE exists but is invalid (page not mapped).
+//! - [`WalkResult::Mapped`]: PTE exists and is valid (page is mapped).
+//!
+//! # Example
+//!
+//! ```ignore
+//! use crate::mm::pagetable::{walk::{PtWalk, WalkResult}, MmuVersion};
+//! use crate::mm::{GpuMm, Vfn, VramAddress};
+//!
+//! fn walk_example(mm: &mut GpuMm, pdb_addr: VramAddress) -> Result<()> {
+//!     // Create a page table walker.
+//!     let walker = PtWalk::new(pdb_addr, MmuVersion::V2);
+//!
+//!     // Walk to a PTE (lookup mode).
+//!     match walker.walk_to_pte_lookup(mm, Vfn::new(0x1000))? {
+//!         WalkResult::Mapped { pte_addr, pfn } => {
+//!             // Page is mapped to the physical frame number.
+//!         }
+//!         WalkResult::Unmapped { pte_addr } => {
+//!             // PTE exists but the page is not mapped.
+//!         }
+//!         WalkResult::PageTableMissing => {
+//!             // Intermediate page tables are missing.
+//!         }
+//!     }
+//!
+//!     Ok(())
+//! }
+//! ```
+
+#![allow(dead_code)]
+
+use kernel::prelude::*;
+
+use super::{
+    DualPde,
+    MmuVersion,
+    PageTableAllocator,
+    PageTableLevel,
+    Pde,
+    Pte, //
+};
+use crate::mm::{
+    pramin,
+    GpuMm,
+    Pfn,
+    Vfn,
+    VirtualAddress,
+    VramAddress, //
+};
+
+/// Dummy allocator for lookup-only walks.
+enum NoAlloc {}
+
+impl PageTableAllocator for NoAlloc {
+    fn alloc_page_table(&mut self, _mm: &mut GpuMm) -> Result<VramAddress> {
+        unreachable!()
+    }
+}
+
+/// Result of walking to a PTE.
+#[derive(Debug, Clone, Copy)]
+pub(crate) enum WalkResult {
+    /// Intermediate page tables are missing (only returned in lookup mode).
+    PageTableMissing,
+    /// PTE exists but is invalid (page not mapped).
+    Unmapped { pte_addr: VramAddress },
+    /// PTE exists and is valid (page is mapped).
+    Mapped { pte_addr: VramAddress, pfn: Pfn },
+}
+
+/// Page table walker for NVIDIA GPUs.
+///
+/// Walks the 5-level page table hierarchy to find PTE locations or resolve
+/// virtual addresses.
+pub(crate) struct PtWalk {
+    pdb_addr: VramAddress,
+    mmu_version: MmuVersion,
+}
+
+impl PtWalk {
+    /// Create a new page table walker.
+    ///
+    /// Copies `pdb_addr` and `mmu_version` from VMM configuration.
+    pub(crate) fn new(pdb_addr: VramAddress, mmu_version: MmuVersion) -> Self {
+        Self {
+            pdb_addr,
+            mmu_version,
+        }
+    }
+
+    /// Get the MMU version this walker is configured for.
+    pub(crate) fn mmu_version(&self) -> MmuVersion {
+        self.mmu_version
+    }
+
+    /// Get the Page Directory Base address.
+    pub(crate) fn pdb_addr(&self) -> VramAddress {
+        self.pdb_addr
+    }
+
+    /// Walk to PTE for lookup only (no allocation).
+    ///
+    /// Returns `PageTableMissing` if intermediate tables don't exist.
+    pub(crate) fn walk_to_pte_lookup(&self, mm: &mut GpuMm, vfn: Vfn) -> Result<WalkResult> {
+        self.walk_to_pte_inner::<NoAlloc>(mm, None, vfn)
+    }
+
+    /// Walk to PTE with allocation of missing tables.
+    ///
+    /// Uses `PageTableAllocator::alloc_page_table()` when tables are missing.
+    pub(crate) fn walk_to_pte_allocate<A: PageTableAllocator>(
+        &self,
+        mm: &mut GpuMm,
+        allocator: &mut A,
+        vfn: Vfn,
+    ) -> Result<WalkResult> {
+        self.walk_to_pte_inner(mm, Some(allocator), vfn)
+    }
+
+    /// Internal walk implementation.
+    ///
+    /// If `allocator` is `Some`, allocates missing page tables. Otherwise returns
+    /// `PageTableMissing` when intermediate tables don't exist.
+    fn walk_to_pte_inner<A: PageTableAllocator>(
+        &self,
+        mm: &mut GpuMm,
+        mut allocator: Option<&mut A>,
+        vfn: Vfn,
+    ) -> Result<WalkResult> {
+        let va = VirtualAddress::from(vfn);
+        let mut cur_table = self.pdb_addr;
+
+        // Walk through PDE levels (PDB -> L1 -> L2 -> L3).
+        for level in PageTableLevel::pde_levels() {
+            let idx = va.level_index(level.as_index());
+
+            if level.is_dual_pde_level() {
+                // L3: 128-bit dual PDE. This is the final PDE level before PTEs and uses
+                // a special "dual" format that can point to both a Small Page Table (SPT)
+                // for 4KB pages and a Large Page Table (LPT) for 64KB pages, or encode a
+                // 2MB huge page directly via IS_PTE bit.
+                let dpde_addr = entry_addr(cur_table, level, idx);
+                let dual_pde = read_dual_pde(mm.pramin(), dpde_addr, self.mmu_version)?;
+
+                // Check if SPT (Small Page Table) pointer is present. We use the "small"
+                // path for 4KB pages (only page size currently supported). If missing and
+                // allocator is available, create a new page table; otherwise return
+                // `PageTableMissing` for lookup-only walks.
+                if !dual_pde.has_small() {
+                    if let Some(ref mut a) = allocator {
+                        let new_table = a.alloc_page_table(mm)?;
+                        let new_dual_pde =
+                            DualPde::new_small(self.mmu_version, Pfn::from(new_table));
+                        write_dual_pde(mm.pramin(), dpde_addr, &new_dual_pde)?;
+                        cur_table = new_table;
+                    } else {
+                        return Ok(WalkResult::PageTableMissing);
+                    }
+                } else {
+                    cur_table = dual_pde.small_vram_address();
+                }
+            } else {
+                // Regular 64-bit PDE (levels PDB, L1, L2). Each entry points to the next
+                // level page table.
+                let pde_addr = entry_addr(cur_table, level, idx);
+                let pde = read_pde(mm.pramin(), pde_addr, self.mmu_version)?;
+
+                // Allocate new page table if PDE is invalid and allocator provided,
+                // otherwise return PageTableMissing for lookup-only walks.
+                if !pde.is_valid() {
+                    if let Some(ref mut a) = allocator {
+                        let new_table = a.alloc_page_table(mm)?;
+                        let new_pde = Pde::new_vram(self.mmu_version, Pfn::from(new_table));
+                        write_pde(mm.pramin(), pde_addr, new_pde)?;
+                        cur_table = new_table;
+                    } else {
+                        return Ok(WalkResult::PageTableMissing);
+                    }
+                } else {
+                    cur_table = pde.table_vram_address();
+                }
+            }
+        }
+
+        // Now at L4 (PTE level).
+        let pte_idx = va.level_index(PageTableLevel::L4.as_index());
+        let pte_addr = entry_addr(cur_table, PageTableLevel::L4, pte_idx);
+
+        // Read PTE to check if mapped.
+        let pte = read_pte(mm.pramin(), pte_addr, self.mmu_version)?;
+        if pte.is_valid() {
+            Ok(WalkResult::Mapped {
+                pte_addr,
+                pfn: pte.frame_number(),
+            })
+        } else {
+            Ok(WalkResult::Unmapped { pte_addr })
+        }
+    }
+}
+
+// ====================================
+// Helper functions for accessing VRAM
+// ====================================
+
+/// Calculate the address of an entry within a page table.
+fn entry_addr(table: VramAddress, level: PageTableLevel, index: u64) -> VramAddress {
+    let entry_size = level.entry_size() as u64;
+    VramAddress::new(table.raw() as u64 + index * entry_size)
+}
+
+/// Read a PDE from VRAM.
+pub(crate) fn read_pde(
+    pramin: &mut pramin::Window,
+    addr: VramAddress,
+    mmu_version: MmuVersion,
+) -> Result<Pde> {
+    let val = pramin.try_read64(addr.raw())?;
+    Ok(Pde::new(mmu_version, val))
+}
+
+/// Write a PDE to VRAM.
+pub(crate) fn write_pde(pramin: &mut pramin::Window, addr: VramAddress, pde: Pde) -> Result {
+    pramin.try_write64(addr.raw(), pde.raw_u64())
+}
+
+/// Read a dual PDE (128-bit) from VRAM.
+pub(crate) fn read_dual_pde(
+    pramin: &mut pramin::Window,
+    addr: VramAddress,
+    mmu_version: MmuVersion,
+) -> Result<DualPde> {
+    let lo = pramin.try_read64(addr.raw())?;
+    let hi = pramin.try_read64(addr.raw() + 8)?;
+    Ok(DualPde::new(mmu_version, lo, hi))
+}
+
+/// Write a dual PDE (128-bit) to VRAM.
+pub(crate) fn write_dual_pde(
+    pramin: &mut pramin::Window,
+    addr: VramAddress,
+    dual_pde: &DualPde,
+) -> Result {
+    pramin.try_write64(addr.raw(), dual_pde.big_raw_u64())?;
+    pramin.try_write64(addr.raw() + 8, dual_pde.small_raw_u64())
+}
+
+/// Read a PTE from VRAM.
+pub(crate) fn read_pte(
+    pramin: &mut pramin::Window,
+    addr: VramAddress,
+    mmu_version: MmuVersion,
+) -> Result<Pte> {
+    let val = pramin.try_read64(addr.raw())?;
+    Ok(Pte::new(mmu_version, val))
+}
+
+/// Write a PTE to VRAM.
+pub(crate) fn write_pte(pramin: &mut pramin::Window, addr: VramAddress, pte: Pte) -> Result {
+    pramin.try_write64(addr.raw(), pte.raw_u64())
+}
-- 
2.34.1

