From: Timur Tabi <ttabi@nvidia.com>
To: "nouveau@lists.freedesktop.org" <nouveau@lists.freedesktop.org>,
	Alexandre Courbot <acourbot@nvidia.com>,
	"dakr@kernel.org" <dakr@kernel.org>,
	"lyude@redhat.com" <lyude@redhat.com>,
	Joel Fernandes <joelagnelf@nvidia.com>,
	John Hubbard <jhubbard@nvidia.com>,
	"rust-for-linux@vger.kernel.org" <rust-for-linux@vger.kernel.org>
Cc: "nouveau-bounces@lists.freedesktop.org"
	<nouveau-bounces@lists.freedesktop.org>
Subject: Re: [PATCH 10/11] gpu: nova-core: LibosMemoryRegionInitArgument size must be page aligned
Date: Thu, 4 Dec 2025 21:45:20 +0000	[thread overview]
Message-ID: <ae07b871896b79e3d6180874dfb07d4d3aed4bdc.camel@nvidia.com> (raw)
In-Reply-To: <3c6219f90ca9e1d548c98ab1c54857a63a5d94cb.camel@nvidia.com>

On Thu, 2025-12-04 at 21:18 +0000, Timur Tabi wrote:
> If you can tell me how to fix the CoherentAllocation allocation call so that it allocates 4K, that
> would be a better fix for this problem.  It's only necessary for Turing/GA100, however.

Actually, I think I figured it out.  

/// Arguments for GSP startup.
#[repr(C)]
pub(crate) struct GspArgumentsCached(
    bindings::GSP_ARGUMENTS_CACHED,
    [u8; GSP_PAGE_SIZE - core::mem::size_of::<bindings::GSP_ARGUMENTS_CACHED>()], // Padding to make it 4K.
);

impl GspArgumentsCached {
    /// Creates the arguments for starting the GSP up using `cmdq` as its command queue.
    pub(crate) fn new(cmdq: &Cmdq) -> Self {
        Self(
            bindings::GSP_ARGUMENTS_CACHED {
                messageQueueInitArguments: MessageQueueInitArguments::new(cmdq).0,
                bDmemStack: 1,
                ..Default::default()
            },
            [0u8; GSP_PAGE_SIZE - core::mem::size_of::<bindings::GSP_ARGUMENTS_CACHED>()],
        )
    }
}

If you think this is okay, I'll put it in v3.  I had to remove the #[repr(transparent)].  Is that
okay?  The code compiles and seems to work.
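
For reference, #[repr(transparent)] only permits a single non-zero-sized field, so it has to go once
the padding array is added; plain #[repr(C)] with the padding as the last field keeps the C layout of
the bindings struct. If it helps, a compile-time guard could catch the bindings struct ever growing
past one page (which would make the padding length underflow) and confirm the padded struct is exactly
one page. A minimal sketch, not part of the patch, assuming the same GSP_PAGE_SIZE and bindings as
above (the kernel's static_assert! macro would work too):

// Sketch only: build fails if the bindings struct outgrows one GSP page,
// or if the padded wrapper is not exactly one page.
const _: () = assert!(
    core::mem::size_of::<bindings::GSP_ARGUMENTS_CACHED>() <= GSP_PAGE_SIZE,
    "GSP_ARGUMENTS_CACHED must fit in one GSP page"
);
const _: () = assert!(
    core::mem::size_of::<GspArgumentsCached>() == GSP_PAGE_SIZE,
    "GspArgumentsCached must be exactly one GSP page"
);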

Thread overview: 100+ messages
2025-11-14 23:30 [PATCH 00/11] gpu: nova-core: add Turing support Timur Tabi
2025-11-14 23:30 ` [PATCH 01/11] gpu: nova-core: rename Imem to ImemSec Timur Tabi
2025-11-17 22:50   ` Lyude Paul
2025-11-14 23:30 ` [PATCH 02/11] gpu: nova-core: add ImemNs section infrastructure Timur Tabi
2025-11-17 23:19   ` Lyude Paul
2025-11-19  1:54   ` Alexandre Courbot
2025-11-19  6:30     ` John Hubbard
2025-11-19  6:55       ` Alexandre Courbot
2025-11-19 19:54         ` Timur Tabi
2025-11-19 20:34           ` Joel Fernandes
2025-11-19 20:45             ` Timur Tabi
2025-11-19 20:54               ` John Hubbard
2025-11-19 20:56                 ` Timur Tabi
2025-11-20  1:45           ` Alexandre Courbot
2025-11-24 22:24             ` Timur Tabi
2025-11-14 23:30 ` [PATCH 03/11] gpu: nova-core: support header parsing on Turing/GA100 Timur Tabi
2025-11-17 22:33   ` Joel Fernandes
2025-11-18  0:52     ` Timur Tabi
2025-11-18  1:04       ` Joel Fernandes
2025-11-18  1:06         ` Timur Tabi
2025-11-18  1:15           ` John Hubbard
2025-11-18  1:29             ` John Hubbard
2025-11-18  1:12         ` John Hubbard
2025-11-18 19:42           ` Joel Fernandes
2025-11-19  2:51   ` Alexandre Courbot
2025-11-19  5:16     ` Timur Tabi
2025-11-19  7:03       ` Alexandre Courbot
2025-11-24 23:24         ` Timur Tabi
2025-11-24 23:54           ` Alexandre Courbot
2025-11-19  7:04       ` John Hubbard
2025-11-19 20:10         ` Joel Fernandes
2025-11-24 23:47           ` Timur Tabi
2025-11-24 23:55             ` John Hubbard
2025-11-25  0:57               ` Alexandre Courbot
2025-11-25  1:02                 ` Timur Tabi
2025-11-25  0:05             ` Joel Fernandes
2025-11-14 23:30 ` [PATCH 04/11] gpu: nova-core: add support for Turing/GA100 fwsignature Timur Tabi
2025-11-17 23:20   ` Lyude Paul
2025-11-19  2:59   ` Alexandre Courbot
2025-11-19  5:17     ` Timur Tabi
2025-11-19  7:11     ` Alexandre Courbot
2025-11-19  7:17       ` John Hubbard
2025-11-19  7:34         ` Alexandre Courbot
2025-11-14 23:30 ` [PATCH 05/11] gpu: nova-core: add NV_PFALCON_FALCON_DMATRFCMD::with_falcon_mem() Timur Tabi
2025-11-19  3:04   ` Alexandre Courbot
2025-11-19  6:32     ` John Hubbard
2025-11-14 23:30 ` [PATCH 06/11] gpu: nova-core: add Turing boot registers Timur Tabi
2025-11-17 22:41   ` Joel Fernandes
2025-11-19  2:17   ` Alexandre Courbot
2025-11-19  6:34     ` John Hubbard
2025-11-19  6:47       ` Alexandre Courbot
2025-11-19  6:51         ` John Hubbard
2025-11-19  7:15           ` Alexandre Courbot
2025-11-19  7:24             ` John Hubbard
2025-11-19 19:10               ` Timur Tabi
2025-11-20  1:41                 ` Alexandre Courbot
2025-11-14 23:30 ` [PATCH 07/11] gpu: nova-core: move some functions into the HAL Timur Tabi
2025-11-14 23:30 ` [PATCH 08/11] gpu: nova-core: Add basic Turing HAL Timur Tabi
2025-11-18  0:50   ` Joel Fernandes
2025-11-19  3:11   ` Alexandre Courbot
2025-11-14 23:30 ` [PATCH 09/11] gpu: nova-core: add FalconUCodeDescV2 support Timur Tabi
2025-11-17 23:10   ` Joel Fernandes
2025-11-18 13:04     ` Alexandre Courbot
2025-11-18 15:08       ` Timur Tabi
2025-11-18 19:46         ` Joel Fernandes
2025-11-19  1:36         ` Alexandre Courbot
2025-11-18 19:45       ` Joel Fernandes
2025-11-19  6:40         ` John Hubbard
2025-11-25 23:59     ` Timur Tabi
2025-11-26  0:31       ` John Hubbard
2025-11-26  1:05         ` Alexandre Courbot
2025-11-26  1:09           ` John Hubbard
2025-11-26  9:57           ` Miguel Ojeda
2025-12-01 21:11     ` Timur Tabi
2025-11-19  3:27   ` Alexandre Courbot
2025-11-14 23:30 ` [PATCH 10/11] gpu: nova-core: LibosMemoryRegionInitArgument size must be page aligned Timur Tabi
2025-11-19  3:36   ` Alexandre Courbot
2025-12-01 23:25     ` Timur Tabi
2025-12-03 11:54       ` Alexandre Courbot
2025-12-03 12:03         ` Alice Ryhl
2025-12-03 13:39           ` Alexandre Courbot
2025-12-03 18:31         ` Timur Tabi
2025-12-04 14:43           ` Alexandre Courbot
2025-12-04 21:18             ` Timur Tabi
2025-12-04 21:45               ` Timur Tabi [this message]
2025-12-05  0:35                 ` Alexandre Courbot
2025-12-05 20:22                   ` Timur Tabi
2025-12-09  2:53                     ` Alexandre Courbot
2025-12-05 23:22                   ` Timur Tabi
2025-12-09  2:55                     ` Alexandre Courbot
2025-12-03 18:34         ` Miguel Ojeda
2025-12-03 19:17           ` Timur Tabi
2025-11-14 23:30 ` [PATCH 11/11] gpu: nova-core: add PIO support for loading firmware images Timur Tabi
2025-11-17 23:34   ` Joel Fernandes
2025-11-18 13:08     ` Alexandre Courbot
2025-12-01 23:26     ` Timur Tabi
2025-11-19  4:28   ` Alexandre Courbot
2025-11-19 13:49     ` Alexandre Courbot
2025-11-19  7:01   ` Alexandre Courbot
2025-11-19  4:29 ` [PATCH 00/11] gpu: nova-core: add Turing support Alexandre Courbot
