public inbox for rust-for-linux@vger.kernel.org
From: "Eliot Courtney" <ecourtney@nvidia.com>
To: "Alistair Popple" <apopple@nvidia.com>,
	"Eliot Courtney" <ecourtney@nvidia.com>
Cc: "Danilo Krummrich" <dakr@kernel.org>,
	"Alice Ryhl" <aliceryhl@google.com>,
	"Alexandre Courbot" <acourbot@nvidia.com>,
	"David Airlie" <airlied@gmail.com>,
	"Simona Vetter" <simona@ffwll.ch>,
	"John Hubbard" <jhubbard@nvidia.com>,
	"Joel Fernandes" <joelagnelf@nvidia.com>,
	"Timur Tabi" <ttabi@nvidia.com>, <rust-for-linux@vger.kernel.org>,
	<dri-devel@lists.freedesktop.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2 6/9] gpu: nova-core: use KVVec for SBufferIter flush
Date: Wed, 25 Mar 2026 16:43:42 +0900	[thread overview]
Message-ID: <DHBP16AEXWIJ.33T5PPE3ZRMHI@nvidia.com> (raw)
In-Reply-To: <abzMyv_WHE4CUyFf@nvdebian.thelocal>

On Fri Mar 20, 2026 at 1:32 PM JST, Alistair Popple wrote:
> On 2026-03-18 at 18:14 +1100, Eliot Courtney <ecourtney@nvidia.com> wrote...
>> Change flush_into_kvec to return KVVec instead of KVec. KVVec uses
>> vmalloc for large allocations, which is appropriate since RPC reply
>> payloads can be large (>=20 KiB).
>
> Out of curiosity do you know if there is any upper limit on payload size?

IIRC the largest one I saw in openrm was a few hundred KiB.
Theoretically, the largest complete payload you could have in a single
message is ~64 KiB since we don't support continuation records on the
receive path.

>
> And is there any concern about performance of vmalloc() vs. kmalloc() for RPC
> messages?

`KVVec` uses `KVmalloc`, which tries `Kmalloc` first and only falls back
to vmalloc if the contiguous allocation fails. Most of the time
`Kmalloc` should succeed, so this shouldn't regress performance in the
common case, and the vmalloc fallback is required for the longer RPCs.
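To illustrate the behaviour I mean, here's a minimal userspace sketch of
the try-contiguous-first, fall-back-to-virtual pattern that `KVmalloc`
follows. The `KMALLOC_LIMIT` threshold and all names here are
hypothetical stand-ins for illustration only; the real kernel decides
the fallback dynamically based on allocation failure, not a fixed size.

```rust
// Illustrative sketch (NOT the kernel API): a KVmalloc-style allocator
// tries a kmalloc-like contiguous path first and only takes the
// vmalloc-like path for requests the fast path can't serve.

// Hypothetical cutoff standing in for "contiguous allocation failed".
const KMALLOC_LIMIT: usize = 4 * 1024 * 1024;

#[derive(Debug, PartialEq)]
enum Backing {
    Contiguous, // kmalloc-style fast path
    Virtual,    // vmalloc-style fallback
}

fn kvmalloc_sketch(size: usize) -> (Vec<u8>, Backing) {
    if size <= KMALLOC_LIMIT {
        // Common case: RPC reply payloads of a few hundred KiB or less
        // are served by the fast path.
        (vec![0u8; size], Backing::Contiguous)
    } else {
        // Oversized requests take the fallback path instead of failing.
        (vec![0u8; size], Backing::Virtual)
    }
}

fn main() {
    // A typical ~20 KiB reply stays on the fast path.
    assert_eq!(kvmalloc_sketch(20 * 1024).1, Backing::Contiguous);
    // An oversized request falls back.
    assert_eq!(kvmalloc_sketch(8 * 1024 * 1024).1, Backing::Virtual);
}
```

So callers pay the fallback cost only when the fast path can't serve
them, which is exactly why switching the return type to `KVVec` is safe
for the common small-reply case.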

>
>> Update GspSequence to use KVVec accordingly.
>> 
>> Signed-off-by: Eliot Courtney <ecourtney@nvidia.com>
>> ---
>>  drivers/gpu/nova-core/gsp/sequencer.rs | 4 ++--
>>  drivers/gpu/nova-core/sbuffer.rs       | 6 +++---
>>  2 files changed, 5 insertions(+), 5 deletions(-)
>> 
>> diff --git a/drivers/gpu/nova-core/gsp/sequencer.rs b/drivers/gpu/nova-core/gsp/sequencer.rs
>> index 474e4c8021db..c8f587d2d57b 100644
>> --- a/drivers/gpu/nova-core/gsp/sequencer.rs
>> +++ b/drivers/gpu/nova-core/gsp/sequencer.rs
>> @@ -42,7 +42,7 @@ struct GspSequence {
>>      /// Current command index for error reporting.
>>      cmd_index: u32,
>>      /// Command data buffer containing the sequence of commands.
>> -    cmd_data: KVec<u8>,
>> +    cmd_data: KVVec<u8>,
>>  }
>>  
>>  impl MessageFromGsp for GspSequence {
>> @@ -54,7 +54,7 @@ fn read(
>>          msg: &Self::Message,
>>          sbuffer: &mut SBufferIter<array::IntoIter<&[u8], 2>>,
>>      ) -> Result<Self, Self::InitError> {
>> -        let cmd_data = sbuffer.flush_into_kvec(GFP_KERNEL)?;
>> +        let cmd_data = sbuffer.read_to_vec(GFP_KERNEL)?;
>>          Ok(GspSequence {
>>              cmd_index: msg.cmd_index(),
>>              cmd_data,
>> diff --git a/drivers/gpu/nova-core/sbuffer.rs b/drivers/gpu/nova-core/sbuffer.rs
>> index 3a41d224c77a..ae2facdcbdd4 100644
>> --- a/drivers/gpu/nova-core/sbuffer.rs
>> +++ b/drivers/gpu/nova-core/sbuffer.rs
>> @@ -162,11 +162,11 @@ pub(crate) fn read_exact(&mut self, mut dst: &mut [u8]) -> Result {
>>          Ok(())
>>      }
>>  
>> -    /// Read all the remaining data into a [`KVec`].
>> +    /// Read all the remaining data into a [`KVVec`].
>>      ///
>>      /// `self` will be empty after this operation.
>> -    pub(crate) fn flush_into_kvec(&mut self, flags: kernel::alloc::Flags) -> Result<KVec<u8>> {
>> -        let mut buf = KVec::<u8>::new();
>> +    pub(crate) fn read_to_vec(&mut self, flags: kernel::alloc::Flags) -> Result<KVVec<u8>> {
>> +        let mut buf = KVVec::<u8>::new();
>>  
>>          if let Some(slice) = core::mem::take(&mut self.cur_slice) {
>>              buf.extend_from_slice(slice, flags)?;
>> 
>> -- 
>> 2.53.0
>> 



Thread overview: 21+ messages
2026-03-18  7:13 [PATCH v2 0/9] gpu: nova-core: gsp: add RM control command infrastructure Eliot Courtney
2026-03-18  7:13 ` [PATCH v2 1/9] gpu: nova-core: gsp: add NV_STATUS error code bindings Eliot Courtney
2026-03-20  4:10   ` Alistair Popple
2026-03-18  7:13 ` [PATCH v2 2/9] gpu: nova-core: gsp: add NvStatus enum for RM control errors Eliot Courtney
2026-03-18  7:13 ` [PATCH v2 3/9] gpu: nova-core: gsp: expose GSP-RM internal client and subdevice handles Eliot Courtney
2026-03-18  7:14 ` [PATCH v2 4/9] gpu: nova-core: gsp: add RM control RPC structure binding Eliot Courtney
2026-03-20  4:19   ` Alistair Popple
2026-03-18  7:14 ` [PATCH v2 5/9] gpu: nova-core: gsp: add types for RM control RPCs Eliot Courtney
2026-03-20  4:26   ` Alistair Popple
2026-03-18  7:14 ` [PATCH v2 6/9] gpu: nova-core: use KVVec for SBufferIter flush Eliot Courtney
2026-03-20  4:32   ` Alistair Popple
2026-03-25  7:43     ` Eliot Courtney [this message]
2026-03-18  7:14 ` [PATCH v2 7/9] gpu: nova-core: gsp: add RM control command infrastructure Eliot Courtney
2026-03-18 12:35   ` Danilo Krummrich
2026-03-19  1:06     ` Eliot Courtney
2026-03-20 14:42       ` Alexandre Courbot
2026-03-25  3:28         ` Eliot Courtney
2026-03-18  7:14 ` [PATCH v2 8/9] gpu: nova-core: gsp: add CE fault method buffer size bindings Eliot Courtney
2026-03-18  7:14 ` [PATCH v2 9/9] gpu: nova-core: gsp: add CeGetFaultMethodBufferSize RM control command Eliot Courtney
2026-03-20 13:27   ` Danilo Krummrich
2026-03-25 12:13     ` Eliot Courtney
