rust-for-linux.vger.kernel.org archive mirror
From: Jakub Kicinski <kuba@kernel.org>
To: Alice Ryhl <alice@ryhl.io>
Cc: FUJITA Tomonori <fujita.tomonori@gmail.com>,
	andrew@lunn.ch, netdev@vger.kernel.org,
	rust-for-linux@vger.kernel.org, aliceryhl@google.com,
	miguel.ojeda.sandonis@gmail.com
Subject: Re: [PATCH 0/5] Rust abstractions for network device drivers
Date: Fri, 16 Jun 2023 12:10:41 -0700
Message-ID: <20230616121041.4010f51b@kernel.org>
In-Reply-To: <66dcc87e-e03f-1043-c91d-25d6fa7130a1@ryhl.io>

On Fri, 16 Jun 2023 21:00:36 +0200 Alice Ryhl wrote:
> A Rust method can be defined to take the struct "by value", which 
> consumes the struct and prevents you from using it again. This lets 
> you provide several different cleanup methods, each of which tears 
> the value down in a different way.
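> 
> For example, a minimal sketch (hypothetical type and method names):
> 
>     struct Registration { /* held resources */ }
> 
>     impl Registration {
>         // Taking `self` by value consumes the struct: after
>         // calling one of these, the caller can no longer use it.
>         fn unregister(self) { /* teardown path A */ }
>         fn abort(self) { /* teardown path B */ }
>     }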
> 
> However, you cannot force the user to use one of those methods. They 
> always have the option of letting the value go out of scope, which calls 
> the destructor. And they can do this at any time.
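> 
> For instance, with the hypothetical type above:
> 
>     {
>         let r = Registration { /* ... */ };
>         // `r` is never passed to unregister() or abort(); at the
>         // closing brace its destructor runs anyway.
>     }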
> 
> That said, the destructor of the value does not necessarily *have* to 
> translate to immediately freeing the value. If the value is refcounted, 
> the destructor could just drop the refcount. It would also be possible 
> for a destructor to schedule the cleanup operation to a workqueue. Or 
> you could do something more clever.
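> 
> A rough illustration in userspace Rust, with std's Arc standing in 
> for the kernel's refcounted types:
> 
>     use std::sync::Arc;
> 
>     struct Shared { /* ... */ }
> 
>     fn example(a: Arc<Shared>) {
>         let b = Arc::clone(&a); // refcount 2
>         drop(a);                // refcount 1, nothing freed yet
>         drop(b);                // refcount 0, Shared freed here
>     }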

Can we put a WARN_ON() in the destructor and expect the object to
never be implicitly freed?  skbs represent packets (most of the time),
and for tracking which part of the stack is dropping packets we try
to provide a drop reason along with the freed skb. It'd be great if,
for Rust, we could direct everyone towards the APIs with an explicit
reason code from the get-go.
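
Something like this hand-wavy sketch, say (made-up names, not the
real bindings):

    enum DropReason { NotSpecified, NoSocket /* , ... */ }

    struct Skb { consumed: bool /* , packet data */ }

    impl Skb {
        // The API we'd steer everyone towards: freeing takes an
        // explicit reason, like kfree_skb_reason() on the C side.
        fn consume(mut self, _reason: DropReason) {
            self.consumed = true;
            // free the packet here; Drop below then sees
            // `consumed == true` and stays quiet.
        }
    }

    impl Drop for Skb {
        fn drop(&mut self) {
            if !self.consumed {
                // implicitly freed -- this is where the WARN_ON()
                // equivalent would fire.
            }
        }
    }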

Thread overview: 63+ messages
2023-06-13  4:53 [PATCH 0/5] Rust abstractions for network device drivers FUJITA Tomonori
2023-06-13  4:53 ` [PATCH 1/5] rust: core " FUJITA Tomonori
2023-06-15 13:01   ` Benno Lossin
2023-06-21 13:13     ` FUJITA Tomonori
2023-06-25  9:52       ` Benno Lossin
2023-06-25 14:27         ` FUJITA Tomonori
2023-06-25 17:06           ` Benno Lossin
2023-06-21 22:44   ` Boqun Feng
2023-06-22  0:19     ` FUJITA Tomonori
2023-06-13  4:53 ` [PATCH 2/5] rust: add support for ethernet operations FUJITA Tomonori
2023-06-13  7:19   ` Ariel Miculas
2023-06-15 13:03   ` Benno Lossin
2023-06-15 13:44     ` Andrew Lunn
2023-06-13  4:53 ` [PATCH 3/5] rust: add support for get_stats64 in struct net_device_ops FUJITA Tomonori
2023-06-13  4:53 ` [PATCH 4/5] rust: add methods for configure net_device FUJITA Tomonori
2023-06-15 13:06   ` Benno Lossin
2023-06-13  4:53 ` [PATCH 5/5] samples: rust: add dummy network driver FUJITA Tomonori
2023-06-15 13:08   ` Benno Lossin
2023-06-22  0:23     ` FUJITA Tomonori
2023-06-15  6:01 ` [PATCH 0/5] Rust abstractions for network device drivers Jakub Kicinski
2023-06-15  8:58   ` Miguel Ojeda
2023-06-16  2:19     ` Jakub Kicinski
2023-06-16 12:18       ` FUJITA Tomonori
2023-06-16 13:23         ` Miguel Ojeda
2023-06-16 13:41           ` FUJITA Tomonori
2023-06-16 18:26           ` Jakub Kicinski
2023-06-16 20:05             ` Miguel Ojeda
2023-06-16 13:04       ` Andrew Lunn
2023-06-16 18:31         ` Jakub Kicinski
2023-06-16 13:18       ` Miguel Ojeda
2023-06-15 12:51   ` Andrew Lunn
2023-06-16  2:02     ` Jakub Kicinski
2023-06-16  3:47       ` Richard Cochran
2023-06-16 17:59         ` Andrew Lunn
2023-06-16 13:02       ` FUJITA Tomonori
2023-06-16 13:14         ` Andrew Lunn
2023-06-16 13:48           ` Miguel Ojeda
2023-06-16 14:43             ` Andrew Lunn
2023-06-16 16:01               ` Miguel Ojeda
2023-06-19 11:27               ` Emilio Cobos Álvarez
2023-06-20 18:09                 ` Miguel Ojeda
2023-06-20 19:12                   ` Andreas Hindborg (Samsung)
2023-06-21 12:30             ` Andreas Hindborg (Samsung)
2023-06-16 18:40         ` Jakub Kicinski
2023-06-16 19:00           ` Alice Ryhl
2023-06-16 19:10             ` Jakub Kicinski [this message]
2023-06-16 19:23               ` Alice Ryhl
2023-06-16 20:04                 ` Andrew Lunn
2023-06-17 10:08                   ` Alice Ryhl
2023-06-17 10:15                     ` Greg KH
2023-06-19  8:50                     ` FUJITA Tomonori
2023-06-19  9:46                       ` Greg KH
2023-06-19 11:05                         ` FUJITA Tomonori
2023-06-19 11:14                           ` Greg KH
2023-06-19 13:20                           ` Andrew Lunn
2023-06-20 11:16                             ` David Laight
2023-06-20 15:47                     ` Jakub Kicinski
2023-06-20 16:56                       ` Alice Ryhl
2023-06-20 17:44                         ` Miguel Ojeda
2023-06-20 17:55                           ` Miguel Ojeda
2023-06-16 12:28   ` Alice Ryhl
2023-06-16 13:20     ` Andrew Lunn
2023-06-16 13:24       ` Alice Ryhl

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20230616121041.4010f51b@kernel.org \
    --to=kuba@kernel.org \
    --cc=alice@ryhl.io \
    --cc=aliceryhl@google.com \
    --cc=andrew@lunn.ch \
    --cc=fujita.tomonori@gmail.com \
    --cc=miguel.ojeda.sandonis@gmail.com \
    --cc=netdev@vger.kernel.org \
    --cc=rust-for-linux@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
