From: Alice Ryhl <aliceryhl@google.com>
To: Mateusz Guzik <mjguzik@gmail.com>
Cc: rust-for-linux@vger.kernel.org
Subject: Re: looking for an informed write up on how to handle serious multicore issues in rust
Date: Tue, 22 Apr 2025 19:37:34 +0000	[thread overview]
Message-ID: <aAfv_pgxUHJSEaBy@google.com> (raw)
In-Reply-To: <CAGudoHEN2NBYjFs28g2Oxe7LsP1A=guPUEbgAJTUiae6bBG6JA@mail.gmail.com>

On Tue, Apr 22, 2025 at 04:44:20PM +0200, Mateusz Guzik wrote:
> If one was to write C code to handle this *with* safety belts, it
> would be a lot of hand-rolled asserts and even then there is some
> possibility of a screw up.
> 
> Note in this case the ->name_safe_to_read array might *change* as it
> is being used to compare in name_equal(). False positive/negative
> mismatch is handled with the sequence counter.
> 
> So I would like a code sample in Rust which handles the above in a
> safe and performance-friendly manner. Of course preferably code trying
> to misuse the struct would fail to compile, but a debug-only runtime
> check would do just fine.
> 
> Most notably:
> - the code issuing name check + key read *must* validate the sequence
> counter before returning
> - need_the_lock_to_access_in_any_way must not be accessible without
> the lock (duh)
> - no writes to the read-mostly area without the lock held, except for
> the atomic var
> 
> So how do you do this in Rust? I did not cover enough, I can write
> self-contained fully-fledged C example.
> 
> -- 
> Mateusz Guzik <mjguzik gmail.com>

There isn't a single canonical implementation of seqlocks in Rust that I
can point you to, and we don't have one in the kernel. But we can go
through how we can design one. I'll start with a pretty bare bones API
and expand it.

To start with, I'm going to assume that we have a Seqnum implementation.

pub struct Seqnum { ... }

impl Seqnum {
    pub fn begin(&self) -> usize;
    pub fn retry(&self, num: usize) -> bool;
    pub fn write_begin(&self);
    pub fn write_end(&self);
}
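[Editor's note: to make these signatures concrete, here is one naive, userspace-only sketch of such a counter. This is an illustration only, not the kernel's seqcount_t, which handles memory barriers, lockdep, and preemption properly; the orderings below are simplified.]

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Naive userspace sketch of a sequence counter. Writers bump the
/// counter to odd on entry and back to even on exit; readers must
/// retry if the count was odd or changed during their read section.
pub struct Seqnum {
    seq: AtomicUsize,
}

impl Seqnum {
    pub const fn new() -> Self {
        Seqnum { seq: AtomicUsize::new(0) }
    }

    /// Start a read section, returning the current count.
    pub fn begin(&self) -> usize {
        self.seq.load(Ordering::Acquire)
    }

    /// True if the read section overlapped a write and must retry.
    pub fn retry(&self, num: usize) -> bool {
        num % 2 == 1 || self.seq.load(Ordering::Acquire) != num
    }

    pub fn write_begin(&self) {
        self.seq.fetch_add(1, Ordering::Release); // count becomes odd
    }

    pub fn write_end(&self) {
        self.seq.fetch_add(1, Ordering::Release); // count becomes even
    }
}

fn main() {
    let s = Seqnum::new();
    let n = s.begin();
    assert!(!s.retry(n)); // no writer ran: the read is valid
    s.write_begin();
    assert!(s.retry(n)); // a write started: the old read must retry
    s.write_end();
}
```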

I will not get into how you implement these. You can use the same
implementation as C. I understand that the kernel implements them with
inline assembly, which we could replicate. Or we could do a number of
other things. Regardless, this is enough to get us a working but
bare-bones implementation:

#[repr(C)]
struct Meh {
    seq: Seqnum,
    safe_to_read: AtomicUsize,
    name_safe_to_read: [AtomicU8; NAME_SIZE],
    modifiable_without_the_lock: AtomicI32,
    // some other fields ...
    _cache: Cachealign, /* type with size 0 and alignment 64 */
    lock: SpinLock<MehInner>,
}

struct MehInner {
    needs_the_lock: usize,
}

fn name_equal<const LEN: usize>(name: &[AtomicU8; LEN], cmp: &CStr) -> bool {
    // Left as an exercise for the reader.
    //
    // Probably you just call the C implementation of strncmp, and this
    // function is probably defined in `kernel`.
}
fn name_copy<const LEN: usize>(dst: &[AtomicU8; LEN], src: &CStr) -> bool {
    // Ditto.
}

impl Meh {
    fn name_to_key(&self, name: &CStr) -> Option<usize> {
        let seq = self.seq.begin();
        if name_equal(&self.name_safe_to_read, name) {
            let key = self.safe_to_read.load(Relaxed);
            if !self.seq.retry(seq) {
                return Some(key);
            }
        }

        let _guard = self.lock.lock();
        if name_equal(&self.name_safe_to_read, name) {
            Some(self.safe_to_read.load(Relaxed))
        } else {
            None
        }
    }

    fn store_name_key(&self, name: &CStr, key: usize) {
        assert!(name.len_with_nul() < NAME_SIZE);
        let _guard = self.lock.lock();
        self.seq.write_begin();
        name_copy(&self.name_safe_to_read, name);
        self.safe_to_read.store(key, Relaxed);
        self.seq.write_end();
    }
}

Your first reaction is probably "ATOMICS!?". But hold your horses for a
moment.

The reason we can't use the ordinary `usize` type is because we need to
*disallow* you from performing ordinary unsynchronized reads; similar to
how you used READ_ONCE instead of just reading the field normally. I
used atomics in this example to achieve that, but it is true that
atomics are stronger and more expensive than what we need. We might
replace AtomicUsize with a new type called SharedUsize (naming TBD!)
that has `load` and `store` methods which are weaker than the relaxed
atomic ops. For example, even relaxed loads on AtomicUsize have
additional costs associated with them because they're guaranteed to
avoid tearing.

But I digress. The point is that we need a special version of usize like
AtomicUsize that allows us to annotate that these loads and stores are
not normal loads and stores because other threads may also touch the
values in parallel. It has to be a separate type from usize to prevent
you from performing ordinary unsynchronized reads.
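[Editor's note: as a placeholder until such weaker operations exist, a SharedUsize sketch could simply delegate to relaxed atomics. The name is hypothetical, as noted above; the point is the type, which makes plain unsynchronized `usize` reads impossible so that every access is annotated.]

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Placeholder sketch: delegates to relaxed atomics for now. A real
/// version might use weaker "shared" loads/stores once available.
pub struct SharedUsize(AtomicUsize);

impl SharedUsize {
    pub const fn new(v: usize) -> Self {
        SharedUsize(AtomicUsize::new(v))
    }

    /// Annotated load: other threads may write this value in parallel.
    pub fn load(&self) -> usize {
        self.0.load(Ordering::Relaxed)
    }

    /// Annotated store: other threads may read this value in parallel.
    pub fn store(&self, v: usize) {
        self.0.store(v, Ordering::Relaxed);
    }
}

fn main() {
    let x = SharedUsize::new(1);
    x.store(2);
    assert_eq!(x.load(), 2);
}
```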


Let's get back to the example. The bigger problem is that there is no
enforcement of correct use at all. After all, we're just using atomics,
and any combination of reads and writes from different threads is safe
no matter what, so the compiler doesn't care.

To start with improving the API, we can wrap Seqnum in a Seqlock type.
It could look like this:

pub struct SeqLock<T> {
    seqnum: Seqnum,
    value: T,
}

impl<T> SeqLock<T> {
    pub fn try_read<U>(&self, f: impl FnOnce(&T) -> U) -> Option<U> {
        let seq = self.seqnum.begin();
        let ret = f(&self.value);
        if self.seqnum.retry(seq) {
            None
        } else {
            Some(ret)
        }
    }

    pub fn assume_no_writers(&self) -> &T {
        &self.value
    }

    pub fn write(&self) -> WriteGuard<'_, T> {
        self.seqnum.write_begin();
        WriteGuard(self)
    }
}

struct WriteGuard<'a, T>(&'a SeqLock<T>);
impl<T> Drop for WriteGuard<'_, T> {
    fn drop(&mut self) {
        self.0.seqnum.write_end();
    }
}
impl<T> Deref for WriteGuard<'_, T> {
    type Target = T;
    fn deref(&self) -> &T { &self.0.value }
}
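[Editor's note: it may help to see that this shape actually compiles. Below is a self-contained userspace approximation, with a simplified stand-in Seqnum whose orderings are not a faithful seqlock memory model; writer mutual exclusion is assumed to be provided externally, as in the text.]

```rust
use std::ops::Deref;
use std::sync::atomic::{AtomicUsize, Ordering};

// Simplified stand-in for Seqnum; illustration only.
pub struct Seqnum { seq: AtomicUsize }

impl Seqnum {
    pub const fn new() -> Self { Seqnum { seq: AtomicUsize::new(0) } }
    pub fn begin(&self) -> usize { self.seq.load(Ordering::Acquire) }
    pub fn retry(&self, num: usize) -> bool {
        // Retry if a writer was active (odd) or the count changed.
        num % 2 == 1 || self.seq.load(Ordering::Acquire) != num
    }
    pub fn write_begin(&self) { self.seq.fetch_add(1, Ordering::Release); }
    pub fn write_end(&self) { self.seq.fetch_add(1, Ordering::Release); }
}

pub struct SeqLock<T> {
    seqnum: Seqnum,
    value: T,
}

impl<T> SeqLock<T> {
    pub const fn new(value: T) -> Self {
        SeqLock { seqnum: Seqnum::new(), value }
    }

    pub fn try_read<U>(&self, f: impl FnOnce(&T) -> U) -> Option<U> {
        let seq = self.seqnum.begin();
        let ret = f(&self.value);
        if self.seqnum.retry(seq) { None } else { Some(ret) }
    }

    pub fn assume_no_writers(&self) -> &T {
        &self.value
    }

    pub fn write(&self) -> WriteGuard<'_, T> {
        self.seqnum.write_begin();
        WriteGuard(self)
    }
}

pub struct WriteGuard<'a, T>(&'a SeqLock<T>);
impl<T> Drop for WriteGuard<'_, T> {
    fn drop(&mut self) { self.0.seqnum.write_end(); }
}
impl<T> Deref for WriteGuard<'_, T> {
    type Target = T;
    fn deref(&self) -> &T { &self.0.value }
}

fn main() {
    let lock = SeqLock::new(AtomicUsize::new(7));
    // A read that doesn't overlap a writer succeeds.
    assert_eq!(lock.try_read(|v| v.load(Ordering::Relaxed)), Some(7));
    // While a write guard is live, try_read reports a retry.
    let guard = lock.write();
    assert_eq!(lock.try_read(|v| v.load(Ordering::Relaxed)), None);
    drop(guard);
}
```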

This API is still not fool-proof - we're missing the part where it knows
about the spinlock, but at the very least you have to explicitly
annotate each access with a call to `try_read`, `assume_no_writers`, or
`write`. This is how you might use it:

#[repr(C)]
struct Meh {
    seq: SeqLock<MehSeq>,
    modifiable_without_the_lock: AtomicI32,
    // some other fields ...
    _cache: Cachealign, /* type with size 0 and alignment 64 */
    lock: SpinLock<MehInner>,
}

struct MehSeq {
    safe_to_read: SharedUsize,
    name_safe_to_read: [SharedU8; NAME_SIZE],
}

struct MehInner {
    needs_the_lock: usize,
}

impl Meh {
    fn name_to_key(&self, name: &CStr) -> Option<usize> {
        let ret = self.seq.try_read(|inner| {
            if name_equal(&inner.name_safe_to_read, name) {
                Some(inner.safe_to_read.load())
            } else {
                None
            }
        });

        if let Some(ret) = ret {
            return ret;
        }

        let _guard = self.lock.lock();
        let inner = self.seq.assume_no_writers();
        if name_equal(&inner.name_safe_to_read, name) {
            Some(inner.safe_to_read.load())
        } else {
            None
        }
    }

    fn store_name_key(&self, name: &CStr, key: usize) {
        assert!(name.len_with_nul() < NAME_SIZE);
        let _guard = self.lock.lock();
        let inner = self.seq.write();
        name_copy(&inner.name_safe_to_read, name);
        inner.safe_to_read.store(key);

        // destructor of inner ends the seqlock write region
    }
}

Notice how the fields protected by the seqlock are moved into a new
sub-struct similar to MehInner.

The `try_read` method is also interesting. It throws away your return
value if you need to retry to discourage you from using invalid values.

Anyway. There's still another big issue with the design: when you use
try_read or assume_no_writers, it still lets you write to the data.
After all, we're using atomics which we can write to at any time.

There are many ways to improve the API to prevent such writes. I'll
share one here: If you have write access, then that means that you have
a WriteGuard object. So we can have you pass a reference to it:

impl SharedUsize {
    pub fn load(&self) -> usize;
    pub fn store<T>(&self, value: usize, proof: &WriteGuard<'_, T>);
}

This way, readers can't call `store` because they don't have a
WriteGuard. The writer becomes:

    fn store_name_key(&self, name: &CStr, key: usize) {
        assert!(name.len_with_nul() < NAME_SIZE);
        let _guard = self.lock.lock();
        let inner = self.seq.write();
        name_copy(&inner.name_safe_to_read, name, &inner); /* updated to take the proof too */
        inner.safe_to_read.store(key, &inner);

        // destructor of inner ends the seqlock write region
    }
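[Editor's note: the proof-token pattern compiles outside the kernel too. A minimal self-contained sketch, with std's MutexGuard standing in for the WriteGuard proof and SharedUsize delegating to relaxed atomics:]

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Mutex, MutexGuard};

/// Stand-in for SharedUsize whose `store` demands evidence that the
/// caller holds a write guard.
pub struct SharedUsize(AtomicUsize);

impl SharedUsize {
    pub const fn new(v: usize) -> Self {
        SharedUsize(AtomicUsize::new(v))
    }

    pub fn load(&self) -> usize {
        self.0.load(Ordering::Relaxed)
    }

    /// Without a guard to pass here, this call does not compile.
    pub fn store<T>(&self, v: usize, _proof: &MutexGuard<'_, T>) {
        self.0.store(v, Ordering::Relaxed);
    }
}

fn main() {
    let lock = Mutex::new(());
    let x = SharedUsize::new(0);
    // x.store(1); // rejected by the compiler: no proof argument
    let guard = lock.lock().unwrap();
    x.store(1, &guard); // ok: the guard proves we hold the lock
    assert_eq!(x.load(), 1);
    drop(guard);
}
```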

This isn't the only way to solve this problem - there are others. Not
all of them are fool-proof - including this one. For example, I could
get a WriteGuard from one SeqLock and use it with a SharedUsize stored
in *another* SeqLock to bypass it. Other solutions *are* fool-proof and
don't have this bypass. Some of the other Rust folks are probably happy
to tell you about field projections which is an upcoming language
feature that lets you do this (and other things such as rcu) in a
fool-proof way with much more convenient syntax.

Anyway, since the reads and writes are just atomics, Rust the language
doesn't care too much if there's a bypass. It's mostly something that we
might care about as API designers.



Ok, let's return to another problem in the design, which is that it's
not tied together with the spinlock. This means that you could call
write() or assume_no_writers() without holding it.

The most direct way of solving this is to make a single abstraction that
wraps the entire thing.

#[repr(C)]
pub struct SeqLock2<T, U, V> {
    seq: Seqnum,
    seq_data: T,
    extra_data: U,
    _cache: Cachealign,
    lock: SpinLock<V>,
}

impl<T, U, V> SeqLock2<T, U, V> {
    pub fn try_read<R>(&self, f: impl FnOnce(&T) -> R) -> Option<R> {
        let seq = self.seq.begin();
        let ret = f(&self.seq_data);
        if self.seq.retry(seq) {
            None
        } else {
            Some(ret)
        }
    }

    pub fn extra_data(&self) -> &U {
        &self.extra_data
    }

    pub fn read_with_lock(&self) -> ReadWithLockGuard<'_, T, U, V> {
        ReadWithLockGuard {
            seq: self,
            guard: self.lock.lock(),
        }
    }

    pub fn read_retry<R>(&self, mut f: impl FnMut(&T) -> R) -> R {
        if let Some(ret) = self.try_read(&mut f) {
            return ret;
        }

        f(self.read_with_lock().seq_data())
    }

    pub fn write(&self) -> WriteGuard<'_, T, U, V> {
        let guard = self.lock.lock();
        self.seq.write_begin();
        WriteGuard {
            seq: self,
            guard,
        }
    }
}

pub struct ReadWithLockGuard<'a, T, U, V> {
    seq: &'a SeqLock2<T, U, V>,
    guard: SpinLockGuard<'a, V>,
}
impl<'a, T, U, V> ReadWithLockGuard<'a, T, U, V> {
    pub fn seq_data(&self) -> &T {
        &self.seq.seq_data
    }
    pub fn lock_data(&self) -> &V {
        /* we hold the lock, so we also get to use the lock data */
        &self.guard
    }
}
// Ditto for WriteGuard.

By using a combined type like the above, we enforce that the spinlock
and the seqlock are used together and make it pretty hard to get that
wrong. The example becomes:

#[repr(C)]
struct Meh {
    seq: SeqLock2<MehSeq, AtomicI32, MehInner>,
}

struct MehSeq {
    safe_to_read: SharedUsize,
    name_safe_to_read: [SharedU8; NAME_SIZE],
}

struct MehInner {
    needs_the_lock: usize,
}

impl Meh {
    fn name_to_key(&self, name: &CStr) -> Option<usize> {
        self.seq.read_retry(|inner| {
            if name_equal(&inner.name_safe_to_read, name) {
                Some(inner.safe_to_read.load())
            } else {
                None
            }
        })
    }

    fn store_name_key(&self, name: &CStr, key: usize) {
        assert!(name.len_with_nul() < NAME_SIZE);

        let guard = self.seq.write();
        let inner = guard.seq_data();
        name_copy(&inner.name_safe_to_read, name);
        inner.safe_to_read.store(key);
    }
}

This API is rather nice because it lets us avoid repeating the logic in
name_to_key. The seqlock provides the retry functionality for us.

Of course it also has the disadvantage that the layout is rather more
fixed than the C version. The layout used here is probably what you want
in many cases, but the more things you enforce in the API, the more
inflexible it becomes and the fewer cases it will support. This is a
trade-off in API design.
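[Editor's note: a compilable userspace approximation of the combined shape, with std's Mutex standing in for SpinLock, a simplified stand-in Seqnum (not a faithful seqlock memory model), and plain AtomicUsize in place of SharedUsize:]

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Mutex, MutexGuard};

pub struct Seqnum { seq: AtomicUsize }
impl Seqnum {
    pub const fn new() -> Self { Seqnum { seq: AtomicUsize::new(0) } }
    pub fn begin(&self) -> usize { self.seq.load(Ordering::Acquire) }
    pub fn retry(&self, num: usize) -> bool {
        num % 2 == 1 || self.seq.load(Ordering::Acquire) != num
    }
    pub fn write_begin(&self) { self.seq.fetch_add(1, Ordering::Release); }
    pub fn write_end(&self) { self.seq.fetch_add(1, Ordering::Release); }
}

pub struct SeqLock2<T, U, V> {
    seq: Seqnum,
    seq_data: T,
    extra_data: U,
    lock: Mutex<V>,
}

impl<T, U, V> SeqLock2<T, U, V> {
    pub fn new(seq_data: T, extra_data: U, lock_data: V) -> Self {
        SeqLock2 {
            seq: Seqnum::new(),
            seq_data,
            extra_data,
            lock: Mutex::new(lock_data),
        }
    }

    pub fn extra_data(&self) -> &U { &self.extra_data }

    pub fn try_read<R>(&self, f: impl FnOnce(&T) -> R) -> Option<R> {
        let seq = self.seq.begin();
        let ret = f(&self.seq_data);
        if self.seq.retry(seq) { None } else { Some(ret) }
    }

    pub fn read_retry<R>(&self, mut f: impl FnMut(&T) -> R) -> R {
        if let Some(ret) = self.try_read(&mut f) {
            return ret;
        }
        // Writers hold the lock, so holding it ourselves excludes them.
        let _guard = self.lock.lock().unwrap();
        f(&self.seq_data)
    }

    pub fn write(&self) -> WriteGuard2<'_, T, U, V> {
        let guard = self.lock.lock().unwrap();
        self.seq.write_begin();
        WriteGuard2 { seq: self, _guard: guard }
    }
}

pub struct WriteGuard2<'a, T, U, V> {
    seq: &'a SeqLock2<T, U, V>,
    _guard: MutexGuard<'a, V>,
}
impl<T, U, V> WriteGuard2<'_, T, U, V> {
    pub fn seq_data(&self) -> &T { &self.seq.seq_data }
}
impl<T, U, V> Drop for WriteGuard2<'_, T, U, V> {
    fn drop(&mut self) { self.seq.seq.write_end(); }
}

fn main() {
    let m: SeqLock2<AtomicUsize, (), usize> =
        SeqLock2::new(AtomicUsize::new(5), (), 0);
    assert_eq!(m.read_retry(|v| v.load(Ordering::Relaxed)), 5);
    {
        let g = m.write(); // takes the lock and opens a write section
        g.seq_data().store(6, Ordering::Relaxed);
    } // guard drop ends the write section
    assert_eq!(m.read_retry(|v| v.load(Ordering::Relaxed)), 6);
}
```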

Another way to tie the seqlock with the spinlock could be to do the same
as LockedBy: https://rust.docs.kernel.org/kernel/sync/struct.LockedBy.html

Anyway, this email has become long enough. I'll stop it here.

Alice
