* Re: C aggregate passing (Rust kernel policy)
2025-02-20 13:23 ` H. Peter Anvin
@ 2025-02-20 15:17 ` Jan Engelhardt
2025-02-20 16:46 ` Linus Torvalds
` (3 more replies)
0 siblings, 4 replies; 194+ messages in thread
From: Jan Engelhardt @ 2025-02-20 15:17 UTC (permalink / raw)
To: H. Peter Anvin
Cc: Greg KH, Boqun Feng, Miguel Ojeda, Christoph Hellwig,
rust-for-linux, Linus Torvalds, David Airlie, linux-kernel,
ksummit
On Thursday 2025-02-20 14:23, H. Peter Anvin wrote:
>
>People writing C seem to have a real aversion for using structures
>as values (arguments, return values or assignments) even though that
>has been valid since at least C90 and can genuinely produce better
>code in some cases.
The aversion stems from compilers producing "worse" ASM to this
date, as in this case for example:
```c
#include <sys/stat.h>
extern struct stat fff();
struct stat __attribute__((noinline)) fff()
{
struct stat sb = {};
stat(".", &sb);
return sb;
}
```
Build as C++ and C and compare.
$ g++-15 -std=c++23 -O2 -x c++ -c x.c && objdump -Mintel -d x.o
$ gcc-15 -std=c23 -O2 -c x.c && objdump -Mintel -d x.o
Returning aggregates in C++ is often implemented with a secret extra
pointer argument passed to the function. The C backend does not
perform that kind of transformation automatically. I surmise ABI reasons.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-20 15:17 ` C aggregate passing (Rust kernel policy) Jan Engelhardt
@ 2025-02-20 16:46 ` Linus Torvalds
2025-02-20 20:34 ` H. Peter Anvin
` (2 subsequent siblings)
3 siblings, 0 replies; 194+ messages in thread
From: Linus Torvalds @ 2025-02-20 16:46 UTC (permalink / raw)
To: Jan Engelhardt
Cc: H. Peter Anvin, Greg KH, Boqun Feng, Miguel Ojeda,
Christoph Hellwig, rust-for-linux, David Airlie, linux-kernel,
ksummit
On Thu, 20 Feb 2025 at 07:17, Jan Engelhardt <ej@inai.de> wrote:
>
>
> On Thursday 2025-02-20 14:23, H. Peter Anvin wrote:
> >
> >People writing C seem to have a real aversion for using structures
> >as values (arguments, return values or assignments) even though that
> >has been valid since at least C90 and can genuinely produce better
> >code in some cases.
>
> The aversion stems from compilers producing "worse" ASM to this
> date, as in this case for example:
We actually use structures for arguments and return values in the
kernel, and it really does generate better code - but only for
specific situations.
In particular, it really only works well for structures that fit in
two registers. That's the magic cut-off point, partly due calling
convention rules, but also due to compiler implementation issues (ie
gcc has lots of special code for two registers, I am pretty sure clang
does too).
So in the kernel, we use this whole "pass structures around by value"
(either as arguments or return values) mainly in very specific areas.
The main - and historical: we've been doing it for decades - case is
the page table entries. But there are other cases where it happens.
The other problem with aggregate data particularly for return values
is that it gets quite syntactically ugly in C. You can't do ad-hoc
things like
{ a, b } = function_with_two_return_values();
like you can in some other languages (eg python), so it only tends to
work cleanly only with things that really are "one" thing, and it gets
pretty ugly if you want to return something like an error value in
addition to some other thing.
Again, page table entries are a perfect example of where passing
aggregate values around works really well, and we have done it for a
long long time because of that.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-20 15:17 ` C aggregate passing (Rust kernel policy) Jan Engelhardt
2025-02-20 16:46 ` Linus Torvalds
@ 2025-02-20 20:34 ` H. Peter Anvin
2025-02-21 8:31 ` HUANG Zhaobin
2025-02-21 18:34 ` David Laight
3 siblings, 0 replies; 194+ messages in thread
From: H. Peter Anvin @ 2025-02-20 20:34 UTC (permalink / raw)
To: Jan Engelhardt
Cc: Greg KH, Boqun Feng, Miguel Ojeda, Christoph Hellwig,
rust-for-linux, Linus Torvalds, David Airlie, linux-kernel,
ksummit
On February 20, 2025 7:17:07 AM PST, Jan Engelhardt <ej@inai.de> wrote:
>
>On Thursday 2025-02-20 14:23, H. Peter Anvin wrote:
>>
>>People writing C seem to have a real aversion for using structures
>>as values (arguments, return values or assignments) even though that
>>has been valid since at least C90 and can genuinely produce better
>>code in some cases.
>
>The aversion stems from compilers producing "worse" ASM to this
>date, as in this case for example:
>
>```c
>#include <sys/stat.h>
>extern struct stat fff();
>struct stat __attribute__((noinline)) fff()
>{
> struct stat sb = {};
> stat(".", &sb);
> return sb;
>}
>```
>
>Build as C++ and C and compare.
>
>$ g++-15 -std=c++23 -O2 -x c++ -c x.c && objdump -Mintel -d x.o
>$ gcc-15 -std=c23 -O2 -c x.c && objdump -Mintel -d x.o
>
>Returning aggregates in C++ is often implemented with a secret extra
>pointer argument passed to the function. The C backend does not
>perform that kind of transformation automatically. I surmise ABI reasons.
The ABI is exactly the same for C and C++ in that case (hidden pointer), so that would be a code quality bug.
But I expect that that is a classic case of "no one is using it, so no one is optimizing it, so no one is using it." ... and so it has been stuck for 35 years.
But as Linus pointed out, even the C backend does quite well if the aggregate fits in two registers; pretty much every ABI I have seen pass two-machine-word return values in registers (even the ones that pass arguments on the stack.)
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-20 15:17 ` C aggregate passing (Rust kernel policy) Jan Engelhardt
2025-02-20 16:46 ` Linus Torvalds
2025-02-20 20:34 ` H. Peter Anvin
@ 2025-02-21 8:31 ` HUANG Zhaobin
2025-02-21 18:34 ` David Laight
3 siblings, 0 replies; 194+ messages in thread
From: HUANG Zhaobin @ 2025-02-21 8:31 UTC (permalink / raw)
To: ej
Cc: airlied, boqun.feng, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux, torvalds
On Thu, 20 Feb 2025 16:17:07 +0100 (CET), Jan Engelhardt <ej@inai.de> wrote:
>
> Returning aggregates in C++ is often implemented with a secret extra
> pointer argument passed to the function. The C backend does not
> perform that kind of transformation automatically. I surmise ABI reasons.
No, in both C and C++, fff accepts a secret extra pointer argument.
https://godbolt.org/z/13K9aEffe
For gcc, the difference is that `sb` is allocated then copied back in C,
while in C++ NRVO is applied so there is no extra allocation and copy.
Clang does NRVO for both C and C++ in this case, thus generating exactly
the same codes for them.
I have no idea why gcc doesn't do the same.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-20 15:17 ` C aggregate passing (Rust kernel policy) Jan Engelhardt
` (2 preceding siblings ...)
2025-02-21 8:31 ` HUANG Zhaobin
@ 2025-02-21 18:34 ` David Laight
2025-02-21 19:12 ` Linus Torvalds
2025-02-21 20:06 ` Jan Engelhardt
3 siblings, 2 replies; 194+ messages in thread
From: David Laight @ 2025-02-21 18:34 UTC (permalink / raw)
To: Jan Engelhardt
Cc: H. Peter Anvin, Greg KH, Boqun Feng, Miguel Ojeda,
Christoph Hellwig, rust-for-linux, Linus Torvalds, David Airlie,
linux-kernel, ksummit
On Thu, 20 Feb 2025 16:17:07 +0100 (CET)
Jan Engelhardt <ej@inai.de> wrote:
> On Thursday 2025-02-20 14:23, H. Peter Anvin wrote:
> >
> >People writing C seem to have a real aversion for using structures
> >as values (arguments, return values or assignments) even though that
> >has been valid since at least C90 and can genuinely produce better
> >code in some cases.
>
> The aversion stems from compilers producing "worse" ASM to this
> date, as in this case for example:
>
> ```c
> #include <sys/stat.h>
> extern struct stat fff();
> struct stat __attribute__((noinline)) fff()
> {
> struct stat sb = {};
> stat(".", &sb);
> return sb;
> }
> ```
>
> Build as C++ and C and compare.
>
> $ g++-15 -std=c++23 -O2 -x c++ -c x.c && objdump -Mintel -d x.o
> $ gcc-15 -std=c23 -O2 -c x.c && objdump -Mintel -d x.o
>
> Returning aggregates in C++ is often implemented with a secret extra
> pointer argument passed to the function. The C backend does not
> perform that kind of transformation automatically. I surmise ABI reasons.
Have you really looked at the generated code?
For anything non-trivial if gets truly horrid.
To pass a class by value the compiler has to call the C++ copy-operator to
generate a deep copy prior to the call, and then call the destructor after
the function returns - compare against passing a pointer to an existing
item (and not letting it be written to).
Returning a class member is probably worse and leads to nasty bugs.
In general the called code will have to do a deep copy from the item
being returned and then (quite likely) call the destructor for the
local variable being returned (if a function always returns a specific
local then the caller-provided temporary might be usable).
The calling code now has a temporary local variable that is going
to go out of scope (and be destructed) very shortly - I think the
next sequence point.
So you have lots of constructors, copy-operators and destructors
being called.
Then you get code like:
const char *foo = data.func().c_str();
very easily written looks fine, but foo points to garbage.
I've been going through some c++ code pretty much removing all the
places that classes get returned by value.
You can return a reference - that doesn't go out of scope.
Or, since most of the culprits are short std::string, replace them by char[].
Code is better, shorter, and actually less buggy.
(Apart from the fact that c++ makes it hard to ensure all the non-class
members are initialised.)
As Linus said, most modern ABI pass short structures in one or two registers
(or stack slots).
But aggregate returns are always done by passing a hidden pointer argument.
It is annoying that double-sized integers (u64 on 32bit and u128 on 64bit)
are returned in a register pair - but similar sized structures have to be
returned by value.
It is possible to get around this with #defines that convert the value
to a big integer (etc) - but I don't remember that actually being done.
David
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-21 18:34 ` David Laight
@ 2025-02-21 19:12 ` Linus Torvalds
2025-02-21 20:07 ` comex
2025-02-21 21:45 ` David Laight
2025-02-21 20:06 ` Jan Engelhardt
1 sibling, 2 replies; 194+ messages in thread
From: Linus Torvalds @ 2025-02-21 19:12 UTC (permalink / raw)
To: David Laight
Cc: Jan Engelhardt, H. Peter Anvin, Greg KH, Boqun Feng, Miguel Ojeda,
Christoph Hellwig, rust-for-linux, David Airlie, linux-kernel,
ksummit
On Fri, 21 Feb 2025 at 10:34, David Laight <david.laight.linux@gmail.com> wrote:
>
> As Linus said, most modern ABI pass short structures in one or two registers
> (or stack slots).
> But aggregate returns are always done by passing a hidden pointer argument.
>
> It is annoying that double-sized integers (u64 on 32bit and u128 on 64bit)
> are returned in a register pair - but similar sized structures have to be
> returned by value.
No, they really don't. At least not on x86 and arm64 with our ABI.
Two-register structures get returned in registers too.
Try something like this:
struct a {
unsigned long val1, val2;
} function(void)
{ return (struct a) { 5, 100 }; }
and you'll see both gcc and clang generate
movl $5, %eax
movl $100, %edx
retq
(and you'll similar code on other architectures).
But it really is just that the two-register case is special.
Immediately when it grows past that size then yes, it ends up being
returned through indirect memory.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-21 18:34 ` David Laight
2025-02-21 19:12 ` Linus Torvalds
@ 2025-02-21 20:06 ` Jan Engelhardt
2025-02-21 20:23 ` Laurent Pinchart
2025-02-21 20:26 ` Linus Torvalds
1 sibling, 2 replies; 194+ messages in thread
From: Jan Engelhardt @ 2025-02-21 20:06 UTC (permalink / raw)
To: David Laight
Cc: H. Peter Anvin, Greg KH, Boqun Feng, Miguel Ojeda,
Christoph Hellwig, rust-for-linux, Linus Torvalds, David Airlie,
linux-kernel, ksummit
On Friday 2025-02-21 19:34, David Laight wrote:
>>
>> Returning aggregates in C++ is often implemented with a secret extra
>> pointer argument passed to the function. The C backend does not
>> perform that kind of transformation automatically. I surmise ABI reasons.
>
>Have you really looked at the generated code?
>For anything non-trivial if gets truly horrid.
>
>To pass a class by value the compiler has to call the C++ copy-operator to
>generate a deep copy prior to the call, and then call the destructor after
>the function returns - compare against passing a pointer to an existing
>item (and not letting it be written to).
And that is why people generally don't pass aggregates by value,
irrespective of the programming language.
>Returning a class member is probably worse and leads to nasty bugs.
>In general the called code will have to do a deep copy from the item
>being returned
People have thought of that already and you can just
`return std::move(a.b);`.
>Then you get code like:
> const char *foo = data.func().c_str();
>very easily written looks fine, but foo points to garbage.
Because foo is non-owning, and the only owner has gone out of scope.
You have to be wary of that.
>You can return a reference - that doesn't go out of scope.
That depends on the refererred item.
string &f() { string z; return z; }
is going to explode (despite returning a reference).
>(Apart from the fact that c++ makes it hard to ensure all the non-class
>members are initialised.)
struct stat x{};
struct stat x = {};
all of x's members (which are scalar and thus non-class) are
initialized. The second line even works in C.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-21 19:12 ` Linus Torvalds
@ 2025-02-21 20:07 ` comex
2025-02-21 21:45 ` David Laight
1 sibling, 0 replies; 194+ messages in thread
From: comex @ 2025-02-21 20:07 UTC (permalink / raw)
To: Linus Torvalds
Cc: David Laight, Jan Engelhardt, H. Peter Anvin, Greg KH, Boqun Feng,
Miguel Ojeda, Christoph Hellwig, rust-for-linux, David Airlie,
linux-kernel, ksummit
> On Feb 21, 2025, at 11:12 AM, Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> On Fri, 21 Feb 2025 at 10:34, David Laight <david.laight.linux@gmail.com> wrote:
>>
>> As Linus said, most modern ABI pass short structures in one or two registers
>> (or stack slots).
>> But aggregate returns are always done by passing a hidden pointer argument.
>>
>> It is annoying that double-sized integers (u64 on 32bit and u128 on 64bit)
>> are returned in a register pair - but similar sized structures have to be
>> returned by value.
>
> No, they really don't. At least not on x86 and arm64 with our ABI.
> Two-register structures get returned in registers too.
This does happen on older ABIs though.
With default compiler flags, two-register structures get returned on the stack on 32-bit x86, 32-bit ARM, 32-bit MIPS, both 32- and 64-bit POWER (but not power64le), and 32-bit SPARC. On most of those, double-register-sized integers still get returned in registers.
I tested this with GCC and Clang on Compiler Explorer:
https://godbolt.org/z/xe43Wdo5h
Again, that’s with default compiler flags. On 32-bit x86, Linux passes -freg-struct-return which avoids this problem. But I don’t know whether or not there’s anything similar on other architectures. This could be easily answered by checking actual kernel binaries, but I didn’t :)
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-21 20:06 ` Jan Engelhardt
@ 2025-02-21 20:23 ` Laurent Pinchart
2025-02-21 20:24 ` Laurent Pinchart
2025-02-21 22:02 ` David Laight
2025-02-21 20:26 ` Linus Torvalds
1 sibling, 2 replies; 194+ messages in thread
From: Laurent Pinchart @ 2025-02-21 20:23 UTC (permalink / raw)
To: Jan Engelhardt
Cc: David Laight, H. Peter Anvin, Greg KH, Boqun Feng, Miguel Ojeda,
Christoph Hellwig, rust-for-linux, Linus Torvalds, David Airlie,
linux-kernel, ksummit
On Fri, Feb 21, 2025 at 09:06:14PM +0100, Jan Engelhardt wrote:
> On Friday 2025-02-21 19:34, David Laight wrote:
> >>
> >> Returning aggregates in C++ is often implemented with a secret extra
> >> pointer argument passed to the function. The C backend does not
> >> perform that kind of transformation automatically. I surmise ABI reasons.
> >
> > Have you really looked at the generated code?
> > For anything non-trivial if gets truly horrid.
> >
> > To pass a class by value the compiler has to call the C++ copy-operator to
> > generate a deep copy prior to the call, and then call the destructor after
> > the function returns - compare against passing a pointer to an existing
> > item (and not letting it be written to).
>
> And that is why people generally don't pass aggregates by value,
> irrespective of the programming language.
It's actually sometimes more efficient to pass aggregates by value.
Considering std::string for instance,
std::string global;
void setSomething(std::string s)
{
global = std::move(s);
}
void foo(int x)
{
std::string s = std::to_string(x);
setSomething(std::move(s));
}
Passing by value is the most efficient option. The backing storage for
the string is allocated once in foo(). If you instead did
std::string global;
void setSomething(const std::string &s)
{
global = s;
}
void foo(int x)
{
std::string s = std::to_string(x);
setSomething(s);
}
then the data would have to be copied when assigned global.
The std::string object itself needs to be copied in the first case of
course, but that doesn't require heap allocation. The best solution
depends on the type of aggregates you need to pass. It's one of the
reasons string handling is messy in C++, due to the need to interoperate
with zero-terminated strings, the optimal API convention depends on the
expected usage pattern in both callers and callees. std::string_view is
no silver bullet :-(
> > Returning a class member is probably worse and leads to nasty bugs.
> > In general the called code will have to do a deep copy from the item
> > being returned
>
> People have thought of that already and you can just
> `return std::move(a.b);`.
Doesn't that prevent NRVO (named return value optimization) in C++ ?
Starting in C++17, compilers are required to perform copy ellision.
> > Then you get code like:
> > const char *foo = data.func().c_str();
> > very easily written looks fine, but foo points to garbage.
>
> Because foo is non-owning, and the only owner has gone out of scope.
> You have to be wary of that.
>
> > You can return a reference - that doesn't go out of scope.
>
> That depends on the refererred item.
> string &f() { string z; return z; }
> is going to explode (despite returning a reference).
>
> > (Apart from the fact that c++ makes it hard to ensure all the non-class
> > members are initialised.)
>
> struct stat x{};
> struct stat x = {};
>
> all of x's members (which are scalar and thus non-class) are
> initialized. The second line even works in C.
--
Regards,
Laurent Pinchart
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-21 20:23 ` Laurent Pinchart
@ 2025-02-21 20:24 ` Laurent Pinchart
2025-02-21 22:02 ` David Laight
1 sibling, 0 replies; 194+ messages in thread
From: Laurent Pinchart @ 2025-02-21 20:24 UTC (permalink / raw)
To: Jan Engelhardt
Cc: David Laight, H. Peter Anvin, Greg KH, Boqun Feng, Miguel Ojeda,
Christoph Hellwig, rust-for-linux, Linus Torvalds, David Airlie,
linux-kernel, ksummit
On Fri, Feb 21, 2025 at 10:23:33PM +0200, Laurent Pinchart wrote:
> On Fri, Feb 21, 2025 at 09:06:14PM +0100, Jan Engelhardt wrote:
> > On Friday 2025-02-21 19:34, David Laight wrote:
> > >>
> > >> Returning aggregates in C++ is often implemented with a secret extra
> > >> pointer argument passed to the function. The C backend does not
> > >> perform that kind of transformation automatically. I surmise ABI reasons.
> > >
> > > Have you really looked at the generated code?
> > > For anything non-trivial if gets truly horrid.
> > >
> > > To pass a class by value the compiler has to call the C++ copy-operator to
> > > generate a deep copy prior to the call, and then call the destructor after
> > > the function returns - compare against passing a pointer to an existing
> > > item (and not letting it be written to).
> >
> > And that is why people generally don't pass aggregates by value,
> > irrespective of the programming language.
>
> It's actually sometimes more efficient to pass aggregates by value.
> Considering std::string for instance,
>
> std::string global;
>
> void setSomething(std::string s)
> {
> global = std::move(s);
> }
>
> void foo(int x)
> {
> std::string s = std::to_string(x);
>
> setSomething(std::move(s));
> }
>
> Passing by value is the most efficient option. The backing storage for
> the string is allocated once in foo(). If you instead did
>
> std::string global;
>
> void setSomething(const std::string &s)
> {
> global = s;
> }
>
> void foo(int x)
> {
> std::string s = std::to_string(x);
>
> setSomething(s);
> }
>
> then the data would have to be copied when assigned global.
>
> The std::string object itself needs to be copied in the first case of
> course, but that doesn't require heap allocation. The best solution
> depends on the type of aggregates you need to pass. It's one of the
> reasons string handling is messy in C++, due to the need to interoperate
> with zero-terminated strings, the optimal API convention depends on the
> expected usage pattern in both callers and callees. std::string_view is
> no silver bullet :-(
>
> > > Returning a class member is probably worse and leads to nasty bugs.
> > > In general the called code will have to do a deep copy from the item
> > > being returned
> >
> > People have thought of that already and you can just
> > `return std::move(a.b);`.
>
> Doesn't that prevent NRVO (named return value optimization) in C++ ?
> Starting in C++17, compilers are required to perform copy ellision.
Ah my bad, I missed the 'a.'. NRVO isn't possible.
> > > Then you get code like:
> > > const char *foo = data.func().c_str();
> > > very easily written looks fine, but foo points to garbage.
> >
> > Because foo is non-owning, and the only owner has gone out of scope.
> > You have to be wary of that.
> >
> > > You can return a reference - that doesn't go out of scope.
> >
> > That depends on the refererred item.
> > string &f() { string z; return z; }
> > is going to explode (despite returning a reference).
> >
> > > (Apart from the fact that c++ makes it hard to ensure all the non-class
> > > members are initialised.)
> >
> > struct stat x{};
> > struct stat x = {};
> >
> > all of x's members (which are scalar and thus non-class) are
> > initialized. The second line even works in C.
--
Regards,
Laurent Pinchart
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-21 20:06 ` Jan Engelhardt
2025-02-21 20:23 ` Laurent Pinchart
@ 2025-02-21 20:26 ` Linus Torvalds
1 sibling, 0 replies; 194+ messages in thread
From: Linus Torvalds @ 2025-02-21 20:26 UTC (permalink / raw)
To: Jan Engelhardt
Cc: David Laight, H. Peter Anvin, Greg KH, Boqun Feng, Miguel Ojeda,
Christoph Hellwig, rust-for-linux, David Airlie, linux-kernel,
ksummit
On Fri, 21 Feb 2025 at 12:06, Jan Engelhardt <ej@inai.de> wrote:
>
> >(Apart from the fact that c++ makes it hard to ensure all the non-class
> >members are initialised.)
>
> struct stat x{};
> struct stat x = {};
>
> all of x's members (which are scalar and thus non-class) are
> initialized. The second line even works in C.
Sadly, it doesn't work very reliably.
Yes, if it's the empty initializer, the C standard afaik requires that
it clear everything.
But if you make the mistake of thinking that you want to initialize
one field to anything but zero, and instead do the initializer like
this:
struct stat x = { .field = 7 };
suddenly padding and various union members can be left uninitialized.
Gcc used to initialize it all, but as of gcc-15 it apparently says
"Oh, the standard allows this crazy behavior, so we'll do it by
default".
Yeah. People love to talk about "safe C", but compiler people have
actively tried to make C unsafer for decades. The C standards
committee has been complicit. I've ranted about the crazy C alias
rules before.
We (now) avoid this particular pitfall in the kernel with
-fzero-init-padding-bits=all
but outside of the kernel you may need to look out for this very
subtle odd rule.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-21 19:12 ` Linus Torvalds
2025-02-21 20:07 ` comex
@ 2025-02-21 21:45 ` David Laight
2025-02-22 6:32 ` Willy Tarreau
1 sibling, 1 reply; 194+ messages in thread
From: David Laight @ 2025-02-21 21:45 UTC (permalink / raw)
To: Linus Torvalds
Cc: Jan Engelhardt, H. Peter Anvin, Greg KH, Boqun Feng, Miguel Ojeda,
Christoph Hellwig, rust-for-linux, David Airlie, linux-kernel,
ksummit
On Fri, 21 Feb 2025 11:12:27 -0800
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Fri, 21 Feb 2025 at 10:34, David Laight <david.laight.linux@gmail.com> wrote:
> >
> > As Linus said, most modern ABI pass short structures in one or two registers
> > (or stack slots).
> > But aggregate returns are always done by passing a hidden pointer argument.
> >
> > It is annoying that double-sized integers (u64 on 32bit and u128 on 64bit)
> > are returned in a register pair - but similar sized structures have to be
> > returned by value.
>
> No, they really don't. At least not on x86 and arm64 with our ABI.
> Two-register structures get returned in registers too.
>
> Try something like this:
>
> struct a {
> unsigned long val1, val2;
> } function(void)
> { return (struct a) { 5, 100 }; }
>
> and you'll see both gcc and clang generate
>
> movl $5, %eax
> movl $100, %edx
> retq
>
> (and you'll similar code on other architectures).
Humbug, I'm sure it didn't do that the last time I tried it.
David
>
> But it really is just that the two-register case is special.
> Immediately when it grows past that size then yes, it ends up being
> returned through indirect memory.
>
> Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-21 20:23 ` Laurent Pinchart
2025-02-21 20:24 ` Laurent Pinchart
@ 2025-02-21 22:02 ` David Laight
2025-02-21 22:13 ` Bart Van Assche
1 sibling, 1 reply; 194+ messages in thread
From: David Laight @ 2025-02-21 22:02 UTC (permalink / raw)
To: Laurent Pinchart
Cc: Jan Engelhardt, H. Peter Anvin, Greg KH, Boqun Feng, Miguel Ojeda,
Christoph Hellwig, rust-for-linux, Linus Torvalds, David Airlie,
linux-kernel, ksummit
On Fri, 21 Feb 2025 22:23:32 +0200
Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
> On Fri, Feb 21, 2025 at 09:06:14PM +0100, Jan Engelhardt wrote:
> > On Friday 2025-02-21 19:34, David Laight wrote:
> > >>
> > >> Returning aggregates in C++ is often implemented with a secret extra
> > >> pointer argument passed to the function. The C backend does not
> > >> perform that kind of transformation automatically. I surmise ABI reasons.
> > >
> > > Have you really looked at the generated code?
> > > For anything non-trivial if gets truly horrid.
> > >
> > > To pass a class by value the compiler has to call the C++ copy-operator to
> > > generate a deep copy prior to the call, and then call the destructor after
> > > the function returns - compare against passing a pointer to an existing
> > > item (and not letting it be written to).
> >
> > And that is why people generally don't pass aggregates by value,
> > irrespective of the programming language.
>
> It's actually sometimes more efficient to pass aggregates by value.
> Considering std::string for instance,
>
> std::string global;
>
> void setSomething(std::string s)
> {
> global = std::move(s);
> }
>
> void foo(int x)
> {
> std::string s = std::to_string(x);
>
> setSomething(std::move(s));
> }
>
> Passing by value is the most efficient option. The backing storage for
> the string is allocated once in foo(). If you instead did
>
> std::string global;
>
> void setSomething(const std::string &s)
> {
> global = s;
> }
>
> void foo(int x)
> {
> std::string s = std::to_string(x);
>
> setSomething(s);
> }
>
> then the data would have to be copied when assigned global.
>
> The std::string object itself needs to be copied in the first case of
> course, but that doesn't require heap allocation.
It is still a copy though.
And there is nothing to stop (I think even std::string) using ref-counted
buffers for large malloc()ed strings.
And, even without it, you just need access to the operator that 'moves'
the actual char data from one std::string to another.
Since that is all you are relying on.
You can then pass the std::string themselves by reference.
Although I can't remember if you can assign different allocators to
different std::string - I'm not really a C++ expert.
> The best solution
> depends on the type of aggregates you need to pass. It's one of the
> reasons string handling is messy in C++, due to the need to interoperate
> with zero-terminated strings, the optimal API convention depends on the
> expected usage pattern in both callers and callees. std::string_view is
> no silver bullet :-(
The only thing the zero-termination stops is generating sub-strings by
reference.
The bigger problem is that a C function is allowed to advance a pointer
along the array. So str.c_str() is just &str[0].
That stops any form of fragmented strings - which might be useful for
large ones, even though the cost of the accesses may well balloon.
The same is true for std::vector - it has to be implemented using realloc().
So lots of pushback() of non-trival classes gets very, very slow.
and it is what people tend to write.
David
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-21 22:02 ` David Laight
@ 2025-02-21 22:13 ` Bart Van Assche
2025-02-22 5:56 ` comex
0 siblings, 1 reply; 194+ messages in thread
From: Bart Van Assche @ 2025-02-21 22:13 UTC (permalink / raw)
To: David Laight, Laurent Pinchart
Cc: Jan Engelhardt, H. Peter Anvin, Greg KH, Boqun Feng, Miguel Ojeda,
Christoph Hellwig, rust-for-linux, Linus Torvalds, David Airlie,
linux-kernel, ksummit
On 2/21/25 2:02 PM, David Laight wrote:
> And there is nothing to stop (I think even std::string) using ref-counted
> buffers for large malloc()ed strings.
This is what an LLM told me about this topic (this matches what I
remember about the std::string implementation):
<quote>
Does the std::string implementation use a reference count?
No. [ ... ]
Why does std::string not use a reference count? Has this always been the
case?
[ ... ]
Reference counting adds overhead. Every time a string is copied or
assigned, the reference count has to be incremented or decremented, and
when it reaches zero, memory has to be deallocated. This adds both time
complexity (due to the need to update the reference count) and space
complexity (to store the count alongside the string data).
The goal with std::string is to minimize this overhead as much as
possible for the most common cases, particularly short strings, which
are frequent in real-world applications. The small string optimization
(SSO) allows short strings to be stored directly within the std::string
object itself, avoiding heap allocation and reference counting
altogether. For long strings, reference counting might not provide much
of an advantage anyway because memory management would still have to
involve the heap.
[ ... ]
Reference counting introduces unpredictable performance in terms of
memory management, especially in multithreaded applications. Each string
operation might require atomic operations on the reference count,
leading to potential contention in multithreaded environments.
[ ... ]
Initially, early implementations of std::string may have used CoW or
reference counting techniques. However, over time, as the language
evolved and as multithreading and performance became more of a priority,
the C++ standard moved away from these features. Notably, the C++11
standard explicitly banned CoW for std::string in order to avoid its
pitfalls.
[ ... ]
</quote>
Bart.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-21 22:13 ` Bart Van Assche
@ 2025-02-22 5:56 ` comex
0 siblings, 0 replies; 194+ messages in thread
From: comex @ 2025-02-22 5:56 UTC (permalink / raw)
To: Bart Van Assche
Cc: David Laight, Laurent Pinchart, Jan Engelhardt, H. Peter Anvin,
Greg KH, Boqun Feng, Miguel Ojeda, Christoph Hellwig,
rust-for-linux, Linus Torvalds, David Airlie, linux-kernel,
ksummit
> On Feb 21, 2025, at 2:13 PM, Bart Van Assche <bvanassche@acm.org> wrote:
>
> Initially, early implementations of std::string may have used CoW or reference counting techniques.
More accurately, you can’t have one without the other. std::string is mutable, so reference counting requires copy-on-write (and of course copy-on-write wouldn’t make sense without multiple references).
> Notably, the C++11 standard explicitly banned CoW for std::string in order to avoid its pitfalls.
> [ ... ]
The C++11 spec doesn’t explicitly say ‘thou shalt not copy-on-write’, but it requires std::string's operator[] to be O(1), which effectively bans it because copying is O(n).
Which forced libstdc++ to break their ABI, since they were using copy-on-write before.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-21 21:45 ` David Laight
@ 2025-02-22 6:32 ` Willy Tarreau
2025-02-22 6:37 ` Willy Tarreau
0 siblings, 1 reply; 194+ messages in thread
From: Willy Tarreau @ 2025-02-22 6:32 UTC (permalink / raw)
To: David Laight
Cc: Linus Torvalds, Jan Engelhardt, H. Peter Anvin, Greg KH,
Boqun Feng, Miguel Ojeda, Christoph Hellwig, rust-for-linux,
David Airlie, linux-kernel, ksummit
On Fri, Feb 21, 2025 at 09:45:01PM +0000, David Laight wrote:
> On Fri, 21 Feb 2025 11:12:27 -0800
> Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> > On Fri, 21 Feb 2025 at 10:34, David Laight <david.laight.linux@gmail.com> wrote:
> > >
> > > As Linus said, most modern ABI pass short structures in one or two registers
> > > (or stack slots).
> > > But aggregate returns are always done by passing a hidden pointer argument.
> > >
> > > It is annoying that double-sized integers (u64 on 32bit and u128 on 64bit)
> > > are returned in a register pair - but similar sized structures have to be
> > > returned by value.
> >
> > No, they really don't. At least not on x86 and arm64 with our ABI.
> > Two-register structures get returned in registers too.
> >
> > Try something like this:
> >
> > struct a {
> > unsigned long val1, val2;
> > } function(void)
> > { return (struct a) { 5, 100 }; }
> >
> > and you'll see both gcc and clang generate
> >
> > movl $5, %eax
> > movl $100, %edx
> > retq
> >
> > (and you'll similar code on other architectures).
>
> Humbug, I'm sure it didn't do that the last time I tried it.
You have not dreamed, most likely last time you tried it was on
a 32-bit arch like i386 or ARM. Gcc doesn't do that there, most
likely due to historic reasons that couldn't be changed later,
it passes a pointer argument to write the data there:
00000000 <fct>:
0: 8b 44 24 04 mov 0x4(%esp),%eax
4: c7 00 05 00 00 00 movl $0x5,(%eax)
a: c7 40 04 64 00 00 00 movl $0x64,0x4(%eax)
11: c2 04 00 ret $0x4
You can improve it slightly with -mregparm but that's all,
and I never found an option nor attribute to change that:
00000000 <fct>:
0: c7 00 05 00 00 00 movl $0x5,(%eax)
6: c7 40 04 64 00 00 00 movl $0x64,0x4(%eax)
d: c3 ret
ARM does the same on 32 bits:
00000000 <fct>:
0: 2105 movs r1, #5
2: 2264 movs r2, #100 ; 0x64
4: e9c0 1200 strd r1, r2, [r0]
8: 4770 bx lr
I think it's simply that this practice arrived long after these old
architectures were fairly common and it was too late to change their
ABI. But x86_64 and aarch64 had the opportunity to benefit from this.
For example, gcc-3.4 on x86_64 already does the right thing:
0000000000000000 <fct>:
0: ba 64 00 00 00 mov $0x64,%edx
5: b8 05 00 00 00 mov $0x5,%eax
a: c3 retq
So does aarch64 since the oldest gcc I have that supports it (linaro 4.7):
0000000000000000 <fct>:
0: d28000a0 mov x0, #0x5 // #5
4: d2800c81 mov x1, #0x64 // #100
8: d65f03c0 ret
For my use cases I consider that older architectures are not favored but
they are not degraded either, while newer ones do significantly benefit
from the approach, that's why I'm using it extensively.
Quite frankly, there's no reason to avoid using this for pairs of pointers
or (status,value) pairs or coordinates etc. And if you absolutely need to
also support 32-bit archs optimally, you can do it using a macro to turn
your structs to a larger register and back:
struct a {
unsigned long v1, v2;
};
#define MKPAIR(x) (((unsigned long long)(x.v1) << 32) | (x.v2))
#define GETPAIR(x) ({ unsigned long long _x = x; (struct a){ .v1 = (_x >> 32), .v2 = (_x)}; })
unsigned long long fct(void)
{
struct a a = { 5, 100 };
return MKPAIR(a);
}
long caller(void)
{
struct a a = GETPAIR(fct());
return a.v1 + a.v2;
}
00000000 <fct>:
0: b8 64 00 00 00 mov $0x64,%eax
5: ba 05 00 00 00 mov $0x5,%edx
a: c3 ret
0000000b <caller>:
b: b8 69 00 00 00 mov $0x69,%eax
10: c3 ret
But quite frankly due to their relevance these days I don't think it's
worth the effort.
Hoping this helps,
Willy
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 6:32 ` Willy Tarreau
@ 2025-02-22 6:37 ` Willy Tarreau
2025-02-22 8:41 ` David Laight
0 siblings, 1 reply; 194+ messages in thread
From: Willy Tarreau @ 2025-02-22 6:37 UTC (permalink / raw)
To: David Laight
Cc: Linus Torvalds, Jan Engelhardt, H. Peter Anvin, Greg KH,
Boqun Feng, Miguel Ojeda, Christoph Hellwig, rust-for-linux,
David Airlie, linux-kernel, ksummit
On Sat, Feb 22, 2025 at 07:32:10AM +0100, Willy Tarreau wrote:
> On Fri, Feb 21, 2025 at 09:45:01PM +0000, David Laight wrote:
> > On Fri, 21 Feb 2025 11:12:27 -0800
> > Linus Torvalds <torvalds@linux-foundation.org> wrote:
> >
> > > On Fri, 21 Feb 2025 at 10:34, David Laight <david.laight.linux@gmail.com> wrote:
> > > >
> > > > As Linus said, most modern ABI pass short structures in one or two registers
> > > > (or stack slots).
> > > > But aggregate returns are always done by passing a hidden pointer argument.
> > > >
> > > > It is annoying that double-sized integers (u64 on 32bit and u128 on 64bit)
> > > > are returned in a register pair - but similar sized structures have to be
> > > > returned by value.
> > >
> > > No, they really don't. At least not on x86 and arm64 with our ABI.
> > > Two-register structures get returned in registers too.
> > >
> > > Try something like this:
> > >
> > > struct a {
> > > unsigned long val1, val2;
> > > } function(void)
> > > { return (struct a) { 5, 100 }; }
> > >
> > > and you'll see both gcc and clang generate
> > >
> > > movl $5, %eax
> > > movl $100, %edx
> > > retq
> > >
> > > (and you'll similar code on other architectures).
> >
> > Humbug, I'm sure it didn't do that the last time I tried it.
>
> You have not dreamed, most likely last time you tried it was on
> a 32-bit arch like i386 or ARM. Gcc doesn't do that there, most
> likely due to historic reasons that couldn't be changed later,
> it passes a pointer argument to write the data there:
>
> 00000000 <fct>:
> 0: 8b 44 24 04 mov 0x4(%esp),%eax
> 4: c7 00 05 00 00 00 movl $0x5,(%eax)
> a: c7 40 04 64 00 00 00 movl $0x64,0x4(%eax)
> 11: c2 04 00 ret $0x4
>
> You can improve it slightly with -mregparm but that's all,
> and I never found an option nor attribute to change that:
>
> 00000000 <fct>:
> 0: c7 00 05 00 00 00 movl $0x5,(%eax)
> 6: c7 40 04 64 00 00 00 movl $0x64,0x4(%eax)
> d: c3 ret
>
> ARM does the same on 32 bits:
>
> 00000000 <fct>:
> 0: 2105 movs r1, #5
> 2: 2264 movs r2, #100 ; 0x64
> 4: e9c0 1200 strd r1, r2, [r0]
> 8: 4770 bx lr
>
> I think it's simply that this practice arrived long after these old
> architectures were fairly common and it was too late to change their
> ABI. But x86_64 and aarch64 had the opportunity to benefit from this.
> For example, gcc-3.4 on x86_64 already does the right thing:
>
> 0000000000000000 <fct>:
> 0: ba 64 00 00 00 mov $0x64,%edx
> 5: b8 05 00 00 00 mov $0x5,%eax
> a: c3 retq
>
> So does aarch64 since the oldest gcc I have that supports it (linaro 4.7):
>
> 0000000000000000 <fct>:
> 0: d28000a0 mov x0, #0x5 // #5
> 4: d2800c81 mov x1, #0x64 // #100
> 8: d65f03c0 ret
>
> For my use cases I consider that older architectures are not favored but
> they are not degraded either, while newer ones do significantly benefit
> from the approach, that's why I'm using it extensively.
>
> Quite frankly, there's no reason to avoid using this for pairs of pointers
> or (status,value) pairs or coordinates etc. And if you absolutely need to
> also support 32-bit archs optimally, you can do it using a macro to turn
> your structs to a larger register and back:
>
> struct a {
> unsigned long v1, v2;
> };
>
> #define MKPAIR(x) (((unsigned long long)(x.v1) << 32) | (x.v2))
> #define GETPAIR(x) ({ unsigned long long _x = x; (struct a){ .v1 = (_x >> 32), .v2 = (_x)}; })
>
> unsigned long long fct(void)
> {
> struct a a = { 5, 100 };
> return MKPAIR(a);
> }
>
> long caller(void)
> {
> struct a a = GETPAIR(fct());
> return a.v1 + a.v2;
> }
>
> 00000000 <fct>:
> 0: b8 64 00 00 00 mov $0x64,%eax
> 5: ba 05 00 00 00 mov $0x5,%edx
> a: c3 ret
>
> 0000000b <caller>:
> b: b8 69 00 00 00 mov $0x69,%eax
> 10: c3 ret
>
> But quite frankly due to their relevance these days I don't think it's
> worth the effort.
Update: I found in my code a comment suggesting that it works when using
-freg-struct (which is in fact -freg-struct-return) which works both on
i386 and ARM. I just didn't remember about this and couldn't find it when
looking at gcc docs.
Willy
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 6:37 ` Willy Tarreau
@ 2025-02-22 8:41 ` David Laight
2025-02-22 9:11 ` Willy Tarreau
0 siblings, 1 reply; 194+ messages in thread
From: David Laight @ 2025-02-22 8:41 UTC (permalink / raw)
To: Willy Tarreau
Cc: Linus Torvalds, Jan Engelhardt, H. Peter Anvin, Greg KH,
Boqun Feng, Miguel Ojeda, Christoph Hellwig, rust-for-linux,
David Airlie, linux-kernel, ksummit
On Sat, 22 Feb 2025 07:37:30 +0100
Willy Tarreau <w@1wt.eu> wrote:
...
> Update: I found in my code a comment suggesting that it works when using
> -freg-struct (which is in fact -freg-struct-return) which works both on
> i386 and ARM.
The problem is that you need it to be an __attribute__(()) so it can
be per-function without breaking ABI.
> I just didn't remember about this and couldn't find it when
> looking at gcc docs.
I can never find anything in there either.
And then I wish they say when it was introduced.
David
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 8:41 ` David Laight
@ 2025-02-22 9:11 ` Willy Tarreau
0 siblings, 0 replies; 194+ messages in thread
From: Willy Tarreau @ 2025-02-22 9:11 UTC (permalink / raw)
To: David Laight
Cc: Linus Torvalds, Jan Engelhardt, H. Peter Anvin, Greg KH,
Boqun Feng, Miguel Ojeda, Christoph Hellwig, rust-for-linux,
David Airlie, linux-kernel, ksummit
On Sat, Feb 22, 2025 at 08:41:12AM +0000, David Laight wrote:
> On Sat, 22 Feb 2025 07:37:30 +0100
> Willy Tarreau <w@1wt.eu> wrote:
>
> ...
> > Update: I found in my code a comment suggesting that it works when using
> > -freg-struct (which is in fact -freg-struct-return) which works both on
> > i386 and ARM.
>
> The problem is that you need it to be an __attribute__(()) so it can
> be per-function without breaking ABI.
Yes I agree that it would be better.
> > I just didn't remember about this and couldn't find it when
> > looking at gcc docs.
>
> I can never find anything in there either.
> And then I wish they say when it was introduced.
Same here. At least on gcc-2.95 it was already recognized and worked fine:
- without:
fct:
movl 4(%esp),%ecx
movl $5,%eax
movl $100,%edx
movl %eax,(%ecx)
movl %edx,4(%ecx)
movl %ecx,%eax
ret $4
- with:
fct:
movl $5,%eax
movl $100,%edx
ret
Willy
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
@ 2025-02-22 10:06 Ventura Jack
2025-02-22 14:15 ` Gary Guo
0 siblings, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-22 10:06 UTC (permalink / raw)
To: torvalds
Cc: airlied, boqun.feng, david.laight.linux, ej, gregkh, hch, hpa,
ksummit, linux-kernel, miguel.ojeda.sandonis, rust-for-linux
>Gcc used to initialize it all, but as of gcc-15 it apparently says
>"Oh, the standard allows this crazy behavior, so we'll do it by
default".
>
>Yeah. People love to talk about "safe C", but compiler people have
>actively tried to make C unsafer for decades. The C standards
>committee has been complicit. I've ranted about the crazy C alias
>rules before.
Unsafe Rust actually has way stricter rules for aliasing than C. For
you and others who don't like C's aliasing, it may be best to avoid
unsafe Rust.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 10:06 C aggregate passing (Rust kernel policy) Ventura Jack
@ 2025-02-22 14:15 ` Gary Guo
2025-02-22 15:03 ` Ventura Jack
0 siblings, 1 reply; 194+ messages in thread
From: Gary Guo @ 2025-02-22 14:15 UTC (permalink / raw)
To: Ventura Jack
Cc: torvalds, airlied, boqun.feng, david.laight.linux, ej, gregkh,
hch, hpa, ksummit, linux-kernel, miguel.ojeda.sandonis,
rust-for-linux
On Sat, 22 Feb 2025 03:06:44 -0700
Ventura Jack <venturajack85@gmail.com> wrote:
> >Gcc used to initialize it all, but as of gcc-15 it apparently says
> >"Oh, the standard allows this crazy behavior, so we'll do it by
> default".
> >
> >Yeah. People love to talk about "safe C", but compiler people have
> >actively tried to make C unsafer for decades. The C standards
> >committee has been complicit. I've ranted about the crazy C alias
> >rules before.
>
> Unsafe Rust actually has way stricter rules for aliasing than C. For
> you and others who don't like C's aliasing, it may be best to avoid
> unsafe Rust.
>
I think the frequently criticized C aliasing rules are *type-based
aliasing*. Rust does not have type based aliasing restrictions.
It does have mutability based aliasing rules, but that's easier to
reason about, and we have mechanisms to disable them if needed at much
finer granularity.
Best,
Gary
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 14:15 ` Gary Guo
@ 2025-02-22 15:03 ` Ventura Jack
2025-02-22 18:54 ` Kent Overstreet
0 siblings, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-22 15:03 UTC (permalink / raw)
To: Gary Guo
Cc: torvalds, airlied, boqun.feng, david.laight.linux, ej, gregkh,
hch, hpa, ksummit, linux-kernel, miguel.ojeda.sandonis,
rust-for-linux
On Sat, Feb 22, 2025 at 7:15 AM Gary Guo <gary@garyguo.net> wrote:
>
> On Sat, 22 Feb 2025 03:06:44 -0700
> Ventura Jack <venturajack85@gmail.com> wrote:
>
> > >Gcc used to initialize it all, but as of gcc-15 it apparently says
> > >"Oh, the standard allows this crazy behavior, so we'll do it by
> > default".
> > >
> > >Yeah. People love to talk about "safe C", but compiler people have
> > >actively tried to make C unsafer for decades. The C standards
> > >committee has been complicit. I've ranted about the crazy C alias
> > >rules before.
> >
> > Unsafe Rust actually has way stricter rules for aliasing than C. For
> > you and others who don't like C's aliasing, it may be best to avoid
> > unsafe Rust.
> >
>
> I think the frequently criticized C aliasing rules are *type-based
> aliasing*. Rust does not have type based aliasing restrictions.
>
> It does have mutability based aliasing rules, but that's easier to
> reason about, and we have mechanisms to disable them if needed at much
> finer granularity.
>
> Best,
> Gary
Are you sure that unsafe Rust has easier to reason about aliasing
rules? Last I checked, there are two different models related to
aliasing, tree borrows and stacked borrows, both at an experimental
research stage. And the rules for aliasing in unsafe Rust are not yet
fully defined. https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/
has some commentary on the aliasing rules.
From the blog post:
>The aliasing rules in Rust are not fully defined.
Other blog posts and videos have likewise described unsafe Rust as
being harder than C to reason about and get correct, explicitly
mentioning the aliasing rules of unsafe Rust as being one reason
unsafe Rust is harder than C.
One trade-off then being that unsafe Rust is not all of Rust, unlike C
that currently has no such UB safe-unsafe split. And so you only need
to understand the unsafe Rust aliasing rules when working with unsafe
Rust. And can ignore them when working with safe Rust.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 15:03 ` Ventura Jack
@ 2025-02-22 18:54 ` Kent Overstreet
2025-02-22 19:18 ` Linus Torvalds
2025-02-22 19:41 ` Miguel Ojeda
0 siblings, 2 replies; 194+ messages in thread
From: Kent Overstreet @ 2025-02-22 18:54 UTC (permalink / raw)
To: Ventura Jack
Cc: Gary Guo, torvalds, airlied, boqun.feng, david.laight.linux, ej,
gregkh, hch, hpa, ksummit, linux-kernel, miguel.ojeda.sandonis,
rust-for-linux
On Sat, Feb 22, 2025 at 08:03:29AM -0700, Ventura Jack wrote:
> On Sat, Feb 22, 2025 at 7:15 AM Gary Guo <gary@garyguo.net> wrote:
> >
> > On Sat, 22 Feb 2025 03:06:44 -0700
> > Ventura Jack <venturajack85@gmail.com> wrote:
> >
> > > >Gcc used to initialize it all, but as of gcc-15 it apparently says
> > > >"Oh, the standard allows this crazy behavior, so we'll do it by
> > > default".
> > > >
> > > >Yeah. People love to talk about "safe C", but compiler people have
> > > >actively tried to make C unsafer for decades. The C standards
> > > >committee has been complicit. I've ranted about the crazy C alias
> > > >rules before.
> > >
> > > Unsafe Rust actually has way stricter rules for aliasing than C. For
> > > you and others who don't like C's aliasing, it may be best to avoid
> > > unsafe Rust.
> > >
> >
> > I think the frequently criticized C aliasing rules are *type-based
> > aliasing*. Rust does not have type based aliasing restrictions.
> >
> > It does have mutability based aliasing rules, but that's easier to
> > reason about, and we have mechanisms to disable them if needed at much
> > finer granularity.
> >
> > Best,
> > Gary
>
> Are you sure that unsafe Rust has easier to reason about aliasing
> rules? Last I checked, there are two different models related to
> aliasing, tree borrows and stacked borrows, both at an experimental
> research stage. And the rules for aliasing in unsafe Rust are not yet
> fully defined. https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/
> has some commentary on the aliasing rules.
>
> From the blog post:
> >The aliasing rules in Rust are not fully defined.
>
> Other blog posts and videos have likewise described unsafe Rust as
> being harder than C to reason about and get correct, explicitly
> mentioning the aliasing rules of unsafe Rust as being one reason
> unsafe Rust is harder than C.
I believe (Miguel was talking about this at one of the conferences,
maybe he'll chime in) that there was work in progress to solidify the
aliasing and ownership rules at the unsafe level, but it sounded like it
may have still been an area of research.
If that work is successful it could lead to significant improvements in
code generation, since aliasing causes a lot of unnecessary spills and
reloads - VLIW could finally become practical.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 18:54 ` Kent Overstreet
@ 2025-02-22 19:18 ` Linus Torvalds
2025-02-22 20:00 ` Kent Overstreet
2025-02-23 15:30 ` Ventura Jack
2025-02-22 19:41 ` Miguel Ojeda
1 sibling, 2 replies; 194+ messages in thread
From: Linus Torvalds @ 2025-02-22 19:18 UTC (permalink / raw)
To: Kent Overstreet
Cc: Ventura Jack, Gary Guo, airlied, boqun.feng, david.laight.linux,
ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Sat, 22 Feb 2025 at 10:54, Kent Overstreet <kent.overstreet@linux.dev> wrote:
>
> If that work is successful it could lead to significant improvements in
> code generation, since aliasing causes a lot of unnecessary spills and
> reloads - VLIW could finally become practical.
No.
Compiler people think aliasing matters. It very seldom does. And VLIW
will never become practical for entirely unrelated reasons (read: OoO
is fundamentally superior to VLIW in general purpose computing).
Aliasing is one of those bug-bears where compiler people can make
trivial code optimizations that look really impressive. So compiler
people *love* having simplistic aliasing rules that don't require real
analysis, because the real analysis is hard (not just expensive, but
basically unsolvable).
And they matter mainly on bad CPUs and HPC-style loads, or on trivial
example code. And for vectorization.
And the sane model for those was to just have the HPC people say what
the aliasing rules were (ie the C "restrict" keyword), but because it
turns out that nobody wants to use that, and because one of the main
targets was HPC where there was a very clear type distinction between
integer indexes and floating point arrays, some "clever" person
thought "why don't we use that obvious distinction to say that things
don't alias". Because then you didn't have to add "restrict" modifiers
to your compiler benchmarks, you could just use the existing syntax
("double *").
And so they made everything worse for everybody else, because it made
C HPC code run as fast as the old Fortran code, and the people who
cared about DGEMM and BLAS were happy. And since that was how you
defined supercomputer speeds (before AI), that largely pointless
benchmark was a BigDeal(tm).
End result: if you actually care about HPC and vectorization, just use
'restrict'. If you want to make it better (because 'restrict'
certainly isn't perfect either), extend on the concept. Don't make
things worse for everybody else by introducing stupid language rules
that are fundamentally based on "the compiler can generate code better
by relying on undefined behavior".
The C standards body has been much too eager to embrace "undefined behavior".
In original C, it was almost entirely about either hardware
implementation issues or about "you got your pointer arithetic wrong,
and the source code is undefined, so the result is undefined".
Together with some (very unfortunate) order of operations and sequence
point issues.
But instead of trying to tighten that up (which *has* happened: the
sequence point rules _have_ actually become better!) and turning the
language into a more reliable one by making for _fewer_ undefined or
platform-defined things, many C language features have been about
extending on the list of undefined behaviors.
The kernel basically turns all that off, as much as possible. Overflow
isn't undefined in the kernel. Aliasing isn't undefined in the kernel.
Things like that.
And making the rules stricter makes almost no difference for code
generation in practice. Really. The arguments for the garbage that is
integer overflow or 'strict aliasing' in C were always just wrong.
When 'integer overflow' means that you can _sometimes_ remove one
single ALU operation in *some* loops, but the cost of it is that you
potentially introduced some seriously subtle security bugs, I think we
know it was the wrong thing to do.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 18:54 ` Kent Overstreet
2025-02-22 19:18 ` Linus Torvalds
@ 2025-02-22 19:41 ` Miguel Ojeda
2025-02-22 20:49 ` Kent Overstreet
1 sibling, 1 reply; 194+ messages in thread
From: Miguel Ojeda @ 2025-02-22 19:41 UTC (permalink / raw)
To: Kent Overstreet
Cc: Ventura Jack, Gary Guo, torvalds, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
rust-for-linux, Ralf Jung
On Sat, Feb 22, 2025 at 7:54 PM Kent Overstreet
<kent.overstreet@linux.dev> wrote:
>
> I believe (Miguel was talking about this at one of the conferences,
> maybe he'll chime in) that there was work in progress to solidify the
> aliasing and ownership rules at the unsafe level, but it sounded like it
> may have still been an area of research.
Not sure what I said, but Cc'ing Ralf in case he has time and wants to
share something on this (thanks in advance!).
From a quick look, Tree Borrows was submitted for publication back in November:
https://jhostert.de/assets/pdf/papers/villani2024trees.pdf
https://perso.crans.org/vanille/treebor/
Cheers,
Miguel
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 19:18 ` Linus Torvalds
@ 2025-02-22 20:00 ` Kent Overstreet
2025-02-22 20:54 ` H. Peter Anvin
2025-02-23 15:30 ` Ventura Jack
1 sibling, 1 reply; 194+ messages in thread
From: Kent Overstreet @ 2025-02-22 20:00 UTC (permalink / raw)
To: Linus Torvalds
Cc: Ventura Jack, Gary Guo, airlied, boqun.feng, david.laight.linux,
ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Sat, Feb 22, 2025 at 11:18:33AM -0800, Linus Torvalds wrote:
> On Sat, 22 Feb 2025 at 10:54, Kent Overstreet <kent.overstreet@linux.dev> wrote:
> >
> > If that work is successful it could lead to significant improvements in
> > code generation, since aliasing causes a lot of unnecessary spills and
> > reloads - VLIW could finally become practical.
>
> No.
>
> Compiler people think aliasing matters. It very seldom does. And VLIW
> will never become practical for entirely unrelated reasons (read: OoO
> is fundamentally superior to VLIW in general purpose computing).
OoO and VLIW are orthogonal, not exclusive, and we always want to go
wider, if we can. Separately, neverending gift that is Spectre should be
making everyone reconsider how reliant we've become on OoO.
We'll never get rid of OoO, I agree on that point. But I think it's
worth some thought experiments about how many branches actually need to
be there vs. how many are there because everyone's assumed "branches are
cheap! (so it's totally fine if the CPU sucks at the alternatives)" on
both the hardware and software side.
e.g. cmov historically sucked (and may still, I don't know), but a _lot_
of branches should just be dumb ALU ops. I wince at a lot of the
assembly I see gcc generate for e.g. short multiword integer
comparisons, there are a ton of places where it'll emit 3 or 5 branches
where 1 is all you need if we had better ALU primitives.
> Aliasing is one of those bug-bears where compiler people can make
> trivial code optimizations that look really impressive. So compiler
> people *love* having simplistic aliasing rules that don't require real
> analysis, because the real analysis is hard (not just expensive, but
> basically unsolvable).
I don't think crazy compiler experiments from crazy C people have much
relevance, here. I'm talking about if/when Rust is able to get this
right.
> The C standards body has been much too eager to embrace "undefined behavior".
Agree on C, but for the rest I think you're just failing to imagine what
we could have if everything wasn't tied to a language with
broken/missing semantics w.r.t. aliasing.
Yes, C will never get a memory model that gets rid of the spills and
reloads. But Rust just might. It's got the right model at the reference
level, we just need to see if they can push that down to raw pointers in
unsafe code.
But consider what the world would look like if Rust fixes aliasing and
we get a microarchitecture that's able to take advantage of it. Do a
microarchitecture that focuses some on ALU ops to get rid of as many
branches as possible (e.g. min/max, all your range checks that don't
trap), get rid of loads and spills from aliasing so you're primarily
running out of registers - and now you _do_ have enough instructions in
a basic block, with fixed latency, that you can schedule at compile time
to make VLIW worth it.
I don't think it's that big of a leap. Lack of cooperation between
hardware and compiler folks (and the fact that what the hardware people
wanted was impossible at the time) was what killed Itanium, so if you
fix those two things...
> The kernel basically turns all that off, as much as possible. Overflow
> isn't undefined in the kernel. Aliasing isn't undefined in the kernel.
> Things like that.
Yeah, the religion of undefined behaviour in C has been an absolute
nightmare.
It's not just the compiler folks though, that way of thinking has
infected entirely too many people people in kernel and userspace -
"performance is the holy grail and all that matters and thou shalt shave
every single damn instruction".
Where this really comes up for me is assertions, because we're not
giving great guidance there. It's always better to hit an assertion than
walk off into undefined behaviour la la land, but people see "thou shalt
not crash the kernel" as a reason not to use BUG_ON() when it _should_
just mean "always handle the error if you can't prove that it can't
happen".
> When 'integer overflow' means that you can _sometimes_ remove one
> single ALU operation in *some* loops, but the cost of it is that you
> potentially introduced some seriously subtle security bugs, I think we
> know it was the wrong thing to do.
And those branches just _do not matter_ in practice, since if one side
leads to a trap they're perfectly predicted and to a first approximation
we're always bottlenecked on memory.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 19:41 ` Miguel Ojeda
@ 2025-02-22 20:49 ` Kent Overstreet
2025-02-26 11:34 ` Ralf Jung
0 siblings, 1 reply; 194+ messages in thread
From: Kent Overstreet @ 2025-02-22 20:49 UTC (permalink / raw)
To: Miguel Ojeda
Cc: Ventura Jack, Gary Guo, torvalds, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
rust-for-linux, Ralf Jung
On Sat, Feb 22, 2025 at 08:41:52PM +0100, Miguel Ojeda wrote:
> On Sat, Feb 22, 2025 at 7:54 PM Kent Overstreet
> <kent.overstreet@linux.dev> wrote:
> >
> > I believe (Miguel was talking about this at one of the conferences,
> > maybe he'll chime in) that there was work in progress to solidify the
> > aliasing and ownership rules at the unsafe level, but it sounded like it
> > may have still been an area of research.
>
> Not sure what I said, but Cc'ing Ralf in case he has time and wants to
> share something on this (thanks in advance!).
Yeah, this looks like just the thing. At the conference you were talking
more about memory provenance in C, if memory serves there was cross
pollination going on between the C and Rust folks - did anything come of
the C side?
>
> From a quick look, Tree Borrows was submitted for publication back in November:
>
> https://jhostert.de/assets/pdf/papers/villani2024trees.pdf
> https://perso.crans.org/vanille/treebor/
That's it.
This looks fantastic, much further along than the last time I looked.
The only question I'm trying to answer is whether it's been pushed far
enough into llvm for the optimization opportunities to be realized - I'd
quite like to take a look at some generated code.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 20:00 ` Kent Overstreet
@ 2025-02-22 20:54 ` H. Peter Anvin
2025-02-22 21:22 ` Kent Overstreet
2025-02-22 21:22 ` Linus Torvalds
0 siblings, 2 replies; 194+ messages in thread
From: H. Peter Anvin @ 2025-02-22 20:54 UTC (permalink / raw)
To: Kent Overstreet, Linus Torvalds
Cc: Ventura Jack, Gary Guo, airlied, boqun.feng, david.laight.linux,
ej, gregkh, hch, ksummit, linux-kernel, miguel.ojeda.sandonis,
rust-for-linux
On February 22, 2025 12:00:04 PM PST, Kent Overstreet <kent.overstreet@linux.dev> wrote:
>On Sat, Feb 22, 2025 at 11:18:33AM -0800, Linus Torvalds wrote:
>> On Sat, 22 Feb 2025 at 10:54, Kent Overstreet <kent.overstreet@linux.dev> wrote:
>> >
>> > If that work is successful it could lead to significant improvements in
>> > code generation, since aliasing causes a lot of unnecessary spills and
>> > reloads - VLIW could finally become practical.
>>
>> No.
>>
>> Compiler people think aliasing matters. It very seldom does. And VLIW
>> will never become practical for entirely unrelated reasons (read: OoO
>> is fundamentally superior to VLIW in general purpose computing).
>
>OoO and VLIW are orthogonal, not exclusive, and we always want to go
>wider, if we can. Separately, neverending gift that is Spectre should be
>making everyone reconsider how reliant we've become on OoO.
>
>We'll never get rid of OoO, I agree on that point. But I think it's
>worth some thought experiments about how many branches actually need to
>be there vs. how many are there because everyone's assumed "branches are
>cheap! (so it's totally fine if the CPU sucks at the alternatives)" on
>both the hardware and software side.
>
>e.g. cmov historically sucked (and may still, I don't know), but a _lot_
>of branches should just be dumb ALU ops. I wince at a lot of the
>assembly I see gcc generate for e.g. short multiword integer
>comparisons, there are a ton of places where it'll emit 3 or 5 branches
>where 1 is all you need if we had better ALU primitives.
>
>> Aliasing is one of those bug-bears where compiler people can make
>> trivial code optimizations that look really impressive. So compiler
>> people *love* having simplistic aliasing rules that don't require real
>> analysis, because the real analysis is hard (not just expensive, but
>> basically unsolvable).
>
>I don't think crazy compiler experiments from crazy C people have much
>relevance, here. I'm talking about if/when Rust is able to get this
>right.
>
>> The C standards body has been much too eager to embrace "undefined behavior".
>
>Agree on C, but for the rest I think you're just failing to imagine what
>we could have if everything wasn't tied to a language with
>broken/missing semantics w.r.t. aliasing.
>
>Yes, C will never get a memory model that gets rid of the spills and
>reloads. But Rust just might. It's got the right model at the reference
>level, we just need to see if they can push that down to raw pointers in
>unsafe code.
>
>But consider what the world would look like if Rust fixes aliasing and
>we get a microarchitecture that's able to take advantage of it. Do a
>microarchitecture that focuses some on ALU ops to get rid of as many
>branches as possible (e.g. min/max, all your range checks that don't
>trap), get rid of loads and spills from aliasing so you're primarily
>running out of registers - and now you _do_ have enough instructions in
>a basic block, with fixed latency, that you can schedule at compile time
>to make VLIW worth it.
>
>I don't think it's that big of a leap. Lack of cooperation between
>hardware and compiler folks (and the fact that what the hardware people
>wanted was impossible at the time) was what killed Itanium, so if you
>fix those two things...
>
>> The kernel basically turns all that off, as much as possible. Overflow
>> isn't undefined in the kernel. Aliasing isn't undefined in the kernel.
>> Things like that.
>
>Yeah, the religion of undefined behaviour in C has been an absolute
>nightmare.
>
>It's not just the compiler folks though, that way of thinking has
>infected entirely too many people people in kernel and userspace -
>"performance is the holy grail and all that matters and thou shalt shave
>every single damn instruction".
>
>Where this really comes up for me is assertions, because we're not
>giving great guidance there. It's always better to hit an assertion than
>walk off into undefined behaviour la la land, but people see "thou shalt
>not crash the kernel" as a reason not to use BUG_ON() when it _should_
>just mean "always handle the error if you can't prove that it can't
>happen".
>
>> When 'integer overflow' means that you can _sometimes_ remove one
>> single ALU operation in *some* loops, but the cost of it is that you
>> potentially introduced some seriously subtle security bugs, I think we
>> know it was the wrong thing to do.
>
>And those branches just _do not matter_ in practice, since if one side
>leads to a trap they're perfectly predicted and to a first approximation
>we're always bottlenecked on memory.
>
VLIW and OoO might seem orthogonal, but they aren't – because they are trying to solve the same problem, combining them either means the OoO engine can't do a very good job because of false dependencies (if you are scheduling molecules) or you have to break them instructions down into atoms, at which point it is just a (often quite inefficient) RISC encoding. In short, VLIW *might* make sense when you are statically scheduling a known pipeline, but it is basically a dead end for evolution – so unless you can JIT your code for each new chip generation...
But OoO still is more powerful, because it can do *dynamic* scheduling. A cache miss doesn't necessarily mean that you have to stop the entire machine, for example.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 20:54 ` H. Peter Anvin
@ 2025-02-22 21:22 ` Kent Overstreet
2025-02-22 21:46 ` Linus Torvalds
` (2 more replies)
2025-02-22 21:22 ` Linus Torvalds
1 sibling, 3 replies; 194+ messages in thread
From: Kent Overstreet @ 2025-02-22 21:22 UTC (permalink / raw)
To: H. Peter Anvin
Cc: Linus Torvalds, Ventura Jack, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Sat, Feb 22, 2025 at 12:54:31PM -0800, H. Peter Anvin wrote:
> VLIW and OoO might seem orthogonal, but they aren't – because they are
> trying to solve the same problem, combining them either means the OoO
> engine can't do a very good job because of false dependencies (if you
> are scheduling molecules) or you have to break them instructions down
> into atoms, at which point it is just a (often quite inefficient) RISC
> encoding. In short, VLIW *might* make sense when you are statically
> scheduling a known pipeline, but it is basically a dead end for
> evolution – so unless you can JIT your code for each new chip
> generation...
JITing for each chip generation would be a part of any serious new VLIW
effort. It's plenty doable in the open source world and the gains are
too big to ignore.
> But OoO still is more powerful, because it can do *dynamic*
> scheduling. A cache miss doesn't necessarily mean that you have to
> stop the entire machine, for example.
Power hungry and prone to information leaks, though.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 20:54 ` H. Peter Anvin
2025-02-22 21:22 ` Kent Overstreet
@ 2025-02-22 21:22 ` Linus Torvalds
1 sibling, 0 replies; 194+ messages in thread
From: Linus Torvalds @ 2025-02-22 21:22 UTC (permalink / raw)
To: H. Peter Anvin
Cc: Kent Overstreet, Ventura Jack, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Sat, 22 Feb 2025 at 12:54, H. Peter Anvin <hpa@zytor.com> wrote:
>
> VLIW and OoO might seem orthogonal, but they aren't – because they are
> trying to solve the same problem, combining them either means the OoO
> engine can't do a very good job because of false dependencies (if you
> are scheduling molecules) or you have to break them instructions down
> into atoms, at which point it is just a (often quite inefficient) RISC
> encoding.
Exactly. Either you end up tracking things at bundle boundaries - and
screwing up your OoO - or you end up tracking things as individual
ops, and then all the VLIW advantages go away (but the disadvantages
remain).
The only reason to combine OoO and VLIW is because you started out
with a bad VLIW design (*cough*itanium*cough*) and it turned into a
huge commercial success (oh, not itanium after all, lol), and now you
need to improve performance while keeping backwards compatibility.
So at that point you make it OoO to make it viable, and the VLIW side
remains as a bad historical encoding / semantic footnote.
> In short, VLIW *might* make sense when you are statically
> scheduling a known pipeline, but it is basically a dead end for
> evolution – so unless you can JIT your code for each new chip
> generation...
.. which is how GPUs do it, of course. So in specialized environments,
VLIW works fine.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 21:22 ` Kent Overstreet
@ 2025-02-22 21:46 ` Linus Torvalds
2025-02-22 22:34 ` Kent Overstreet
2025-02-22 22:12 ` David Laight
2025-02-22 23:50 ` H. Peter Anvin
2 siblings, 1 reply; 194+ messages in thread
From: Linus Torvalds @ 2025-02-22 21:46 UTC (permalink / raw)
To: Kent Overstreet
Cc: H. Peter Anvin, Ventura Jack, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Sat, 22 Feb 2025 at 13:22, Kent Overstreet <kent.overstreet@linux.dev> wrote:
>
> Power hungry and prone to information leaks, though.
The power argument is bogus.
The fact is, high performance is <i>always</i> "inefficient". Anybody
who doesn't understand that doesn't understand reality.
And I very much say "reality". Because it has nothing to do with CPU
design, and everything to do with "that is how reality is".
Look at biology. Look at absolutely <i>any</i> other area of
technology. Are you a car nut? Performance cars are not efficient.
Efficiency comes at a very real cost in performance. It's basically a
fundamental rule of entropy, but if you want to call it anything else,
you can attribute it to me.
Being a high-performance warm-blooded mammal takes a lot of energy,
but only a complete nincompoop then takes that as a negative. You'd be
*ignorant* and stupid to make that argument.
But somehow when it comes to technology, people _do_ make that
argument, and other people take those clowns seriously. It boggles the
mind.
Being a snake is a _hell_ of a lot more "efficient". You might only
need to eat once a month. But you have to face the reality that that
particular form of efficiency comes at a very real cost, and saying
that being "cold-blooded" is more efficient than being a warm-blooded
mammal is in many ways a complete lie and is distorting the truth.
It's only more efficient within the narrow band where it works, and
only if you are willing to take the very real costs that come with it.
If you need performance in the general case, it's not at all more
efficient any more: it's dead.
Yes, good OoO takes power. But I claim - and history backs me up -
that it does so by outperforming the alternatives.
The people who try to claim anything else are deluded and wrong, and
are making arguments based on fever dreams and hopes and rose-tinted
glasses.
It wasn't all that long ago that the ARM people claimed that their
in-order cores were better because they were lower power and more
efficient. Guess what? When they needed higher performance, those
delusions stopped, and they don't make those stupid and ignorant
arguments any more. They still try to mumble about "little" cores, but
if you look at the undisputed industry leader in ARM cores (hint: it
starts with an 'A' and sounds like a fruit), even the "little" cores
are OoO.
The VLIW people have proclaimed the same efficiency advantages for
decades. I know. I was there (with Peter ;), and we tried. We were
very very wrong.
At some point you just have to face reality.
The vogue thing now is to talk about explicit parallelism, and just
taking lots of those lower-performance (but thus more "efficient" -
not really: they are just targeting a different performance envelope)
cores perform as well as OoO cores.
And that's _lovely_ if your load is actually that parallel and you
don't need a power-hungry cross-bar to make them all communicate very
closely.
So if you're a GPU - or, as we call them now: AI accelerators - you'd
be stupid to do anything else.
Don't believe the VLIW hype. It's literally the snake of the CPU
world: it can be great in particular niches, but it's not some "answer
to efficiency". Keep it in your DSP's, and make your GPU's use a
metric shit-load of them, but don't think that being good at one thing
makes you somehow the solution in the general purpose computing model.
It's not like VLIW hasn't been around for many decades. And there's a
reason you don't see it in GP CPUs.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 21:22 ` Kent Overstreet
2025-02-22 21:46 ` Linus Torvalds
@ 2025-02-22 22:12 ` David Laight
2025-02-22 22:46 ` Kent Overstreet
2025-02-22 23:50 ` H. Peter Anvin
2 siblings, 1 reply; 194+ messages in thread
From: David Laight @ 2025-02-22 22:12 UTC (permalink / raw)
To: Kent Overstreet
Cc: H. Peter Anvin, Linus Torvalds, Ventura Jack, Gary Guo, airlied,
boqun.feng, ej, gregkh, hch, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Sat, 22 Feb 2025 16:22:08 -0500
Kent Overstreet <kent.overstreet@linux.dev> wrote:
> On Sat, Feb 22, 2025 at 12:54:31PM -0800, H. Peter Anvin wrote:
> > VLIW and OoO might seem orthogonal, but they aren't – because they are
> > trying to solve the same problem, combining them either means the OoO
> > engine can't do a very good job because of false dependencies (if you
> > are scheduling molecules) or you have to break them instructions down
> > into atoms, at which point it is just a (often quite inefficient) RISC
> > encoding. In short, VLIW *might* make sense when you are statically
> > scheduling a known pipeline, but it is basically a dead end for
> > evolution – so unless you can JIT your code for each new chip
> > generation...
>
> JITing for each chip generation would be a part of any serious new VLIW
> effort. It's plenty doable in the open source world and the gains are
> too big to ignore.
Doesn't most code get 'dumbed down' to whatever 'normal' ABI compilers
can easily handle.
A few hot loops might get optimised, but most code won't be.
Of course AI/GPU code is going to spend a lot of time in some tight loops.
But no one is going to go through the TCP stack and optimise the source
so that a compiler can make a better job of it for 'this years' cpu.
For various reasons ended up writing a simple 32bit cpu last year (in VHDL for an fgpa).
The ALU is easy - just a big MUX.
The difficulty is feeding the result of one instruction into the next.
Normal code needs to do that all the time, you can't afford a stall
(never mind the 3 clocks writing to/from the register 'memory' would take).
In fact the ALU dependencies [1] ended up being slower than the instruction fetch
code, so I managed to take predicted and unconditional branches without a stall.
So no point having the 'branch delay slot' of sparc32.
[1] multiply was the issue, even with a pipeline stall if the result has needed.
In any case it only had to run at 62.5MHz (related to the PCIe speed).
Was definitely an interesting exercise.
David
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 21:46 ` Linus Torvalds
@ 2025-02-22 22:34 ` Kent Overstreet
2025-02-22 23:56 ` Jan Engelhardt
0 siblings, 1 reply; 194+ messages in thread
From: Kent Overstreet @ 2025-02-22 22:34 UTC (permalink / raw)
To: Linus Torvalds
Cc: H. Peter Anvin, Ventura Jack, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Sat, Feb 22, 2025 at 01:46:33PM -0800, Linus Torvalds wrote:
> On Sat, 22 Feb 2025 at 13:22, Kent Overstreet <kent.overstreet@linux.dev> wrote:
> >
> > Power hungry and prone to information leaks, though.
>
> The power argument is bogus.
>
> The fact is, high performance is <i>always</i> "inefficient". Anybody
> who doesn't understand that doesn't understand reality.
It depends entirely on what variable you're constrained on. When you're
trying to maximize power density, you probably will be inefficient
because that's where the easy tradeoffs are. E.g. switching from aerobic
respiration to anaerobic, or afterburners.
But if you're already maxxed out power density, then your limiting
factor is your ability to reject heat. High power electric moters aren't
inefficient for the simple reason that if they were, they'd melt. RC
helicopter motors hit power densities of 5-10 kw/kg, with only air
cooling, so either they're 95%+ efficient or they're a puddle of molten
copper.
CPUs are significatly more in the second category than the first - we're
capped on power in most applications and transistors aren't going to get
meaningfully more efficient barring something radical happening.
> The VLIW people have proclaimed the same efficiency advantages for
> decades. I know. I was there (with Peter ;), and we tried. We were
> very very wrong.
If we ever get a chance I want to hear stories :)
> The vogue thing now is to talk about explicit parallelism, and just
> taking lots of those lower-performance (but thus more "efficient" -
> not really: they are just targeting a different performance envelope)
> cores perform as well as OoO cores.
Those are not terribly interesting to me. Useful to some people, sure,
but any idiot can add more and more cores (and leave it to someone else
to deal with Amdahl's law). I actually do care about straight line
performance...
> It's not like VLIW hasn't been around for many decades. And there's a
> reason you don't see it in GP CPUs.
It's also been the case more than once in technology that ideas appeared
and were initially rejected, and it took decades for the other pieces to
come together to make them practical. Especially when those ideas were
complex when they were first come up with - Multics, functional
programming (or Algol 68 even bofer that).
That's especially the case when one area has been stagnet for awhile. We
were stuck on x86 for a long time, and now we've got ARM which still
isn't _that_ different from x86. But now it's getting easier to design
and fab new CPUs, and the software side of things has gotten way easier,
so I'm curious to see what's coming over the next 10-20 years.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 22:12 ` David Laight
@ 2025-02-22 22:46 ` Kent Overstreet
0 siblings, 0 replies; 194+ messages in thread
From: Kent Overstreet @ 2025-02-22 22:46 UTC (permalink / raw)
To: David Laight
Cc: H. Peter Anvin, Linus Torvalds, Ventura Jack, Gary Guo, airlied,
boqun.feng, ej, gregkh, hch, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Sat, Feb 22, 2025 at 10:12:48PM +0000, David Laight wrote:
> On Sat, 22 Feb 2025 16:22:08 -0500
> Kent Overstreet <kent.overstreet@linux.dev> wrote:
>
> > On Sat, Feb 22, 2025 at 12:54:31PM -0800, H. Peter Anvin wrote:
> > > VLIW and OoO might seem orthogonal, but they aren't – because they are
> > > trying to solve the same problem, combining them either means the OoO
> > > engine can't do a very good job because of false dependencies (if you
> > > are scheduling molecules) or you have to break them instructions down
> > > into atoms, at which point it is just a (often quite inefficient) RISC
> > > encoding. In short, VLIW *might* make sense when you are statically
> > > scheduling a known pipeline, but it is basically a dead end for
> > > evolution – so unless you can JIT your code for each new chip
> > > generation...
> >
> > JITing for each chip generation would be a part of any serious new VLIW
> > effort. It's plenty doable in the open source world and the gains are
> > too big to ignore.
>
> Doesn't most code get 'dumbed down' to whatever 'normal' ABI compilers
> can easily handle.
> A few hot loops might get optimised, but most code won't be.
> Of course AI/GPU code is going to spend a lot of time in some tight loops.
> But no one is going to go through the TCP stack and optimise the source
> so that a compiler can make a better job of it for 'this years' cpu.
We're not actually talking about the normal sort of JIT, nothing profile
guided and no dynamic recompilation - just specialization based on the
exact microarchitecture you're running on.
You'd probably do it by deferring the last stage of compilation and
plugging it into the dynamic linker with an on disk cache - so it can
work with the LLVM toolchain and all the languages that target it.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 21:22 ` Kent Overstreet
2025-02-22 21:46 ` Linus Torvalds
2025-02-22 22:12 ` David Laight
@ 2025-02-22 23:50 ` H. Peter Anvin
2025-02-23 0:06 ` Kent Overstreet
2 siblings, 1 reply; 194+ messages in thread
From: H. Peter Anvin @ 2025-02-22 23:50 UTC (permalink / raw)
To: Kent Overstreet
Cc: Linus Torvalds, Ventura Jack, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On February 22, 2025 1:22:08 PM PST, Kent Overstreet <kent.overstreet@linux.dev> wrote:
>On Sat, Feb 22, 2025 at 12:54:31PM -0800, H. Peter Anvin wrote:
>> VLIW and OoO might seem orthogonal, but they aren't – because they are
>> trying to solve the same problem, combining them either means the OoO
>> engine can't do a very good job because of false dependencies (if you
>> are scheduling molecules) or you have to break them instructions down
>> into atoms, at which point it is just a (often quite inefficient) RISC
>> encoding. In short, VLIW *might* make sense when you are statically
>> scheduling a known pipeline, but it is basically a dead end for
>> evolution – so unless you can JIT your code for each new chip
>> generation...
>
>JITing for each chip generation would be a part of any serious new VLIW
>effort. It's plenty doable in the open source world and the gains are
>too big to ignore.
>
>> But OoO still is more powerful, because it can do *dynamic*
>> scheduling. A cache miss doesn't necessarily mean that you have to
>> stop the entire machine, for example.
>
>Power hungry and prone to information leaks, though.
>
I think I know a thing or two about JITting for VLIW.. and so does someone else in this thread ;)
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 22:34 ` Kent Overstreet
@ 2025-02-22 23:56 ` Jan Engelhardt
0 siblings, 0 replies; 194+ messages in thread
From: Jan Engelhardt @ 2025-02-22 23:56 UTC (permalink / raw)
To: Kent Overstreet
Cc: Linus Torvalds, H. Peter Anvin, Ventura Jack, Gary Guo, airlied,
boqun.feng, david.laight.linux, gregkh, hch, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
On Saturday 2025-02-22 23:34, Kent Overstreet wrote:
>
>> The VLIW people have proclaimed the same efficiency advantages for
>> decades. I know. I was there (with Peter ;), and we tried. We were
>> very very wrong.
>
>If we ever get a chance I want to hear stories :)
The story is probably about Transmeta CPUs. The TM5x00 has some VLIW
design, and for "backwards compatibility" has microcode to translate
x86 asm into its internal representation (sounds like what every OOO
CPU having microops is doing these days).
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 23:50 ` H. Peter Anvin
@ 2025-02-23 0:06 ` Kent Overstreet
0 siblings, 0 replies; 194+ messages in thread
From: Kent Overstreet @ 2025-02-23 0:06 UTC (permalink / raw)
To: H. Peter Anvin
Cc: Linus Torvalds, Ventura Jack, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Sat, Feb 22, 2025 at 03:50:59PM -0800, H. Peter Anvin wrote:
> On February 22, 2025 1:22:08 PM PST, Kent Overstreet <kent.overstreet@linux.dev> wrote:
> >On Sat, Feb 22, 2025 at 12:54:31PM -0800, H. Peter Anvin wrote:
> >> VLIW and OoO might seem orthogonal, but they aren't – because they are
> >> trying to solve the same problem, combining them either means the OoO
> >> engine can't do a very good job because of false dependencies (if you
> >> are scheduling molecules) or you have to break them instructions down
> >> into atoms, at which point it is just a (often quite inefficient) RISC
> >> encoding. In short, VLIW *might* make sense when you are statically
> >> scheduling a known pipeline, but it is basically a dead end for
> >> evolution – so unless you can JIT your code for each new chip
> >> generation...
> >
> >JITing for each chip generation would be a part of any serious new VLIW
> >effort. It's plenty doable in the open source world and the gains are
> >too big to ignore.
> >
> >> But OoO still is more powerful, because it can do *dynamic*
> >> scheduling. A cache miss doesn't necessarily mean that you have to
> >> stop the entire machine, for example.
> >
> >Power hungry and prone to information leaks, though.
> >
>
> I think I know a thing or two about JITting for VLIW.. and so does someone else in this thread ;)
Yeah, you guys going to share? :)
The Transmeta experience does seem entirely relevant, but it's hard to
tell if you and Linus are down on it because of any particular insights
into VLIW, or because that was a bad time to be going up against Intel.
And the "unrestricted pointer aliasing" issues would've directly
affected you, recompiling x86 machine code, so if anyone's seen numbers
on that it's you guys.
But it was always known (at least by the Itanium guys) that for VLIW to
work it'd need help from the compiler guys, and when you're recompiling
machine code that's right out. But then you might've had some fun jit
tricks to make up for that...
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 19:18 ` Linus Torvalds
2025-02-22 20:00 ` Kent Overstreet
@ 2025-02-23 15:30 ` Ventura Jack
2025-02-23 16:28 ` David Laight
` (3 more replies)
1 sibling, 4 replies; 194+ messages in thread
From: Ventura Jack @ 2025-02-23 15:30 UTC (permalink / raw)
To: Linus Torvalds
Cc: Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
Just to be clear and avoid confusion, I would
like to clarify some aspects of aliasing.
In case that you do not already know about this,
I suspect that you may find it very valuable.
I am not an expert at Rust, so for any Rust experts
out there, please feel free to point out any errors
or mistakes that I make in the following.
The Rustonomicon is (as I gather) the semi-official
documentation site for unsafe Rust.
Aliasing in C and Rust:
C "strict aliasing":
- Is not a keyword.
- Based on "type compatibility".
- Is turned off by default in the kernel by using
a compiler flag.
C "restrict":
- Is a keyword, applied to pointers.
- Is opt-in to a kind of aliasing.
- Is seldom used in practice, since many find
it difficult to use correctly and avoid
undefined behavior.
Rust aliasing:
- Is not a keyword.
- Applies to certain pointer kinds in Rust, namely
Rust "references".
Rust pointer kinds:
https://doc.rust-lang.org/reference/types/pointer.html
- Aliasing in Rust is not opt-in or opt-out,
it is always on.
https://doc.rust-lang.org/nomicon/aliasing.html
- Rust has not defined its aliasing model.
https://doc.rust-lang.org/nomicon/references.html
"Unfortunately, Rust hasn't actually
defined its aliasing model.
While we wait for the Rust devs to specify
the semantics of their language, let's use
the next section to discuss what aliasing is
in general, and why it matters."
There is active experimental research on
defining the aliasing model, including tree borrows
and stacked borrows.
- The aliasing model not being defined makes
it harder to reason about and work with
unsafe Rust, and therefore harder to avoid
undefined behavior/memory safety bugs.
- Rust "references" are common and widespread.
- If the aliasing rules are broken, undefined
behavior and lack of memory safety can
happen.
- In safe Rust, if aliasing rules are broken,
depending on which types and functions
are used, a compile-time error or UB-safe runtime
error occurs. For instance, RefCell.borrow_mut()
can panic if used incorrectly. If all the unsafe Rust
code and any safe Rust code the unsafe Rust
code relies on is implemented correctly, there is
no risk of undefined behavior/memory safety bugs
when working in safe Rust.
With a few caveats that I ignore here, like type
system holes allowing UB in safe Rust, and no
stack overflow protection if #![no_std] is used.
Rust for Linux uses #![no_std].
- The correctness of unsafe Rust code can rely on
safe Rust code being correct.
https://doc.rust-lang.org/nomicon/working-with-unsafe.html
"Because it relies on invariants of a struct field,
this unsafe code does more than pollute a whole
function: it pollutes a whole module. Generally,
the only bullet-proof way to limit the scope of
unsafe code is at the module boundary with privacy."
- In unsafe Rust, it is the programmer's responsibility
to obey the aliasing rules, though the type system
can offer limited help.
- The aliasing rules in Rust are possibly as hard or
harder than for C "restrict", and it is not possible to
opt out of aliasing in Rust, which is cited by some
as one of the reasons for unsafe Rust being
harder than C.
- It is necessary to have some understanding of the
aliasing rules for Rust in order to work with
unsafe Rust in general.
- Many find unsafe Rust harder than C:
https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/
https://lucumr.pocoo.org/2022/1/30/unsafe-rust/
https://youtube.com/watch?v=DG-VLezRkYQ
Unsafe Rust being harder than C and C++ is a common
sentiment in the Rust community, possibly the large
majority view.
- Some Rust developers, instead of trying to understand
the aliasing rules, may try to rely on MIRI. MIRI is
similar to a sanitizer for C, with similar advantages and
disadvantages. MIRI uses both the stacked borrow
and the tree borrow experimental research models.
MIRI, like sanitizers, does not catch everything, though
MIRI has been used to find undefined behavior/memory
safety bugs in for instance the Rust standard library.
So if you do not wish to deal with aliasing rules, you
may need to avoid the pieces of code that contains unsafe
Rust.
Best, VJ.
On Sat, Feb 22, 2025 at 12:18 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> On Sat, 22 Feb 2025 at 10:54, Kent Overstreet <kent.overstreet@linux.dev> wrote:
> >
> > If that work is successful it could lead to significant improvements in
> > code generation, since aliasing causes a lot of unnecessary spills and
> > reloads - VLIW could finally become practical.
>
> No.
>
> Compiler people think aliasing matters. It very seldom does. And VLIW
> will never become practical for entirely unrelated reasons (read: OoO
> is fundamentally superior to VLIW in general purpose computing).
>
> Aliasing is one of those bug-bears where compiler people can make
> trivial code optimizations that look really impressive. So compiler
> people *love* having simplistic aliasing rules that don't require real
> analysis, because the real analysis is hard (not just expensive, but
> basically unsolvable).
>
> And they matter mainly on bad CPUs and HPC-style loads, or on trivial
> example code. And for vectorization.
>
> And the sane model for those was to just have the HPC people say what
> the aliasing rules were (ie the C "restrict" keyword), but because it
> turns out that nobody wants to use that, and because one of the main
> targets was HPC where there was a very clear type distinction between
> integer indexes and floating point arrays, some "clever" person
> thought "why don't we use that obvious distinction to say that things
> don't alias". Because then you didn't have to add "restrict" modifiers
> to your compiler benchmarks, you could just use the existing syntax
> ("double *").
>
> And so they made everything worse for everybody else, because it made
> C HPC code run as fast as the old Fortran code, and the people who
> cared about DGEMM and BLAS were happy. And since that was how you
> defined supercomputer speeds (before AI), that largely pointless
> benchmark was a BigDeal(tm).
>
> End result: if you actually care about HPC and vectorization, just use
> 'restrict'. If you want to make it better (because 'restrict'
> certainly isn't perfect either), extend on the concept. Don't make
> things worse for everybody else by introducing stupid language rules
> that are fundamentally based on "the compiler can generate code better
> by relying on undefined behavior".
>
> The C standards body has been much too eager to embrace "undefined behavior".
>
> In original C, it was almost entirely about either hardware
> implementation issues or about "you got your pointer arithetic wrong,
> and the source code is undefined, so the result is undefined".
> Together with some (very unfortunate) order of operations and sequence
> point issues.
>
> But instead of trying to tighten that up (which *has* happened: the
> sequence point rules _have_ actually become better!) and turning the
> language into a more reliable one by making for _fewer_ undefined or
> platform-defined things, many C language features have been about
> extending on the list of undefined behaviors.
>
> The kernel basically turns all that off, as much as possible. Overflow
> isn't undefined in the kernel. Aliasing isn't undefined in the kernel.
> Things like that.
>
> And making the rules stricter makes almost no difference for code
> generation in practice. Really. The arguments for the garbage that is
> integer overflow or 'strict aliasing' in C were always just wrong.
>
> When 'integer overflow' means that you can _sometimes_ remove one
> single ALU operation in *some* loops, but the cost of it is that you
> potentially introduced some seriously subtle security bugs, I think we
> know it was the wrong thing to do.
>
> Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-23 15:30 ` Ventura Jack
@ 2025-02-23 16:28 ` David Laight
2025-02-24 0:27 ` Gary Guo
` (2 subsequent siblings)
3 siblings, 0 replies; 194+ messages in thread
From: David Laight @ 2025-02-23 16:28 UTC (permalink / raw)
To: Ventura Jack
Cc: Linus Torvalds, Kent Overstreet, Gary Guo, airlied, boqun.feng,
ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Sun, 23 Feb 2025 08:30:06 -0700
Ventura Jack <venturajack85@gmail.com> wrote:
> Just to be clear and avoid confusion, I would
> like to clarify some aspects of aliasing.
> In case that you do not already know about this,
> I suspect that you may find it very valuable.
>
> I am not an expert at Rust, so for any Rust experts
> out there, please feel free to point out any errors
> or mistakes that I make in the following.
>
> The Rustonomicon is (as I gather) the semi-official
> documentation site for unsafe Rust.
>
> Aliasing in C and Rust:
>
> C "strict aliasing":
> - Is not a keyword.
> - Based on "type compatibility".
> - Is turned off by default in the kernel by using a compiler flag.
My understanding is that 'strict aliasing' means that the compiler can
assume that variables of different types do not occupy the same memory.
The exception is that all single byte accesses can alias any other
data (unless the compiler can prove otherwise [1]).
The kernel sets no-strict-aliasing to get the historic behaviour where
the compiler has to assume that any two memory accesses can overlap.
Consider an inlined memcpy() copying a structure containing (say) double.
If it uses char copies all is fine.
If it uses int copies the compiler can re-order the 'int' accesses w.r.t
the 'double' ones (and can entirely optimise away some writes).
This is just plain broken.
You also get the reverse problem trying to populate byte sized fields in
one structure from another, the accesses don't get interleaved because
the writes have to be assumed to be writing into the source structure.
I've tried using int:8 - doesn't help.
"restrict" might help, but I remember something about it not working
when a function is inlined - it is also the most stupid name ever.
[1] I have some code where there are two static arrays that get
indexed by the same value (they are separated by the linker).
If you do:
b = a->b;
the compiler assumes that a and b might alias each other.
OTOH take the 'hit' of the array multiply and do:
b = &static_b[a->b_index];
and it knows they are separate.
(In my case it might know that 'a' is also static data.)
But there is no way to tell the compiler that 'a' and 'b' don't overlap.
David
>
> C "restrict":
> - Is a keyword, applied to pointers.
> - Is opt-in to a kind of aliasing.
> - Is seldom used in practice, since many find
> it difficult to use correctly and avoid
> undefined behavior.
>
> Rust aliasing:
> - Is not a keyword.
> - Applies to certain pointer kinds in Rust, namely
> Rust "references".
> Rust pointer kinds:
> https://doc.rust-lang.org/reference/types/pointer.html
> - Aliasing in Rust is not opt-in or opt-out,
> it is always on.
> https://doc.rust-lang.org/nomicon/aliasing.html
> - Rust has not defined its aliasing model.
> https://doc.rust-lang.org/nomicon/references.html
> "Unfortunately, Rust hasn't actually
> defined its aliasing model.
> While we wait for the Rust devs to specify
> the semantics of their language, let's use
> the next section to discuss what aliasing is
> in general, and why it matters."
> There is active experimental research on
> defining the aliasing model, including tree borrows
> and stacked borrows.
> - The aliasing model not being defined makes
> it harder to reason about and work with
> unsafe Rust, and therefore harder to avoid
> undefined behavior/memory safety bugs.
> - Rust "references" are common and widespread.
> - If the aliasing rules are broken, undefined
> behavior and lack of memory safety can
> happen.
> - In safe Rust, if aliasing rules are broken,
> depending on which types and functions
> are used, a compile-time error or UB-safe runtime
> error occurs. For instance, RefCell.borrow_mut()
> can panic if used incorrectly. If all the unsafe Rust
> code and any safe Rust code the unsafe Rust
> code relies on is implemented correctly, there is
> no risk of undefined behavior/memory safety bugs
> when working in safe Rust.
>
> With a few caveats that I ignore here, like type
> system holes allowing UB in safe Rust, and no
> stack overflow protection if #![no_std] is used.
> Rust for Linux uses #![no_std].
> - The correctness of unsafe Rust code can rely on
> safe Rust code being correct.
> https://doc.rust-lang.org/nomicon/working-with-unsafe.html
> "Because it relies on invariants of a struct field,
> this unsafe code does more than pollute a whole
> function: it pollutes a whole module. Generally,
> the only bullet-proof way to limit the scope of
> unsafe code is at the module boundary with privacy."
> - In unsafe Rust, it is the programmer's responsibility
> to obey the aliasing rules, though the type system
> can offer limited help.
> - The aliasing rules in Rust are possibly as hard or
> harder than for C "restrict", and it is not possible to
> opt out of aliasing in Rust, which is cited by some
> as one of the reasons for unsafe Rust being
> harder than C.
> - It is necessary to have some understanding of the
> aliasing rules for Rust in order to work with
> unsafe Rust in general.
> - Many find unsafe Rust harder than C:
> https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/
> https://lucumr.pocoo.org/2022/1/30/unsafe-rust/
> https://youtube.com/watch?v=DG-VLezRkYQ
> Unsafe Rust being harder than C and C++ is a common
> sentiment in the Rust community, possibly the large
> majority view.
> - Some Rust developers, instead of trying to understand
> the aliasing rules, may try to rely on MIRI. MIRI is
> similar to a sanitizer for C, with similar advantages and
> disadvantages. MIRI uses both the stacked borrow
> and the tree borrow experimental research models.
> MIRI, like sanitizers, does not catch everything, though
> MIRI has been used to find undefined behavior/memory
> safety bugs in for instance the Rust standard library.
>
> So if you do not wish to deal with aliasing rules, you
> may need to avoid the pieces of code that contains unsafe
> Rust.
>
> Best, VJ.
>
> On Sat, Feb 22, 2025 at 12:18 PM Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
> >
> > On Sat, 22 Feb 2025 at 10:54, Kent Overstreet <kent.overstreet@linux.dev> wrote:
> > >
> > > If that work is successful it could lead to significant improvements in
> > > code generation, since aliasing causes a lot of unnecessary spills and
> > > reloads - VLIW could finally become practical.
> >
> > No.
> >
> > Compiler people think aliasing matters. It very seldom does. And VLIW
> > will never become practical for entirely unrelated reasons (read: OoO
> > is fundamentally superior to VLIW in general purpose computing).
> >
> > Aliasing is one of those bug-bears where compiler people can make
> > trivial code optimizations that look really impressive. So compiler
> > people *love* having simplistic aliasing rules that don't require real
> > analysis, because the real analysis is hard (not just expensive, but
> > basically unsolvable).
> >
> > And they matter mainly on bad CPUs and HPC-style loads, or on trivial
> > example code. And for vectorization.
> >
> > And the sane model for those was to just have the HPC people say what
> > the aliasing rules were (ie the C "restrict" keyword), but because it
> > turns out that nobody wants to use that, and because one of the main
> > targets was HPC where there was a very clear type distinction between
> > integer indexes and floating point arrays, some "clever" person
> > thought "why don't we use that obvious distinction to say that things
> > don't alias". Because then you didn't have to add "restrict" modifiers
> > to your compiler benchmarks, you could just use the existing syntax
> > ("double *").
> >
> > And so they made everything worse for everybody else, because it made
> > C HPC code run as fast as the old Fortran code, and the people who
> > cared about DGEMM and BLAS were happy. And since that was how you
> > defined supercomputer speeds (before AI), that largely pointless
> > benchmark was a BigDeal(tm).
> >
> > End result: if you actually care about HPC and vectorization, just use
> > 'restrict'. If you want to make it better (because 'restrict'
> > certainly isn't perfect either), extend on the concept. Don't make
> > things worse for everybody else by introducing stupid language rules
> > that are fundamentally based on "the compiler can generate code better
> > by relying on undefined behavior".
> >
> > The C standards body has been much too eager to embrace "undefined behavior".
> >
> > In original C, it was almost entirely about either hardware
> > implementation issues or about "you got your pointer arithetic wrong,
> > and the source code is undefined, so the result is undefined".
> > Together with some (very unfortunate) order of operations and sequence
> > point issues.
> >
> > But instead of trying to tighten that up (which *has* happened: the
> > sequence point rules _have_ actually become better!) and turning the
> > language into a more reliable one by making for _fewer_ undefined or
> > platform-defined things, many C language features have been about
> > extending on the list of undefined behaviors.
> >
> > The kernel basically turns all that off, as much as possible. Overflow
> > isn't undefined in the kernel. Aliasing isn't undefined in the kernel.
> > Things like that.
> >
> > And making the rules stricter makes almost no difference for code
> > generation in practice. Really. The arguments for the garbage that is
> > integer overflow or 'strict aliasing' in C were always just wrong.
> >
> > When 'integer overflow' means that you can _sometimes_ remove one
> > single ALU operation in *some* loops, but the cost of it is that you
> > potentially introduced some seriously subtle security bugs, I think we
> > know it was the wrong thing to do.
> >
> > Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-23 15:30 ` Ventura Jack
2025-02-23 16:28 ` David Laight
@ 2025-02-24 0:27 ` Gary Guo
2025-02-24 9:57 ` Ventura Jack
2025-02-24 12:58 ` Theodore Ts'o
2025-02-25 16:12 ` Alice Ryhl
3 siblings, 1 reply; 194+ messages in thread
From: Gary Guo @ 2025-02-24 0:27 UTC (permalink / raw)
To: Ventura Jack
Cc: Linus Torvalds, Kent Overstreet, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Sun, 23 Feb 2025 08:30:06 -0700
Ventura Jack <venturajack85@gmail.com> wrote:
> - In unsafe Rust, it is the programmer's responsibility
> to obey the aliasing rules, though the type system
> can offer limited help.
> - The aliasing rules in Rust are possibly as hard or
> harder than for C "restrict", and it is not possible to
> opt out of aliasing in Rust, which is cited by some
> as one of the reasons for unsafe Rust being
> harder than C.
The analogy is correct, you can more or less treat all Rust references
a `restrict` pointers. However it is possible to opt out, and it is
done at a per-type basis.
Rust provides `UnsafeCell` to make a immutable reference mutable (i.e.
"interior mutability"), and this makes `&UnsafeCell<T>` behaves like
`T*` in C.
There's another mechanism (currently under rework, though) that makes a
mutable reference behave like `T*` in C.
RfL provides a `Opaque` type that wraps these mechanisms so it
absolutely cancel out any assumptions that the compiler can make about
a pointer whatsoever. For extra peace of mind, this is used for all
data structure that we share with C.
This type granularity is very useful. It allows selective opt-out for
harder to reason stuff, while it allows the compiler (and programmers!)
to assume that, say, if you're dealing with an immutable sequence of
bytes, then calling an arbitrary function will not magically change
contents of it.
Best,
Gary
> - It is necessary to have some understanding of the
> aliasing rules for Rust in order to work with
> unsafe Rust in general.
> - Many find unsafe Rust harder than C:
> https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/
> https://lucumr.pocoo.org/2022/1/30/unsafe-rust/
> https://youtube.com/watch?v=DG-VLezRkYQ
> Unsafe Rust being harder than C and C++ is a common
> sentiment in the Rust community, possibly the large
> majority view.
> - Some Rust developers, instead of trying to understand
> the aliasing rules, may try to rely on MIRI. MIRI is
> similar to a sanitizer for C, with similar advantages and
> disadvantages. MIRI uses both the stacked borrow
> and the tree borrow experimental research models.
> MIRI, like sanitizers, does not catch everything, though
> MIRI has been used to find undefined behavior/memory
> safety bugs in for instance the Rust standard library.
>
> So if you do not wish to deal with aliasing rules, you
> may need to avoid the pieces of code that contains unsafe
> Rust.
>
> Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-24 0:27 ` Gary Guo
@ 2025-02-24 9:57 ` Ventura Jack
2025-02-24 10:31 ` Benno Lossin
0 siblings, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-24 9:57 UTC (permalink / raw)
To: Gary Guo
Cc: Linus Torvalds, Kent Overstreet, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Sun, Feb 23, 2025 at 5:27 PM Gary Guo <gary@garyguo.net> wrote:
>
> On Sun, 23 Feb 2025 08:30:06 -0700
> Ventura Jack <venturajack85@gmail.com> wrote:
>
> > - In unsafe Rust, it is the programmer's responsibility
> > to obey the aliasing rules, though the type system
> > can offer limited help.
> > - The aliasing rules in Rust are possibly as hard or
> > harder than for C "restrict", and it is not possible to
> > opt out of aliasing in Rust, which is cited by some
> > as one of the reasons for unsafe Rust being
> > harder than C.
>
> The analogy is correct, you can more or less treat all Rust references
> a `restrict` pointers. However it is possible to opt out, and it is
> done at a per-type basis.
>
> Rust provides `UnsafeCell` to make a immutable reference mutable (i.e.
> "interior mutability"), and this makes `&UnsafeCell<T>` behaves like
> `T*` in C.
>
> There's another mechanism (currently under rework, though) that makes a
> mutable reference behave like `T*` in C.
>
> RfL provides a `Opaque` type that wraps these mechanisms so it
> absolutely cancel out any assumptions that the compiler can make about
> a pointer whatsoever. For extra peace of mind, this is used for all
> data structure that we share with C.
>
> This type granularity is very useful. It allows selective opt-out for
> harder to reason stuff, while it allows the compiler (and programmers!)
> to assume that, say, if you're dealing with an immutable sequence of
> bytes, then calling an arbitrary function will not magically change
> contents of it.
>
> Best,
> Gary
In regards to `UnsafeCell`, I believe that you are correct in regards
to mutability. However, if I understand you correctly, and if I
am not mistaken, I believe that you are wrong about `UnsafeCell`
making it possible to opt-out of the aliasing rules. And thus that
`UnsafeCell` does not behave like `T*` in C.
Documentation for `UnsafeCell`:
https://doc.rust-lang.org/std/cell/struct.UnsafeCell.html
"Note that only the immutability guarantee for shared
references is affected by `UnsafeCell`. The uniqueness
guarantee for mutable references is unaffected. There is no
legal way to obtain aliasing `&mut`, not even with `UnsafeCell<T>`."
"Note that whilst mutating the contents of an `&UnsafeCell<T>`
(even while other `&UnsafeCell<T>` references alias the cell) is
ok (provided you enforce the above invariants some other way),
it is still undefined behavior to have multiple
`&mut UnsafeCell<T>` aliases."
The documentation for `UnsafeCell` is long, and also mentions
that the precise aliasing rules for Rust are somewhat in flux.
"The precise Rust aliasing rules are somewhat in flux, but the
main points are not contentious:"
In regards to the `Opaque` type, it looks a bit like a C++
"smart pointer" or wrapper type, if I am not mistaken.
Documentation and related links for `Opaque`:
https://rust.docs.kernel.org/kernel/types/struct.Opaque.html
https://rust.docs.kernel.org/src/kernel/types.rs.html#307-310
https://github.com/Rust-for-Linux/pinned-init
It uses `UnsafeCell`, Rust "pinning", and the Rust for Linux library
"pinned-init". "pinned-init" uses a number of experimental,
unstable and nightly features of Rust. Working with the library
implementation requires having a good understanding of unsafe
Rust and many advanced features of Rust.
`Opaque` looks interesting. Do you know if it will become a more
widely used abstraction outside the Linux kernel?
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-24 9:57 ` Ventura Jack
@ 2025-02-24 10:31 ` Benno Lossin
2025-02-24 12:21 ` Ventura Jack
0 siblings, 1 reply; 194+ messages in thread
From: Benno Lossin @ 2025-02-24 10:31 UTC (permalink / raw)
To: Ventura Jack, Gary Guo
Cc: Linus Torvalds, Kent Overstreet, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On 24.02.25 10:57, Ventura Jack wrote:
> On Sun, Feb 23, 2025 at 5:27 PM Gary Guo <gary@garyguo.net> wrote:
>>
>> On Sun, 23 Feb 2025 08:30:06 -0700
>> Ventura Jack <venturajack85@gmail.com> wrote:
>>
>>> - In unsafe Rust, it is the programmer's responsibility
>>> to obey the aliasing rules, though the type system
>>> can offer limited help.
>>> - The aliasing rules in Rust are possibly as hard or
>>> harder than for C "restrict", and it is not possible to
>>> opt out of aliasing in Rust, which is cited by some
>>> as one of the reasons for unsafe Rust being
>>> harder than C.
>>
>> The analogy is correct, you can more or less treat all Rust references
>> a `restrict` pointers. However it is possible to opt out, and it is
>> done at a per-type basis.
>>
>> Rust provides `UnsafeCell` to make a immutable reference mutable (i.e.
>> "interior mutability"), and this makes `&UnsafeCell<T>` behaves like
>> `T*` in C.
>>
>> There's another mechanism (currently under rework, though) that makes a
>> mutable reference behave like `T*` in C.
>>
>> RfL provides a `Opaque` type that wraps these mechanisms so it
>> absolutely cancel out any assumptions that the compiler can make about
>> a pointer whatsoever. For extra peace of mind, this is used for all
>> data structure that we share with C.
>>
>> This type granularity is very useful. It allows selective opt-out for
>> harder to reason stuff, while it allows the compiler (and programmers!)
>> to assume that, say, if you're dealing with an immutable sequence of
>> bytes, then calling an arbitrary function will not magically change
>> contents of it.
>>
>> Best,
>> Gary
>
> In regards to `UnsafeCell`, I believe that you are correct in regards
> to mutability. However, if I understand you correctly, and if I
> am not mistaken, I believe that you are wrong about `UnsafeCell`
> making it possible to opt-out of the aliasing rules. And thus that
> `UnsafeCell` does not behave like `T*` in C.
`UnsafeCell<T>` does not behave like `T*` in C, because it isn't a
pointer. Like Gary said, `&UnsafeCell<T>` behaves like `T*` in C, while
`&mut UnsafeCell<T>` does not. That is what you quote from the docs
below. (Those ampersands mark references in Rust, pointers that have
additional guarantees [1])
For disabling the uniqueness guarantee for `&mut`, we use an official
"hack" that the Rust language developers are working on replacing with
a better mechanism (this was also mentioned by Gary above).
[1]: https://doc.rust-lang.org/std/primitive.reference.html
> Documentation for `UnsafeCell`:
> https://doc.rust-lang.org/std/cell/struct.UnsafeCell.html
>
> "Note that only the immutability guarantee for shared
> references is affected by `UnsafeCell`. The uniqueness
> guarantee for mutable references is unaffected. There is no
> legal way to obtain aliasing `&mut`, not even with `UnsafeCell<T>`."
>
> "Note that whilst mutating the contents of an `&UnsafeCell<T>`
> (even while other `&UnsafeCell<T>` references alias the cell) is
> ok (provided you enforce the above invariants some other way),
> it is still undefined behavior to have multiple
> `&mut UnsafeCell<T>` aliases."
>
> The documentation for `UnsafeCell` is long, and also mentions
> that the precise aliasing rules for Rust are somewhat in flux.
>
> "The precise Rust aliasing rules are somewhat in flux, but the
> main points are not contentious:"
>
> In regards to the `Opaque` type, it looks a bit like a C++
> "smart pointer" or wrapper type, if I am not mistaken.
It is not a smart pointer, as it has nothing to do with allocating or
deallocating. But it is a wrapper type that just removes all aliasing
guarantees if it is placed behind a reference (be it immutable or
mutable).
> Documentation and related links for `Opaque`:
> https://rust.docs.kernel.org/kernel/types/struct.Opaque.html
> https://rust.docs.kernel.org/src/kernel/types.rs.html#307-310
> https://github.com/Rust-for-Linux/pinned-init
>
> It uses `UnsafeCell`, Rust "pinning", and the Rust for Linux library
> "pinned-init".
pinned-init is not specific to `Opaque` and not really relevant with
respect to discussing aliasing guarantees.
> "pinned-init" uses a number of experimental, unstable and nightly
> features of Rust.
This is wrong. It uses no unstable features when you look at the version
in-tree (at `rust/kernel/init.rs`). The user-space version uses a single
unstable feature: `allocator_api` for accessing the `AllocError` type
from the standard library. You can disable the `alloc` feature and use
it on a stable compiler as written in the readme.
> Working with the library implementation requires having a good
> understanding of unsafe Rust and many advanced features of Rust.
pinned-init was explicitly designed such that you *don't* have to write
unsafe code for initializing structures that require pinning from the
get-go (such as the kernel's mutex). Yes, at some point you need to use
`unsafe` (eg in the `Mutex::new` function), but that will only be
required in the abstraction.
I don't know which "advanced features of Rust" you are talking about,
since a user will only need to read the docs and then use one of the
`[try_][pin_]init!` macros to initialize their struct.
(If you have any suggestions for what to improve in the docs, please let
me know. Also if you think something isn't easy to understand also let
me know, then I might be able to improve it. Thanks!)
> `Opaque` looks interesting. Do you know if it will become a more
> widely used abstraction outside the Linux kernel?
Only in projects that do FFI with C/C++ (or other such languages).
Outside of that the `Opaque` type is rather useless, since it disables
normal guarantees and makes working with the inner type annoying.
---
Cheers,
Benno
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-24 10:31 ` Benno Lossin
@ 2025-02-24 12:21 ` Ventura Jack
2025-02-24 12:47 ` Benno Lossin
0 siblings, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-24 12:21 UTC (permalink / raw)
To: Benno Lossin
Cc: Gary Guo, Linus Torvalds, Kent Overstreet, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Mon, Feb 24, 2025 at 3:31 AM Benno Lossin <benno.lossin@proton.me> wrote:
>
> On 24.02.25 10:57, Ventura Jack wrote:
> >
> > In regards to `UnsafeCell`, I believe that you are correct in regards
> > to mutability. However, if I understand you correctly, and if I
> > am not mistaken, I believe that you are wrong about `UnsafeCell`
> > making it possible to opt-out of the aliasing rules. And thus that
> > `UnsafeCell` does not behave like `T*` in C.
>
> `UnsafeCell<T>` does not behave like `T*` in C, because it isn't a
> pointer. Like Gary said, `&UnsafeCell<T>` behaves like `T*` in C, while
> `&mut UnsafeCell<T>` does not. That is what you quote from the docs
> below. (Those ampersands mark references in Rust, pointers that have
> additional guarantees [1])
From what I can see in the documentation, `&UnsafeCell<T>` also does not
behave like `T*` in C. In C, especially if "strict aliasing" is turned
off in the
compiler, `T*` does not have aliasing requirements. You can have multiple
C `T*` pointers pointing to the same object, and mutate the same object.
The documentation for `UnsafeCell` conversely spends a lot of space
discussing invariants and aliasing requirements.
I do not understand why you claim:
"`&UnsafeCell<T>` behaves like `T*` in C,"
That statement is false as far as I can figure out, though I have taken it
out of context here. Is the argument in regards to mutability? But `T*` in C
allows mutability. If you looked at C++ instead of C, maybe a `const`
pointer would be closer in semantics and behavior.
> below. (Those ampersands mark references in Rust, pointers that have
> additional guarantees [1])
>
>[omitted]
>
> [1]: https://doc.rust-lang.org/std/primitive.reference.html
There is also https://doc.rust-lang.org/reference/types/pointer.html .
But, references must follow certain aliasing rules, and in unsafe Rust,
it is the programmer that has the burden of upholding those aliasing rules,
right?
> For disabling the uniqueness guarantee for `&mut`, we use an official
> "hack" that the Rust language developers are working on replacing with
> a better mechanism (this was also mentioned by Gary above).
Are you referring to `Opaque`?
> > Documentation and related links for `Opaque`:
> > https://rust.docs.kernel.org/kernel/types/struct.Opaque.html
> > https://rust.docs.kernel.org/src/kernel/types.rs.html#307-310
> > https://github.com/Rust-for-Linux/pinned-init
> >
> > It uses `UnsafeCell`, Rust "pinning", and the Rust for Linux library
> > "pinned-init".
>
> pinned-init is not specific to `Opaque` and not really relevant with
> respect to discussing aliasing guarantees.
Is `Opaque` really able to avoid aliasing requirements for users,
without internally using "pinned-init"/derivative or the pinning
feature used in its implementation?
> > "pinned-init" uses a number of experimental, unstable and nightly
> > features of Rust.
>
> This is wrong. It uses no unstable features when you look at the version
> in-tree (at `rust/kernel/init.rs`). The user-space version uses a single
> unstable feature: `allocator_api` for accessing the `AllocError` type
> from the standard library. You can disable the `alloc` feature and use
> it on a stable compiler as written in the readme.
Interesting, I did not realize that the Rust for Linux project uses
a fork or derivative of "pinned-init" in-tree, not "pinned-init" itself.
What I can read in the README.md:
https://github.com/Rust-for-Linux/pinned-init/tree/main
"Nightly Needed for alloc feature
This library requires the allocator_api unstable feature
when the alloc feature is enabled and thus this feature
can only be used with a nightly compiler. When enabling
the alloc feature, the user will be required to activate
allocator_api as well.
The feature is enabled by default, thus by default
pinned-init will require a nightly compiler. However, using
the crate on stable compilers is possible by disabling alloc.
In practice this will require the std feature, because stable
compilers have neither Box nor Arc in no-std mode."
Rust in Linux uses no_std, right? So Rust in Linux would not be
able to use the original "pinned_init" library as it currently is without
using currently nightly/unstable features, until the relevant feature(s)
is stabilized.
> > Working with the library implementation requires having a good
> > understanding of unsafe Rust and many advanced features of Rust.
>
> pinned-init was explicitly designed such that you *don't* have to write
> unsafe code for initializing structures that require pinning from the
> get-go (such as the kernel's mutex).
Sorry, I sought to convey that I was referring to the internal library
implementation, not the usage of the library.
For the library implementation, do you agree that a good
understanding of unsafe Rust and many advanced features
are required to work with the library implementation? Such as
pinning?
> > `Opaque` looks interesting. Do you know if it will become a more
> > widely used abstraction outside the Linux kernel?
>
> Only in projects that do FFI with C/C++ (or other such languages).
> Outside of that the `Opaque` type is rather useless, since it disables
> normal guarantees and makes working with the inner type annoying.
Interesting.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-24 12:21 ` Ventura Jack
@ 2025-02-24 12:47 ` Benno Lossin
2025-02-24 16:57 ` Ventura Jack
0 siblings, 1 reply; 194+ messages in thread
From: Benno Lossin @ 2025-02-24 12:47 UTC (permalink / raw)
To: Ventura Jack
Cc: Gary Guo, Linus Torvalds, Kent Overstreet, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On 24.02.25 13:21, Ventura Jack wrote:
> On Mon, Feb 24, 2025 at 3:31 AM Benno Lossin <benno.lossin@proton.me> wrote:
>>
>> On 24.02.25 10:57, Ventura Jack wrote:
>>>
>>> In regards to `UnsafeCell`, I believe that you are correct in regards
>>> to mutability. However, if I understand you correctly, and if I
>>> am not mistaken, I believe that you are wrong about `UnsafeCell`
>>> making it possible to opt-out of the aliasing rules. And thus that
>>> `UnsafeCell` does not behave like `T*` in C.
>>
>> `UnsafeCell<T>` does not behave like `T*` in C, because it isn't a
>> pointer. Like Gary said, `&UnsafeCell<T>` behaves like `T*` in C, while
>> `&mut UnsafeCell<T>` does not. That is what you quote from the docs
>> below. (Those ampersands mark references in Rust, pointers that have
>> additional guarantees [1])
>
> From what I can see in the documentation, `&UnsafeCell<T>` also does not
> behave like `T*` in C. In C, especially if "strict aliasing" is turned
> off in the
> compiler, `T*` does not have aliasing requirements. You can have multiple
> C `T*` pointers pointing to the same object, and mutate the same object.
This is true for `&UnsafeCell<T>`. You can have multiple of those and
mutate the same value via only shared references. Note that
`UnsafeCell<T>` is `!Sync`, so it cannot be shared across threads, so
all of those shared references have to be on the same thread. (there is
the `SyncUnsafeCell<T>` type that is `Sync`, so it does allow for
across-thread mutations, but that is much more of a footgun, since you
still have to synchronize the writes/reads)
> The documentation for `UnsafeCell` conversely spends a lot of space
> discussing invariants and aliasing requirements.
Yes, since normally in Rust, you can either have exactly one mutable
reference, or several shared references (which cannot be used to mutate
a value). `UnsafeCell<T>` is essentially a low-level primitive that can
only be used with `unsafe` to build for example a mutex.
> I do not understand why you claim:
>
> "`&UnsafeCell<T>` behaves like `T*` in C,"
>
> That statement is false as far as I can figure out, though I have taken it
> out of context here.
Not sure how you arrived at that conclusion, the following code is legal
and sound Rust:
let val = UnsafeCell::new(42);
let x = &val;
let y = &val;
unsafe {
*x.get() = 0;
*y.get() = 42;
*x.get() = 24;
}
You can't do this with `&mut i32`.
> Is the argument in regards to mutability? But `T*` in C
> allows mutability. If you looked at C++ instead of C, maybe a `const`
> pointer would be closer in semantics and behavior.
>
>> below. (Those ampersands mark references in Rust, pointers that have
>> additional guarantees [1])
>>
>> [omitted]
>>
>> [1]: https://doc.rust-lang.org/std/primitive.reference.html
>
> There is also https://doc.rust-lang.org/reference/types/pointer.html .
Yes that is the description of all primitive pointer types. Both
references and raw pointers.
> But, references must follow certain aliasing rules, and in unsafe Rust,
> it is the programmer that has the burden of upholding those aliasing rules,
> right?
Indeed.
>> For disabling the uniqueness guarantee for `&mut`, we use an official
>> "hack" that the Rust language developers are working on replacing with
>> a better mechanism (this was also mentioned by Gary above).
>
> Are you referring to `Opaque`?
I am referring to the hack used by `Opaque`, it is `!Unpin` which
results in `&mut Opaque<T>` not having the `noalias` attribute.
>>> Documentation and related links for `Opaque`:
>>> https://rust.docs.kernel.org/kernel/types/struct.Opaque.html
>>> https://rust.docs.kernel.org/src/kernel/types.rs.html#307-310
>>> https://github.com/Rust-for-Linux/pinned-init
>>>
>>> It uses `UnsafeCell`, Rust "pinning", and the Rust for Linux library
>>> "pinned-init".
>>
>> pinned-init is not specific to `Opaque` and not really relevant with
>> respect to discussing aliasing guarantees.
>
> Is `Opaque` really able to avoid aliasing requirements for users,
> without internally using "pinned-init"/derivative or the pinning
> feature used in its implementation?
Yes, you can write `Opaque<T>` without using pinned-init. The hack
described above uses `PhantomPinned` to make `Opaque<T>: !Unpin`.
>>> "pinned-init" uses a number of experimental, unstable and nightly
>>> features of Rust.
>>
>> This is wrong. It uses no unstable features when you look at the version
>> in-tree (at `rust/kernel/init.rs`). The user-space version uses a single
>> unstable feature: `allocator_api` for accessing the `AllocError` type
>> from the standard library. You can disable the `alloc` feature and use
>> it on a stable compiler as written in the readme.
>
> Interesting, I did not realize that the Rust for Linux project uses
> a fork or derivative of "pinned-init" in-tree, not "pinned-init" itself.
Yes, that is something that I am working on at the moment.
> What I can read in the README.md:
> https://github.com/Rust-for-Linux/pinned-init/tree/main
>
> "Nightly Needed for alloc feature
>
> This library requires the allocator_api unstable feature
> when the alloc feature is enabled and thus this feature
> can only be used with a nightly compiler. When enabling
> the alloc feature, the user will be required to activate
> allocator_api as well.
>
> The feature is enabled by default, thus by default
> pinned-init will require a nightly compiler. However, using
> the crate on stable compilers is possible by disabling alloc.
> In practice this will require the std feature, because stable
> compilers have neither Box nor Arc in no-std mode."
>
> Rust in Linux uses no_std, right? So Rust in Linux would not be
> able to use the original "pinned_init" library as it currently is without
> using currently nightly/unstable features, until the relevant feature(s)
> is stabilized.
Yes, Rust for Linux uses `#![no_std]` (and also has its own alloc), so
it an use the stable version of pinned-init. However, there are several
differences between the current in-tree version and the user-space
version. I am working on some patches that fix that.
>>> Working with the library implementation requires having a good
>>> understanding of unsafe Rust and many advanced features of Rust.
>>
>> pinned-init was explicitly designed such that you *don't* have to write
>> unsafe code for initializing structures that require pinning from the
>> get-go (such as the kernel's mutex).
>
> Sorry, I sought to convey that I was referring to the internal library
> implementation, not the usage of the library.
Ah I see.
> For the library implementation, do you agree that a good
> understanding of unsafe Rust and many advanced features
> are required to work with the library implementation? Such as
> pinning?
Yes I agree.
---
Cheers,
Benno
>>> `Opaque` looks interesting. Do you know if it will become a more
>>> widely used abstraction outside the Linux kernel?
>>
>> Only in projects that do FFI with C/C++ (or other such languages).
>> Outside of that the `Opaque` type is rather useless, since it disables
>> normal guarantees and makes working with the inner type annoying.
>
> Interesting.
>
> Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-23 15:30 ` Ventura Jack
2025-02-23 16:28 ` David Laight
2025-02-24 0:27 ` Gary Guo
@ 2025-02-24 12:58 ` Theodore Ts'o
2025-02-24 14:47 ` Miguel Ojeda
2025-02-24 15:43 ` Miguel Ojeda
2025-02-25 16:12 ` Alice Ryhl
3 siblings, 2 replies; 194+ messages in thread
From: Theodore Ts'o @ 2025-02-24 12:58 UTC (permalink / raw)
To: Ventura Jack
Cc: Linus Torvalds, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Sun, Feb 23, 2025 at 08:30:06AM -0700, Ventura Jack wrote:
> Rust aliasing:
> - Is not a keyword.
> - Applies to certain pointer kinds in Rust, namely
> Rust "references".
> Rust pointer kinds:
> https://doc.rust-lang.org/reference/types/pointer.html
> - Aliasing in Rust is not opt-in or opt-out,
> it is always on.
> https://doc.rust-lang.org/nomicon/aliasing.html
> - Rust has not defined its aliasing model.
> https://doc.rust-lang.org/nomicon/references.html
> "Unfortunately, Rust hasn't actually
> defined its aliasing model.
> While we wait for the Rust devs to specify
> the semantics of their language, let's use
> the next section to discuss what aliasing is
> in general, and why it matters."
Hmm, I wonder if this is the reason of the persistent hostility that I
keep hearing about in the Rust community against alternate
implementations of the Rust compiler, such as the one being developed
using the GCC backend. *Since* the aliasing model hasn't been
developed yet, potential alternate implementations might have
different semantics --- for example, I suspect a GCC-based backend
might *have* a way of opting out of aliasing, much like gcc and clang
has today, and this might cramp rustcc's future choices if the kernel
were to depend on it.
That being said, until Rust supports all of the platforms that the
Linux kernel does has, it means that certain key abstractions can not
be implemented in Rust --- unless we start using a GCC backend for
Rust, or if we were to eject certain platforms from our supported
list, such as m68k or PA-RISC....
- Ted
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-24 12:58 ` Theodore Ts'o
@ 2025-02-24 14:47 ` Miguel Ojeda
2025-02-24 14:54 ` Miguel Ojeda
2025-02-26 11:38 ` Ralf Jung
2025-02-24 15:43 ` Miguel Ojeda
1 sibling, 2 replies; 194+ messages in thread
From: Miguel Ojeda @ 2025-02-24 14:47 UTC (permalink / raw)
To: Theodore Ts'o
Cc: Ventura Jack, Linus Torvalds, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux, Ralf Jung
On Mon, Feb 24, 2025 at 1:58 PM Theodore Ts'o <tytso@mit.edu> wrote:
>
> Hmm, I wonder if this is the reason of the persistent hostility that I
> keep hearing about in the Rust community against alternate
> implementations of the Rust compiler, such as the one being developed
> using the GCC backend. *Since* the aliasing model hasn't been
I guess you are referring to gccrs, i.e. the new GCC frontend
developed within GCC (the other one, which is a backend,
rustc_codegen_gcc, is part of the Rust project, so no hostility there
I assume).
In any case, yes, there are some people out there that may not agree
with the benefits/costs of implementing a new frontend in, say, GCC.
But that does not imply everyone is hostile. In fact, as far as I
understand, both Rust and gccrs are working together, e.g. see this
recent blog post:
https://blog.rust-lang.org/2024/11/07/gccrs-an-alternative-compiler-for-rust.html
> developed yet, potential alternate implementations might have
> different semantics --- for example, I suspect a GCC-based backend
> might *have* a way of opting out of aliasing, much like gcc and clang
> has today, and this might cramp rustcc's future choices if the kernel
> were to depend on it.
The aliasing model is not fully defined, but you can still develop
unsafe code being conservative, i.e. avoiding to rely on details that
are not established yet and thus could end up being allowed or not.
In addition, the models being researched, like the new Tree Borrows
one I linked above, are developed with existing code in mind, i.e.
they are trying to find a model that does not break the patterns that
people actually want to write. For instance, in the paper they show
how they tested ~670k tests across ~30k crates for conformance to the
new model.
In any case, even if, say, gccrs were to provide a mode that changes
the rules, I doubt we would want to use it, for several reasons, chief
among them because we would want to still compile with `rustc`, but
also because we will probably want the performance, because some
kernel developers may want to share code between userspace and
kernelspace (e.g. for fs tools) and because we may want to eventually
reuse some third-party code (e.g. a compression library).
Cheers,
Miguel
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-24 14:47 ` Miguel Ojeda
@ 2025-02-24 14:54 ` Miguel Ojeda
2025-02-24 16:42 ` Philip Herron
2025-02-26 11:38 ` Ralf Jung
1 sibling, 1 reply; 194+ messages in thread
From: Miguel Ojeda @ 2025-02-24 14:54 UTC (permalink / raw)
To: Theodore Ts'o
Cc: Ventura Jack, Linus Torvalds, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux, Ralf Jung, Antoni Boucher,
Arthur Cohen, Philip Herron
On Mon, Feb 24, 2025 at 3:47 PM Miguel Ojeda
<miguel.ojeda.sandonis@gmail.com> wrote:
>
> On Mon, Feb 24, 2025 at 1:58 PM Theodore Ts'o <tytso@mit.edu> wrote:
> >
> > Hmm, I wonder if this is the reason of the persistent hostility that I
> > keep hearing about in the Rust community against alternate
> > implementations of the Rust compiler, such as the one being developed
> > using the GCC backend. *Since* the aliasing model hasn't been
>
> I guess you are referring to gccrs, i.e. the new GCC frontend
> developed within GCC (the other one, which is a backend,
> rustc_codegen_gcc, is part of the Rust project, so no hostility there
> I assume).
>
> In any case, yes, there are some people out there that may not agree
> with the benefits/costs of implementing a new frontend in, say, GCC.
> But that does not imply everyone is hostile. In fact, as far as I
> understand, both Rust and gccrs are working together, e.g. see this
> recent blog post:
>
> https://blog.rust-lang.org/2024/11/07/gccrs-an-alternative-compiler-for-rust.html
Cc'ing Antoni, Arthur and Philip, in case they want to add, clarify
and/or correct me.
Cheers,
Miguel
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-24 12:58 ` Theodore Ts'o
2025-02-24 14:47 ` Miguel Ojeda
@ 2025-02-24 15:43 ` Miguel Ojeda
2025-02-24 17:24 ` Kent Overstreet
1 sibling, 1 reply; 194+ messages in thread
From: Miguel Ojeda @ 2025-02-24 15:43 UTC (permalink / raw)
To: Theodore Ts'o
Cc: Ventura Jack, Linus Torvalds, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
On Mon, Feb 24, 2025 at 1:58 PM Theodore Ts'o <tytso@mit.edu> wrote:
>
> That being said, until Rust supports all of the platforms that the
> Linux kernel does has, it means that certain key abstractions can not
> be implemented in Rust --- unless we start using a GCC backend for
> Rust, or if we were to eject certain platforms from our supported
> list, such as m68k or PA-RISC....
By the way, the real constraint here is dropping C code that cannot be
replaced for all existing use cases. That, indeed, cannot happen.
But the "abstractions" (i.e. the Rust code that wraps C) themselves
can be implemented just fine, even if are only called by users under a
few architectures. That is what we are doing, after all.
Similarly, if the kernel were to allow alternative/parallel/duplicate
implementations of a core subsystem, then that would be technically
doable, since the key is not dropping the C code that users use today.
To be clear, I am not saying we do that, just trying to clarify that
the technical constraint is generally dropping C code that cannot be
replaced properly.
We also got the question about future subsystems a few times -- could
they be implemented in Rust without wrapping C? That would simplify
greatly some matters and reduce the amount of unsafe code. However, if
the code is supposed to be used by everybody, then that would make
some architectures second-class citizens, even if they do not have
users depending on that feature today, and thus it may be better to
wait until GCC gets to the right point before attempting something
like that.
That is my understanding, at least -- I hope that clarifies.
Cheers,
Miguel
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-24 14:54 ` Miguel Ojeda
@ 2025-02-24 16:42 ` Philip Herron
2025-02-25 15:55 ` Ventura Jack
0 siblings, 1 reply; 194+ messages in thread
From: Philip Herron @ 2025-02-24 16:42 UTC (permalink / raw)
To: Miguel Ojeda
Cc: Theodore Ts'o, Ventura Jack, Linus Torvalds, Kent Overstreet,
Gary Guo, airlied, boqun.feng, david.laight.linux, ej, gregkh,
hch, hpa, ksummit, linux-kernel, rust-for-linux, Ralf Jung,
Antoni Boucher, Arthur Cohen
On Mon, 24 Feb 2025 at 14:54, Miguel Ojeda
<miguel.ojeda.sandonis@gmail.com> wrote:
>
> On Mon, Feb 24, 2025 at 3:47 PM Miguel Ojeda
> <miguel.ojeda.sandonis@gmail.com> wrote:
> >
> > On Mon, Feb 24, 2025 at 1:58 PM Theodore Ts'o <tytso@mit.edu> wrote:
> > >
> > > Hmm, I wonder if this is the reason of the persistent hostility that I
> > > keep hearing about in the Rust community against alternate
> > > implementations of the Rust compiler, such as the one being developed
> > > using the GCC backend. *Since* the aliasing model hasn't been
> >
> > I guess you are referring to gccrs, i.e. the new GCC frontend
> > developed within GCC (the other one, which is a backend,
> > rustc_codegen_gcc, is part of the Rust project, so no hostility there
> > I assume).
> >
> > In any case, yes, there are some people out there that may not agree
> > with the benefits/costs of implementing a new frontend in, say, GCC.
> > But that does not imply everyone is hostile. In fact, as far as I
> > understand, both Rust and gccrs are working together, e.g. see this
> > recent blog post:
> >
> > https://blog.rust-lang.org/2024/11/07/gccrs-an-alternative-compiler-for-rust.html
>
> Cc'ing Antoni, Arthur and Philip, in case they want to add, clarify
> and/or correct me.
>
> Cheers,
> Miguel
Resending in plain text mode for the ML.
My 50 cents here is that gccrs is trying to follow rustc as a guide, and
there are a lot of assumptions in libcore about the compiler, such as lang
items, that we need to follow in order to compile Rust code. I don't have
objections to opt-out flags of some kind, so long as it's opt-out and people
know it will break things. But it's really not something I care about right
now. We wouldn't accept patches to do that at the moment because it would
just make it harder for us to get this right. It wouldn’t help us or Rust for
Linux—it would just add confusion.
As for hostility, yeah, it's been a pet peeve of mine because this is a
passion project for me. Ultimately, it doesn't matter—I want to get gccrs
out, and we are very lucky to be supported to work on this (Open Source
Security and Embecosm). Between code-gen-gcc, Rust for Linux, and gccrs, we
are all friends. We've all had a great time together—long may it continue!
Thanks
--Phil
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-24 12:47 ` Benno Lossin
@ 2025-02-24 16:57 ` Ventura Jack
2025-02-24 22:03 ` Benno Lossin
0 siblings, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-24 16:57 UTC (permalink / raw)
To: Benno Lossin
Cc: Gary Guo, Linus Torvalds, Kent Overstreet, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Mon, Feb 24, 2025 at 5:47 AM Benno Lossin <benno.lossin@proton.me> wrote:
>
> On 24.02.25 13:21, Ventura Jack wrote:
> >
> > From what I can see in the documentation, `&UnsafeCell<T>` also does not
> > behave like `T*` in C. In C, especially if "strict aliasing" is turned
> > off in the
> > compiler, `T*` does not have aliasing requirements. You can have multiple
> > C `T*` pointers pointing to the same object, and mutate the same object.
>
> This is true for `&UnsafeCell<T>`. You can have multiple of those and
> mutate the same value via only shared references. Note that
> `UnsafeCell<T>` is `!Sync`, so it cannot be shared across threads, so
> all of those shared references have to be on the same thread. (there is
> the `SyncUnsafeCell<T>` type that is `Sync`, so it does allow for
> across-thread mutations, but that is much more of a footgun, since you
> still have to synchronize the writes/reads)
>
> > The documentation for `UnsafeCell` conversely spends a lot of space
> > discussing invariants and aliasing requirements.
>
> Yes, since normally in Rust, you can either have exactly one mutable
> reference, or several shared references (which cannot be used to mutate
> a value). `UnsafeCell<T>` is essentially a low-level primitive that can
> only be used with `unsafe` to build for example a mutex.
>
> > I do not understand why you claim:
> >
> > "`&UnsafeCell<T>` behaves like `T*` in C,"
> >
> > That statement is false as far as I can figure out, though I have taken it
> > out of context here.
>
> Not sure how you arrived at that conclusion, the following code is legal
> and sound Rust:
>
> let val = UnsafeCell::new(42);
> let x = &val;
> let y = &val;
> unsafe {
> *x.get() = 0;
> *y.get() = 42;
> *x.get() = 24;
> }
>
> You can't do this with `&mut i32`.
I think I see what you mean. The specific Rust "const reference"
`&UnsafeCell<T>` sort of behaves like C `T*`. But you have to get a
Rust "mutable raw pointer" `*mut T` when working with it using
`UnsafeCell::get()`. And you have to be careful with lifetimes if you
do any casts or share it or certain other things. And to dereference a
Rust "mutable raw pointer", you must use unsafe Rust. And you have to
understand aliasing.
One example I tested against MIRI:
use std::cell::UnsafeCell;
fn main() {
let val: UnsafeCell<i32> = UnsafeCell::new(42);
let x: & UnsafeCell<i32> = &val;
let y: & UnsafeCell<i32> = &val;
unsafe {
// UB.
//let pz: & i32 = & *val.get();
// UB.
//let pz: &mut i32 = &mut *val.get();
// Okay.
//let pz: *const i32 = &raw const *val.get();
// Okay.
let pz: *mut i32 = &raw mut *val.get();
let px: *mut i32 = x.get();
let py: *mut i32 = y.get();
*px = 0;
*py += 42;
*px += 24;
println!("x, y, z: {}, {}, {}", *px, *py, *pz);
}
}
It makes sense that the Rust "raw pointers" `*const i32` and `*mut
i32` are fine here, and that taking Rust "references" `& i32` and
`&mut i32` causes UB, since Rust "references" have aliasing rules that
must be followed.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-24 15:43 ` Miguel Ojeda
@ 2025-02-24 17:24 ` Kent Overstreet
0 siblings, 0 replies; 194+ messages in thread
From: Kent Overstreet @ 2025-02-24 17:24 UTC (permalink / raw)
To: Miguel Ojeda
Cc: Theodore Ts'o, Ventura Jack, Linus Torvalds, Gary Guo,
airlied, boqun.feng, david.laight.linux, ej, gregkh, hch, hpa,
ksummit, linux-kernel, rust-for-linux
On Mon, Feb 24, 2025 at 04:43:46PM +0100, Miguel Ojeda wrote:
> We also got the question about future subsystems a few times -- could
> they be implemented in Rust without wrapping C? That would simplify
> greatly some matters and reduce the amount of unsafe code. However, if
> the code is supposed to be used by everybody, then that would make
> some architectures second-class citizens, even if they do not have
> users depending on that feature today, and thus it may be better to
> wait until GCC gets to the right point before attempting something
> like that.
If gccrs solves the archictecture issues, this would be nice - because
from what I've seen the FFI issues look easier and less error prone when
Rust is the one underneath.
There are some subtle gotchas w.r.t. lifetimes at FFI boundaries that
the compiler can't warn about - because that's where you translate to
raw untracked pointers.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-24 16:57 ` Ventura Jack
@ 2025-02-24 22:03 ` Benno Lossin
2025-02-24 23:04 ` Ventura Jack
0 siblings, 1 reply; 194+ messages in thread
From: Benno Lossin @ 2025-02-24 22:03 UTC (permalink / raw)
To: Ventura Jack
Cc: Gary Guo, Linus Torvalds, Kent Overstreet, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On 24.02.25 17:57, Ventura Jack wrote:
> On Mon, Feb 24, 2025 at 5:47 AM Benno Lossin <benno.lossin@proton.me> wrote:
>>
>> On 24.02.25 13:21, Ventura Jack wrote:
>>>
>>> From what I can see in the documentation, `&UnsafeCell<T>` also does not
>>> behave like `T*` in C. In C, especially if "strict aliasing" is turned
>>> off in the
>>> compiler, `T*` does not have aliasing requirements. You can have multiple
>>> C `T*` pointers pointing to the same object, and mutate the same object.
>>
>> This is true for `&UnsafeCell<T>`. You can have multiple of those and
>> mutate the same value via only shared references. Note that
>> `UnsafeCell<T>` is `!Sync`, so it cannot be shared across threads, so
>> all of those shared references have to be on the same thread. (there is
>> the `SyncUnsafeCell<T>` type that is `Sync`, so it does allow for
>> across-thread mutations, but that is much more of a footgun, since you
>> still have to synchronize the writes/reads)
>>
>>> The documentation for `UnsafeCell` conversely spends a lot of space
>>> discussing invariants and aliasing requirements.
>>
>> Yes, since normally in Rust, you can either have exactly one mutable
>> reference, or several shared references (which cannot be used to mutate
>> a value). `UnsafeCell<T>` is essentially a low-level primitive that can
>> only be used with `unsafe` to build for example a mutex.
>>
>>> I do not understand why you claim:
>>>
>>> "`&UnsafeCell<T>` behaves like `T*` in C,"
>>>
>>> That statement is false as far as I can figure out, though I have taken it
>>> out of context here.
>>
>> Not sure how you arrived at that conclusion, the following code is legal
>> and sound Rust:
>>
>> let val = UnsafeCell::new(42);
>> let x = &val;
>> let y = &val;
>> unsafe {
>> *x.get() = 0;
>> *y.get() = 42;
>> *x.get() = 24;
>> }
>>
>> You can't do this with `&mut i32`.
>
> I think I see what you mean. The specific Rust "const reference"
> `&UnsafeCell<T>` sort of behaves like C `T*`. But you have to get a
> Rust "mutable raw pointer" `*mut T` when working with it using
> `UnsafeCell::get()`.
Exactly, you always have to use a raw pointer (as a reference would
immediately run into the aliasing issue), but while writing to the same
memory location, another `&UnsafeCell<T>` may still exist.
> And you have to be careful with lifetimes if you
> do any casts or share it or certain other things. And to dereference a
> Rust "mutable raw pointer", you must use unsafe Rust. And you have to
> understand aliasing.
Yes.
> One example I tested against MIRI:
>
> use std::cell::UnsafeCell;
>
> fn main() {
>
> let val: UnsafeCell<i32> = UnsafeCell::new(42);
> let x: & UnsafeCell<i32> = &val;
> let y: & UnsafeCell<i32> = &val;
>
> unsafe {
>
> // UB.
> //let pz: & i32 = & *val.get();
>
> // UB.
> //let pz: &mut i32 = &mut *val.get();
>
> // Okay.
> //let pz: *const i32 = &raw const *val.get();
>
> // Okay.
> let pz: *mut i32 = &raw mut *val.get();
>
> let px: *mut i32 = x.get();
> let py: *mut i32 = y.get();
>
> *px = 0;
> *py += 42;
> *px += 24;
>
> println!("x, y, z: {}, {}, {}", *px, *py, *pz);
> }
> }
>
> It makes sense that the Rust "raw pointers" `*const i32` and `*mut
> i32` are fine here, and that taking Rust "references" `& i32` and
> `&mut i32` causes UB, since Rust "references" have aliasing rules that
> must be followed.
So it depends on what exactly you do, since if you just uncomment one of
the "UB" lines, the variable never gets used and thus no actual UB
happens. But if you were to do this:
let x = UnsafeCell::new(42);
let y = unsafe { &mut *x.get() };
let z = unsafe { &*x.get() };
println!("{z}");
*y = 0;
println!("{z}");
Then you have UB, since the value that `z` points at changed (this is
obviously not allowed for shared references [^1]).
[^1]: Except of course values that lie behind `UnsafeCell` inside of the
value. For example:
struct Foo {
a: i32,
b: UnsafeCell<i32>,
}
when you have a `&Foo`, you can be sure that the value of `a`
stays the same, but the value of `b` might change during the
lifetime of that reference.
---
Cheers,
Benno
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-24 22:03 ` Benno Lossin
@ 2025-02-24 23:04 ` Ventura Jack
2025-02-25 22:38 ` Benno Lossin
0 siblings, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-24 23:04 UTC (permalink / raw)
To: Benno Lossin
Cc: Gary Guo, Linus Torvalds, Kent Overstreet, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Mon, Feb 24, 2025 at 3:03 PM Benno Lossin <benno.lossin@proton.me> wrote:
>
> On 24.02.25 17:57, Ventura Jack wrote:
> > One example I tested against MIRI:
> >
> > use std::cell::UnsafeCell;
> >
> > fn main() {
> >
> > let val: UnsafeCell<i32> = UnsafeCell::new(42);
> > let x: & UnsafeCell<i32> = &val;
> > let y: & UnsafeCell<i32> = &val;
> >
> > unsafe {
> >
> > // UB.
> > //let pz: & i32 = & *val.get();
> >
> > // UB.
> > //let pz: &mut i32 = &mut *val.get();
> >
> > // Okay.
> > //let pz: *const i32 = &raw const *val.get();
> >
> > // Okay.
> > let pz: *mut i32 = &raw mut *val.get();
> >
> > let px: *mut i32 = x.get();
> > let py: *mut i32 = y.get();
> >
> > *px = 0;
> > *py += 42;
> > *px += 24;
> >
> > println!("x, y, z: {}, {}, {}", *px, *py, *pz);
> > }
> > }
> >
> > It makes sense that the Rust "raw pointers" `*const i32` and `*mut
> > i32` are fine here, and that taking Rust "references" `& i32` and
> > `&mut i32` causes UB, since Rust "references" have aliasing rules that
> > must be followed.
>
> So it depends on what exactly you do, since if you just uncomment one of
> the "UB" lines, the variable never gets used and thus no actual UB
> happens. But if you were to do this:
I did actually test it against MIRI with only one line commented in at
a time, and the UB lines did give UB according to MIRI, I did not
explain that. It feels a lot like juggling with very sharp knives, but
I already knew that, because the Rust community generally does a great
job of warning people against unsafe. MIRI is very good, but it cannot
catch everything, so it cannot be relied upon in general. And MIRI
shares some of the advantages and disadvantages of sanitizers for C.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
[not found] <CAFJgqgRZ1w0ONj2wbcczx2=boXYHoLOd=-ke7tHGBAcifSfPUw@mail.gmail.com>
@ 2025-02-25 15:42 ` H. Peter Anvin
2025-02-25 16:45 ` Ventura Jack
0 siblings, 1 reply; 194+ messages in thread
From: H. Peter Anvin @ 2025-02-25 15:42 UTC (permalink / raw)
To: Ventura Jack, torvalds
Cc: airlied, boqun.feng, david.laight.linux, ej, gregkh, hch, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
On February 22, 2025 2:03:48 AM PST, Ventura Jack <venturajack85@gmail.com> wrote:
>>Gcc used to initialize it all, but as of gcc-15 it apparently says
>>"Oh, the standard allows this crazy behavior, so we'll do it by
>default".
>>
>>Yeah. People love to talk about "safe C", but compiler people have
>>actively tried to make C unsafer for decades. The C standards
>>committee has been complicit. I've ranted about the crazy C alias
>>rules before.
>
>Unsafe Rust actually has way stricter rules for aliasing than C. For you
>and others who don't like C's aliasing, it may be best to avoid unsafe Rust.
From what I was reading in this tree, Rust doesn't actually have any rules at all?!
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-24 16:42 ` Philip Herron
@ 2025-02-25 15:55 ` Ventura Jack
2025-02-25 17:30 ` Arthur Cohen
0 siblings, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-25 15:55 UTC (permalink / raw)
To: Philip Herron
Cc: Miguel Ojeda, Theodore Ts'o, Linus Torvalds, Kent Overstreet,
Gary Guo, airlied, boqun.feng, david.laight.linux, ej, gregkh,
hch, hpa, ksummit, linux-kernel, rust-for-linux, Ralf Jung,
Antoni Boucher, Arthur Cohen
On Mon, Feb 24, 2025 at 9:42 AM Philip Herron
<herron.philip@googlemail.com> wrote:
> My 50 cents here is that gccrs is trying to follow rustc as a guide, and
> there are a lot of assumptions in libcore about the compiler, such as lang
> items, that we need to follow in order to compile Rust code. [Omitted]
>
> Thanks
>
> --Phil
Is this snippet from the Rust standard library an example of one
of the assumptions about the compiler that the Rust standard library
makes? The code explicitly assumes that LLVM is the backend of
the compiler.
https://github.com/rust-lang/rust/blob/master/library/core/src/ffi/va_list.rs#L292-L301
// FIXME: this should call `va_end`, but there's no clean way to
// guarantee that `drop` always gets inlined into its caller,
// so the `va_end` would get directly called from the same function as
// the corresponding `va_copy`. `man va_end` states that C
requires this,
// and LLVM basically follows the C semantics, so we need to make sure
// that `va_end` is always called from the same function as `va_copy`.
// For more details, see https://github.com/rust-lang/rust/pull/59625
// and https://llvm.org/docs/LangRef.html#llvm-va-end-intrinsic.
//
// This works for now, since `va_end` is a no-op on all
current LLVM targets.
How do you approach, or plan to approach, code like the above in gccrs?
Maybe make a fork of the Rust standard library that only replaces the
LLVM-dependent parts of the code? I do not know how widespread
LLVM-dependent code is in the Rust standard library, nor how
well-documented the dependence on LLVM typically is. In the above
case, it is well-documented.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-23 15:30 ` Ventura Jack
` (2 preceding siblings ...)
2025-02-24 12:58 ` Theodore Ts'o
@ 2025-02-25 16:12 ` Alice Ryhl
2025-02-25 17:21 ` Ventura Jack
2025-02-25 18:54 ` Linus Torvalds
3 siblings, 2 replies; 194+ messages in thread
From: Alice Ryhl @ 2025-02-25 16:12 UTC (permalink / raw)
To: Ventura Jack
Cc: Linus Torvalds, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Sun, Feb 23, 2025 at 4:30 PM Ventura Jack <venturajack85@gmail.com> wrote:
>
> Just to be clear and avoid confusion, I would
> like to clarify some aspects of aliasing.
> In case that you do not already know about this,
> I suspect that you may find it very valuable.
>
> I am not an expert at Rust, so for any Rust experts
> out there, please feel free to point out any errors
> or mistakes that I make in the following.
>
> The Rustonomicon is (as I gather) the semi-official
> documentation site for unsafe Rust.
>
> Aliasing in C and Rust:
>
> C "strict aliasing":
> - Is not a keyword.
> - Based on "type compatibility".
> - Is turned off by default in the kernel by using
> a compiler flag.
>
> C "restrict":
> - Is a keyword, applied to pointers.
> - Is opt-in to a kind of aliasing.
> - Is seldom used in practice, since many find
> it difficult to use correctly and avoid
> undefined behavior.
>
> Rust aliasing:
> - Is not a keyword.
> - Applies to certain pointer kinds in Rust, namely
> Rust "references".
> Rust pointer kinds:
> https://doc.rust-lang.org/reference/types/pointer.html
> - Aliasing in Rust is not opt-in or opt-out,
> it is always on.
> https://doc.rust-lang.org/nomicon/aliasing.html
> - Rust has not defined its aliasing model.
> https://doc.rust-lang.org/nomicon/references.html
> "Unfortunately, Rust hasn't actually
> defined its aliasing model.
> While we wait for the Rust devs to specify
> the semantics of their language, let's use
> the next section to discuss what aliasing is
> in general, and why it matters."
> There is active experimental research on
> defining the aliasing model, including tree borrows
> and stacked borrows.
> - The aliasing model not being defined makes
> it harder to reason about and work with
> unsafe Rust, and therefore harder to avoid
> undefined behavior/memory safety bugs.
I think all of this worrying about Rust not having defined its
aliasing model is way overblown. Ultimately, the status quo is that
each unsafe operation that has to do with aliasing falls into one of
three categories:
* This is definitely allowed.
* This is definitely UB.
* We don't know whether we want to allow this yet.
The full aliasing model that they want would eliminate the third
category. But for practical purposes you just stay within the first
subset and you will be happy.
Alice
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 15:42 ` H. Peter Anvin
@ 2025-02-25 16:45 ` Ventura Jack
0 siblings, 0 replies; 194+ messages in thread
From: Ventura Jack @ 2025-02-25 16:45 UTC (permalink / raw)
To: H. Peter Anvin
Cc: torvalds, airlied, boqun.feng, david.laight.linux, ej, gregkh,
hch, ksummit, linux-kernel, miguel.ojeda.sandonis, rust-for-linux
On Tue, Feb 25, 2025 at 8:42 AM H. Peter Anvin <hpa@zytor.com> wrote:
>
> On February 22, 2025 2:03:48 AM PST, Ventura Jack <venturajack85@gmail.com> wrote:
> >>Gcc used to initialize it all, but as of gcc-15 it apparently says
> >>"Oh, the standard allows this crazy behavior, so we'll do it by
> >default".
> >>
> >>Yeah. People love to talk about "safe C", but compiler people have
> >>actively tried to make C unsafer for decades. The C standards
> >>committee has been complicit. I've ranted about the crazy C alias
> >>rules before.
> >
> >Unsafe Rust actually has way stricter rules for aliasing than C. For you
> >and others who don't like C's aliasing, it may be best to avoid unsafe Rust.
>
> From what I was reading in this tree, Rust doesn't actually have any rules at all?!
One way to describe it may be that Rust currently has no full
official rules for aliasing, and no full specification. There are
multiple experimental research models, including stacked
borrows and tree borrows, and work on trying to officially
figure out, model, and specify the rules. Currently, people
loosely and unofficially assume some rules, as I understand
it, often with conservative assumptions of what the rules
are or could be, as Miguel Ojeda discussed. I do not know
if there is any official partial specification of the aliasing
rules, apart from the general Rust documentation.
The unofficial aliasing rules that a Rust compiler
implementation uses, have to be followed when writing
unsafe Rust, otherwise you may get undefined behavior
and memory safety bugs. Some people have argued that
a lack of specification of the aliasing rules for Rust is one
reason why writing unsafe Rust is harder than C, among
other reasons.
A lot of Rust developers use MIRI, but MIRI cannot catch
everything. One version of MIRI explicitly mentions that it
uses stacked borrows as one rule set, and MIRI also
mentions that its stacked borrow rules are still experimental:
"= help: this indicates a potential bug in the program: it
performed an invalid operation, but the Stacked Borrows
rules it violated are still experimental
= help: see
https://github.com/rust-lang/unsafe-code-guidelines/blob/master/wip/stacked-borrows.md
for further information"
There is only one major compiler for Rust so far, rustc,
and rustc has LLVM as a primary backend. I do not know
the status of rustc's other backends. gccrs is another
compiler for Rust that is a work in progress, Philip
Herron (read also his email in the tree) and others are
working on gccrs as I understand it.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 16:12 ` Alice Ryhl
@ 2025-02-25 17:21 ` Ventura Jack
2025-02-25 17:36 ` Alice Ryhl
2025-02-25 18:54 ` Linus Torvalds
1 sibling, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-25 17:21 UTC (permalink / raw)
To: Alice Ryhl
Cc: Linus Torvalds, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Tue, Feb 25, 2025 at 9:12 AM Alice Ryhl <aliceryhl@google.com> wrote:
>
> On Sun, Feb 23, 2025 at 4:30 PM Ventura Jack <venturajack85@gmail.com> wrote:
> >
> > Just to be clear and avoid confusion, I would
> > like to clarify some aspects of aliasing.
> > In case that you do not already know about this,
> > I suspect that you may find it very valuable.
> >
> > I am not an expert at Rust, so for any Rust experts
> > out there, please feel free to point out any errors
> > or mistakes that I make in the following.
> >
> > The Rustonomicon is (as I gather) the semi-official
> > documentation site for unsafe Rust.
> >
> > Aliasing in C and Rust:
> >
> > C "strict aliasing":
> > - Is not a keyword.
> > - Based on "type compatibility".
> > - Is turned off by default in the kernel by using
> > a compiler flag.
> >
> > C "restrict":
> > - Is a keyword, applied to pointers.
> > - Is opt-in to a kind of aliasing.
> > - Is seldom used in practice, since many find
> > it difficult to use correctly and avoid
> > undefined behavior.
> >
> > Rust aliasing:
> > - Is not a keyword.
> > - Applies to certain pointer kinds in Rust, namely
> > Rust "references".
> > Rust pointer kinds:
> > https://doc.rust-lang.org/reference/types/pointer.html
> > - Aliasing in Rust is not opt-in or opt-out,
> > it is always on.
> > https://doc.rust-lang.org/nomicon/aliasing.html
> > - Rust has not defined its aliasing model.
> > https://doc.rust-lang.org/nomicon/references.html
> > "Unfortunately, Rust hasn't actually
> > defined its aliasing model.
> > While we wait for the Rust devs to specify
> > the semantics of their language, let's use
> > the next section to discuss what aliasing is
> > in general, and why it matters."
> > There is active experimental research on
> > defining the aliasing model, including tree borrows
> > and stacked borrows.
> > - The aliasing model not being defined makes
> > it harder to reason about and work with
> > unsafe Rust, and therefore harder to avoid
> > undefined behavior/memory safety bugs.
>
> I think all of this worrying about Rust not having defined its
> aliasing model is way overblown. Ultimately, the status quo is that
> each unsafe operation that has to do with aliasing falls into one of
> three categories:
>
> * This is definitely allowed.
> * This is definitely UB.
> * We don't know whether we want to allow this yet.
>
> The full aliasing model that they want would eliminate the third
> category. But for practical purposes you just stay within the first
> subset and you will be happy.
>
> Alice
Is there a specification for aliasing that defines your first bullet
point, that people can read and use, as a kind of partial
specification? Or maybe a subset of your first bullet point, as a
conservative partial specification? I am guessing that stacked
borrows or tree borrows might be useful for such a purpose.
But I do not know whether either of stacked borrows or tree
borrows have only false positives, only false negatives, or both.
For Rust documentation, I have heard of the official
documentation websites at
https://doc.rust-lang.org/book/
https://doc.rust-lang.org/nomicon/
And various blogs, forums and research papers.
If there is no such conservative partial specification for
aliasing yet, I wonder if such a conservative partial
specification could be made with relative ease, especially if
it is very conservative, at least in its first draft. Though there
is currently no specification of the Rust language and just
one major compiler.
I know that Java defines an additional conservative reasoning
model for its memory model that is easier to reason about
than the full memory model, namely happens-before
relationship. That conservative reasoning model is taught in
official Java documentation and in books.
On the topic of difficulty, even if there was a full specification,
it might still be difficult to work with aliasing in unsafe Rust.
For C "restrict", I assume that "restrict" is fully specified, and
C developers still typically avoid "restrict". And for unsafe
Rust, the Rust community helpfully encourages people to
avoid unsafe Rust when possible due to its difficulty.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 15:55 ` Ventura Jack
@ 2025-02-25 17:30 ` Arthur Cohen
0 siblings, 0 replies; 194+ messages in thread
From: Arthur Cohen @ 2025-02-25 17:30 UTC (permalink / raw)
To: Ventura Jack, Philip Herron
Cc: Miguel Ojeda, Theodore Ts'o, Linus Torvalds, Kent Overstreet,
Gary Guo, airlied, boqun.feng, david.laight.linux, ej, gregkh,
hch, hpa, ksummit, linux-kernel, rust-for-linux, Ralf Jung,
Antoni Boucher
Hi!
On 2/25/25 4:55 PM, Ventura Jack wrote:
> On Mon, Feb 24, 2025 at 9:42 AM Philip Herron
> <herron.philip@googlemail.com> wrote:
>> My 50 cents here is that gccrs is trying to follow rustc as a guide, and
>> there are a lot of assumptions in libcore about the compiler, such as lang
>> items, that we need to follow in order to compile Rust code. [Omitted]
>>
>> Thanks
>>
>> --Phil
>
> Is this snippet from the Rust standard library an example of one
> of the assumptions about the compiler that the Rust standard library
> makes? The code explicitly assumes that LLVM is the backend of
> the compiler.
>
> https://github.com/rust-lang/rust/blob/master/library/core/src/ffi/va_list.rs#L292-L301
>
> // FIXME: this should call `va_end`, but there's no clean way to
> // guarantee that `drop` always gets inlined into its caller,
> // so the `va_end` would get directly called from the same function as
> // the corresponding `va_copy`. `man va_end` states that C
> requires this,
> // and LLVM basically follows the C semantics, so we need to make sure
> // that `va_end` is always called from the same function as `va_copy`.
> // For more details, see https://github.com/rust-lang/rust/pull/59625
> // and https://llvm.org/docs/LangRef.html#llvm-va-end-intrinsic.
> //
> // This works for now, since `va_end` is a no-op on all
> current LLVM targets.
>
> How do you approach, or plan to approach, code like the above in gccrs?
> Maybe make a fork of the Rust standard library that only replaces the
> LLVM-dependent parts of the code? I do not know how widespread
> LLVM-dependent code is in the Rust standard library, nor how
> well-documented the dependence on LLVM typically is. In the above
> case, it is well-documented.
>
> Best, VJ.
Things like that can be special-cased somewhat easily without
necessarily forking the Rust standard library, which would make a lot of
things a lot more difficult for us and would also not align with our
objectives of not creating a rift in the Rust ecosystem.
The `VaListImpl` is a lang item in recent Rust versions as well as the
one we currently target, which means it is a special type that the
compiler has to know about, and that we can easily access its methods or
trait implementation and add special consideration for instances of this
type directly from the frontend. If we need to add a call to `va_end`
anytime one of these is created, then we'll do so.
We will take special care to ensure that the code produced by gccrs
matches the behavior of the code produced by rustc. To us, having the
same behavior as rustc does not just mean behaving the same way when
compiling code but also creating executables and libraries that behave
the same way. We have already started multiple efforts towards comparing
the behavior of rustc and gccrs and plan to continue working on this in
the future to ensure maximum compatibility.
Kindly,
Arthur
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 17:21 ` Ventura Jack
@ 2025-02-25 17:36 ` Alice Ryhl
2025-02-25 18:16 ` H. Peter Anvin
2025-02-26 12:36 ` Ventura Jack
0 siblings, 2 replies; 194+ messages in thread
From: Alice Ryhl @ 2025-02-25 17:36 UTC (permalink / raw)
To: Ventura Jack
Cc: Linus Torvalds, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Tue, Feb 25, 2025 at 6:21 PM Ventura Jack <venturajack85@gmail.com> wrote:
>
> On Tue, Feb 25, 2025 at 9:12 AM Alice Ryhl <aliceryhl@google.com> wrote:
> >
> > On Sun, Feb 23, 2025 at 4:30 PM Ventura Jack <venturajack85@gmail.com> wrote:
> > >
> > > Just to be clear and avoid confusion, I would
> > > like to clarify some aspects of aliasing.
> > > In case that you do not already know about this,
> > > I suspect that you may find it very valuable.
> > >
> > > I am not an expert at Rust, so for any Rust experts
> > > out there, please feel free to point out any errors
> > > or mistakes that I make in the following.
> > >
> > > The Rustonomicon is (as I gather) the semi-official
> > > documentation site for unsafe Rust.
> > >
> > > Aliasing in C and Rust:
> > >
> > > C "strict aliasing":
> > > - Is not a keyword.
> > > - Based on "type compatibility".
> > > - Is turned off by default in the kernel by using
> > > a compiler flag.
> > >
> > > C "restrict":
> > > - Is a keyword, applied to pointers.
> > > - Is opt-in to a kind of aliasing.
> > > - Is seldom used in practice, since many find
> > > it difficult to use correctly and avoid
> > > undefined behavior.
> > >
> > > Rust aliasing:
> > > - Is not a keyword.
> > > - Applies to certain pointer kinds in Rust, namely
> > > Rust "references".
> > > Rust pointer kinds:
> > > https://doc.rust-lang.org/reference/types/pointer.html
> > > - Aliasing in Rust is not opt-in or opt-out,
> > > it is always on.
> > > https://doc.rust-lang.org/nomicon/aliasing.html
> > > - Rust has not defined its aliasing model.
> > > https://doc.rust-lang.org/nomicon/references.html
> > > "Unfortunately, Rust hasn't actually
> > > defined its aliasing model.
> > > While we wait for the Rust devs to specify
> > > the semantics of their language, let's use
> > > the next section to discuss what aliasing is
> > > in general, and why it matters."
> > > There is active experimental research on
> > > defining the aliasing model, including tree borrows
> > > and stacked borrows.
> > > - The aliasing model not being defined makes
> > > it harder to reason about and work with
> > > unsafe Rust, and therefore harder to avoid
> > > undefined behavior/memory safety bugs.
> >
> > I think all of this worrying about Rust not having defined its
> > aliasing model is way overblown. Ultimately, the status quo is that
> > each unsafe operation that has to do with aliasing falls into one of
> > three categories:
> >
> > * This is definitely allowed.
> > * This is definitely UB.
> > * We don't know whether we want to allow this yet.
> >
> > The full aliasing model that they want would eliminate the third
> > category. But for practical purposes you just stay within the first
> > subset and you will be happy.
> >
> > Alice
>
> Is there a specification for aliasing that defines your first bullet
> point, that people can read and use, as a kind of partial
> specification? Or maybe a subset of your first bullet point, as a
> conservative partial specification? I am guessing that stacked
> borrows or tree borrows might be useful for such a purpose.
> But I do not know whether either of stacked borrows or tree
> borrows have only false positives, only false negatives, or both.
In general I would say read the standard library docs. But I don't
know of a single resource with everything in one place.
Stacked borrows and tree borrows are attempts at creating a full model
that puts everything in the two first categories. They are not
conservative partial specifications.
> For Rust documentation, I have heard of the official
> documentation websites at
>
> https://doc.rust-lang.org/book/
> https://doc.rust-lang.org/nomicon/
>
> And various blogs, forums and research papers.
>
> If there is no such conservative partial specification for
> aliasing yet, I wonder if such a conservative partial
> specification could be made with relative ease, especially if
> it is very conservative, at least in its first draft. Though there
> is currently no specification of the Rust language and just
> one major compiler.
>
> I know that Java defines an additional conservative reasoning
> model for its memory model that is easier to reason about
> than the full memory model, namely happens-before
> relationship. That conservative reasoning model is taught in
> official Java documentation and in books.
On the topic of conservative partial specifications, I like the blog
post "Tower of weakenings" from back when the strict provenance APIs
were started, which I will share together with a quote from it:
> Instead, we should have a tower of Memory Models, with the ones at the top being “what users should think about and try to write their code against”. As you descend the tower, the memory models become increasingly complex or vague but critically always more permissive than the ones above it. At the bottom of the tower is “whatever the compiler actually does” (and arguably “whatever the hardware actually does” in the basement, if you care about that).
> https://faultlore.com/blah/tower-of-weakenings/
You can also read the docs for the ptr module:
https://doc.rust-lang.org/stable/std/ptr/index.html
> On the topic of difficulty, even if there was a full specification,
> it might still be difficult to work with aliasing in unsafe Rust.
> For C "restrict", I assume that "restrict" is fully specified, and
> C developers still typically avoid "restrict". And for unsafe
> Rust, the Rust community helpfully encourages people to
> avoid unsafe Rust when possible due to its difficulty.
This I will not object to :)
Alice
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 17:36 ` Alice Ryhl
@ 2025-02-25 18:16 ` H. Peter Anvin
2025-02-25 20:21 ` Kent Overstreet
2025-02-26 12:36 ` Ventura Jack
1 sibling, 1 reply; 194+ messages in thread
From: H. Peter Anvin @ 2025-02-25 18:16 UTC (permalink / raw)
To: Alice Ryhl, Ventura Jack
Cc: Linus Torvalds, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On February 25, 2025 9:36:07 AM PST, Alice Ryhl <aliceryhl@google.com> wrote:
>On Tue, Feb 25, 2025 at 6:21 PM Ventura Jack <venturajack85@gmail.com> wrote:
>>
>> On Tue, Feb 25, 2025 at 9:12 AM Alice Ryhl <aliceryhl@google.com> wrote:
>> >
>> > On Sun, Feb 23, 2025 at 4:30 PM Ventura Jack <venturajack85@gmail.com> wrote:
>> > >
>> > > Just to be clear and avoid confusion, I would
>> > > like to clarify some aspects of aliasing.
>> > > In case that you do not already know about this,
>> > > I suspect that you may find it very valuable.
>> > >
>> > > I am not an expert at Rust, so for any Rust experts
>> > > out there, please feel free to point out any errors
>> > > or mistakes that I make in the following.
>> > >
>> > > The Rustonomicon is (as I gather) the semi-official
>> > > documentation site for unsafe Rust.
>> > >
>> > > Aliasing in C and Rust:
>> > >
>> > > C "strict aliasing":
>> > > - Is not a keyword.
>> > > - Based on "type compatibility".
>> > > - Is turned off by default in the kernel by using
>> > > a compiler flag.
>> > >
>> > > C "restrict":
>> > > - Is a keyword, applied to pointers.
>> > > - Is opt-in to a kind of aliasing.
>> > > - Is seldom used in practice, since many find
>> > > it difficult to use correctly and avoid
>> > > undefined behavior.
>> > >
>> > > Rust aliasing:
>> > > - Is not a keyword.
>> > > - Applies to certain pointer kinds in Rust, namely
>> > > Rust "references".
>> > > Rust pointer kinds:
>> > > https://doc.rust-lang.org/reference/types/pointer.html
>> > > - Aliasing in Rust is not opt-in or opt-out,
>> > > it is always on.
>> > > https://doc.rust-lang.org/nomicon/aliasing.html
>> > > - Rust has not defined its aliasing model.
>> > > https://doc.rust-lang.org/nomicon/references.html
>> > > "Unfortunately, Rust hasn't actually
>> > > defined its aliasing model.
>> > > While we wait for the Rust devs to specify
>> > > the semantics of their language, let's use
>> > > the next section to discuss what aliasing is
>> > > in general, and why it matters."
>> > > There is active experimental research on
>> > > defining the aliasing model, including tree borrows
>> > > and stacked borrows.
>> > > - The aliasing model not being defined makes
>> > > it harder to reason about and work with
>> > > unsafe Rust, and therefore harder to avoid
>> > > undefined behavior/memory safety bugs.
>> >
>> > I think all of this worrying about Rust not having defined its
>> > aliasing model is way overblown. Ultimately, the status quo is that
>> > each unsafe operation that has to do with aliasing falls into one of
>> > three categories:
>> >
>> > * This is definitely allowed.
>> > * This is definitely UB.
>> > * We don't know whether we want to allow this yet.
>> >
>> > The full aliasing model that they want would eliminate the third
>> > category. But for practical purposes you just stay within the first
>> > subset and you will be happy.
>> >
>> > Alice
>>
>> Is there a specification for aliasing that defines your first bullet
>> point, that people can read and use, as a kind of partial
>> specification? Or maybe a subset of your first bullet point, as a
>> conservative partial specification? I am guessing that stacked
>> borrows or tree borrows might be useful for such a purpose.
>> But I do not know whether either of stacked borrows or tree
>> borrows have only false positives, only false negatives, or both.
>
>In general I would say read the standard library docs. But I don't
>know of a single resource with everything in one place.
>
>Stacked borrows and tree borrows are attempts at creating a full model
>that puts everything in the two first categories. They are not
>conservative partial specifications.
>
>> For Rust documentation, I have heard of the official
>> documentation websites at
>>
>> https://doc.rust-lang.org/book/
>> https://doc.rust-lang.org/nomicon/
>>
>> And various blogs, forums and research papers.
>>
>> If there is no such conservative partial specification for
>> aliasing yet, I wonder if such a conservative partial
>> specification could be made with relative ease, especially if
>> it is very conservative, at least in its first draft. Though there
>> is currently no specification of the Rust language and just
>> one major compiler.
>>
>> I know that Java defines an additional conservative reasoning
>> model for its memory model that is easier to reason about
>> than the full memory model, namely happens-before
>> relationship. That conservative reasoning model is taught in
>> official Java documentation and in books.
>
>On the topic of conservative partial specifications, I like the blog
>post "Tower of weakenings" from back when the strict provenance APIs
>were started, which I will share together with a quote from it:
>
>> Instead, we should have a tower of Memory Models, with the ones at the top being “what users should think about and try to write their code against”. As you descend the tower, the memory models become increasingly complex or vague but critically always more permissive than the ones above it. At the bottom of the tower is “whatever the compiler actually does” (and arguably “whatever the hardware actually does” in the basement, if you care about that).
>> https://faultlore.com/blah/tower-of-weakenings/
>
>You can also read the docs for the ptr module:
>https://doc.rust-lang.org/stable/std/ptr/index.html
>
>> On the topic of difficulty, even if there was a full specification,
>> it might still be difficult to work with aliasing in unsafe Rust.
>> For C "restrict", I assume that "restrict" is fully specified, and
>> C developers still typically avoid "restrict". And for unsafe
>> Rust, the Rust community helpfully encourages people to
>> avoid unsafe Rust when possible due to its difficulty.
>
>This I will not object to :)
>
>Alice
>
>
I do have to say one thing about the standards process: it forces a real specification to be written, as in a proper interface contract, including the corner cases (which of course may be "undefined", but the idea is that even what is out of scope is clear.)
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 16:12 ` Alice Ryhl
2025-02-25 17:21 ` Ventura Jack
@ 2025-02-25 18:54 ` Linus Torvalds
2025-02-25 19:47 ` Kent Overstreet
2025-02-26 13:54 ` Ralf Jung
1 sibling, 2 replies; 194+ messages in thread
From: Linus Torvalds @ 2025-02-25 18:54 UTC (permalink / raw)
To: Alice Ryhl
Cc: Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Tue, 25 Feb 2025 at 08:12, Alice Ryhl <aliceryhl@google.com> wrote:
>
> I think all of this worrying about Rust not having defined its
> aliasing model is way overblown. Ultimately, the status quo is that
> each unsafe operation that has to do with aliasing falls into one of
> three categories:
>
> * This is definitely allowed.
> * This is definitely UB.
> * We don't know whether we want to allow this yet.
Side note: can I please ask that the Rust people avoid the "UD" model
as much as humanly possible?
In particular, if there is something that is undefined behavior - even
if it's in some "unsafe" mode, please please please make the rule be
that
(a) either the compiler ends up being constrained to doing things in
some "naive" code generation
or it's a clear UB situation, and
(b) the compiler will warn about it
IOW, *please* avoid the C model of "Oh, I'll generate code that
silently takes advantage of the fact that if I'm wrong, this case is
undefined".
And BTW, I think this is _particularly_ true for unsafe rust. Yes,
it's "unsafe", but at the same time, the unsafe parts are the fragile
parts and hopefully not _so_ hugely performance-critical that you need
to do wild optimizations.
So the cases I'm talking about is literally re-ordering accesses past
each other ("Hey, I don't know if these alias or not, but based on
some paper standard - rather than the source code - I will assume they
do not"), and things like integer overflow behavior ("Oh, maybe this
overflows and gives a different answer than the naive case that the
source code implies, but overflow is undefined so I can screw it up").
I'd just like to point to one case where the C standards body seems to
have actually at least consider improving on undefined behavior (so
credit where credit is due, since I often complain about the C
standards body):
https://www9.open-std.org/JTC1/SC22/WG14/www/docs/n3203.htm
where the original "this is undefined" came from the fact that
compilers were simple and restricting things like evaluation order
caused lots of problems. These days, a weak ordering definition causes
*many* more problems, and compilers are much smarter, and just saying
that the code has to act as if there was a strict ordering of
operations still allows almost all the normal optimizations in
practice.
This is just a general "please avoid the idiocies of the past". The
potential code generation improvements are not worth the pain.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 18:54 ` Linus Torvalds
@ 2025-02-25 19:47 ` Kent Overstreet
2025-02-25 20:25 ` Linus Torvalds
2025-02-25 22:42 ` Miguel Ojeda
2025-02-26 13:54 ` Ralf Jung
1 sibling, 2 replies; 194+ messages in thread
From: Kent Overstreet @ 2025-02-25 19:47 UTC (permalink / raw)
To: Linus Torvalds
Cc: Alice Ryhl, Ventura Jack, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Tue, Feb 25, 2025 at 10:54:46AM -0800, Linus Torvalds wrote:
> On Tue, 25 Feb 2025 at 08:12, Alice Ryhl <aliceryhl@google.com> wrote:
> >
> > I think all of this worrying about Rust not having defined its
> > aliasing model is way overblown. Ultimately, the status quo is that
> > each unsafe operation that has to do with aliasing falls into one of
> > three categories:
> >
> > * This is definitely allowed.
> > * This is definitely UB.
> > * We don't know whether we want to allow this yet.
>
> Side note: can I please ask that the Rust people avoid the "UD" model
> as much as humanly possible?
>
> In particular, if there is something that is undefined behavior - even
> if it's in some "unsafe" mode, please please please make the rule be
> that
>
> (a) either the compiler ends up being constrained to doing things in
> some "naive" code generation
>
> or it's a clear UB situation, and
>
> (b) the compiler will warn about it
>
> IOW, *please* avoid the C model of "Oh, I'll generate code that
> silently takes advantage of the fact that if I'm wrong, this case is
> undefined".
>
> And BTW, I think this is _particularly_ true for unsafe rust. Yes,
> it's "unsafe", but at the same time, the unsafe parts are the fragile
> parts and hopefully not _so_ hugely performance-critical that you need
> to do wild optimizations.
Well, the whole point of unsafe is for the parts where the compiler
can't in general check for UB, so there's no avoiding that.
And since unsafe is required for a lot of low level data structures (vec
and lists), even though the amount of code (in LOC) that uses unsafe
should be tiny, underneath everything it's all over the place so if it
disabled aliasing optimizations that actually would have a very real
impact on performance.
HOWEVER - the Rust folks don't have the same mindset as the C folks, so
I believe (not the expert here, Rust folks please elaborate..) in
practice a lot of things that would generate UB will be able to be
caught by the compiler. It won't be like -fstrict-aliasing in C, which
was an absolute shitshow.
(There was a real lack of communication between the compiler people and
everything else when that went down, trying to foist -fstrict-aliasing
without even an escape hatch defined at the time should've been a
shooting offence).
OTOH, the stacked borrows and tree borrows work is very much rooted in
"can we define a model that works for actual code", and Rust already has
the clearly defined escape hatches/demarcation points (e.g. UnsafeCell).
> So the cases I'm talking about is literally re-ordering accesses past
> each other ("Hey, I don't know if these alias or not, but based on
> some paper standard - rather than the source code - I will assume they
> do not"),
Yep, this is treeborrows. That gives us a model of "this reference
relates to this reference" so it's finally possible to do these
optimizations without handwavy bs (restrict...).
I think the one thing that's missing w.r.t. aliasing that Rust could
maybe use is a kasan-style sanitizer, I think with treeborrows and "now
we have an actual model for aliasing optimizations" it should be possible
to write such a sanitizer. But the amount of code doing complicated
enough stuff with unsafe should really be quite small, so - shouldn't be
urgently needed. Most unsafe will be in boring FFI stuff, and there all
aliasing optimizations get turned off at the C boundary.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 18:16 ` H. Peter Anvin
@ 2025-02-25 20:21 ` Kent Overstreet
2025-02-25 20:37 ` H. Peter Anvin
2025-02-26 13:03 ` Ventura Jack
0 siblings, 2 replies; 194+ messages in thread
From: Kent Overstreet @ 2025-02-25 20:21 UTC (permalink / raw)
To: H. Peter Anvin
Cc: Alice Ryhl, Ventura Jack, Linus Torvalds, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
On Tue, Feb 25, 2025 at 10:16:17AM -0800, H. Peter Anvin wrote:
> On February 25, 2025 9:36:07 AM PST, Alice Ryhl <aliceryhl@google.com> wrote:
> >On Tue, Feb 25, 2025 at 6:21 PM Ventura Jack <venturajack85@gmail.com> wrote:
> >> On the topic of difficulty, even if there was a full specification,
> >> it might still be difficult to work with aliasing in unsafe Rust.
> >> For C "restrict", I assume that "restrict" is fully specified, and
> >> C developers still typically avoid "restrict". And for unsafe
> >> Rust, the Rust community helpfully encourages people to
> >> avoid unsafe Rust when possible due to its difficulty.
> >
> >This I will not object to :)
> >
> >Alice
> >
> >
>
> I do have to say one thing about the standards process: it forces a
> real specification to be written, as in a proper interface contract,
> including the corner cases (which of course may be "undefined", but
> the idea is that even what is out of scope is clear.)
Did it, though?
The C standard didn't really define undefined behaviour in a
particularly useful way, and the compiler folks have always used it as a
shield to hide behind - "look! the standard says we can!", even though
that standard hasn't meaninfully changed it decades. It ossified things.
Whereas the Rust process seems to me to be more defined by actual
conversations with users and a focus on practicality and steady
improvement towards meaningful goals - i.e. concrete specifications.
There's been a lot of work towards those.
You don't need a standards body to have specifications.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 19:47 ` Kent Overstreet
@ 2025-02-25 20:25 ` Linus Torvalds
2025-02-25 20:55 ` Kent Overstreet
2025-02-25 22:45 ` Miguel Ojeda
2025-02-25 22:42 ` Miguel Ojeda
1 sibling, 2 replies; 194+ messages in thread
From: Linus Torvalds @ 2025-02-25 20:25 UTC (permalink / raw)
To: Kent Overstreet
Cc: Alice Ryhl, Ventura Jack, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Tue, 25 Feb 2025 at 11:48, Kent Overstreet <kent.overstreet@linux.dev> wrote:
>
> Well, the whole point of unsafe is for the parts where the compiler
> can't in general check for UB, so there's no avoiding that.
No, that's most definitely NOT the whole point of unsafe.
The point of unsafe is to bypass some rules, and write *SOURCE CODE*
that does intentionally questionable things.
The point of unsafe is *not* for the compiler to take source code that
questionable things, and then "optimize" it to do SOMETHING COMPLETELY
DIFFERENT.
Really. Anybody who thinks those two things are the same thing is
completely out to lunch. Kent, your argument is *garbage*.
Let me make a very clear example.
In unsafe rust code, you very much want to bypass limit checking,
because you might be implementing a memory allocator.
So if you are implementing the equivalent of malloc/free in unsafe
rust, you want to be able to do things like arbitrary pointer
arithmetic, because you are going to do very special things with the
heap layout, like hiding your allocation metadata based on the
allocation pointer, and then you want to do all the very crazy random
arithmetic on pointers that very much do *not* make sense in safe
code.
So unsafe rust is supposed to let the source code bypass those normal
"this is what you can do to a pointer" rules, and create random new
pointers that you then access.
But when you then access those pointers, unsafe Rust should *NOT* say
"oh, I'm now going to change the order of your accesses, because I
have decided - based on rules that have nothing to do with your source
code, and because you told me to go unsafe - that your unsafe pointer
A cannot alias with your unsafe pointer B".
See the difference between those two cases? In one case, the
*programmer* is doing something unsafe. And in the other, the
*compiler* is doing something unsafe.
One is intentional - and if the programmer screwed up, it's on the
programmer that did something wrong when he or she told the compiler
to not double-check him.
The other is a mistake. The same way the shit C aliasing rules (I
refuse to call them "strict", they are anything but) are a mistake.
So please: if a compiler cannot *prove* that things don't alias, don't
make up ad-hoc rules for "I'm going to assume these don't alias".
Just don't.
And no, "but it's unsafe" is *NOT* an excuse. Quite the opposite. When
you are in *safe* mode, you can assume that your language rules are
being followed, because safe code gets enforced.
In unsafe mode, the compiler should always just basically assume "I
don't understand what is going on, so I'm not going to _think_ I
understand what is going on".
Because *THAT* is the point of unsafe. The point of unsafe mode is
literally "the compiler doesn't understand what is going on".
The point is absolutely not for the compiler to then go all Spinal Tap
on the programmer, and turn up the unsafeness to 11.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 20:21 ` Kent Overstreet
@ 2025-02-25 20:37 ` H. Peter Anvin
2025-02-26 13:03 ` Ventura Jack
1 sibling, 0 replies; 194+ messages in thread
From: H. Peter Anvin @ 2025-02-25 20:37 UTC (permalink / raw)
To: Kent Overstreet
Cc: Alice Ryhl, Ventura Jack, Linus Torvalds, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
On February 25, 2025 12:21:06 PM PST, Kent Overstreet <kent.overstreet@linux.dev> wrote:
>On Tue, Feb 25, 2025 at 10:16:17AM -0800, H. Peter Anvin wrote:
>> On February 25, 2025 9:36:07 AM PST, Alice Ryhl <aliceryhl@google.com> wrote:
>> >On Tue, Feb 25, 2025 at 6:21 PM Ventura Jack <venturajack85@gmail.com> wrote:
>> >> On the topic of difficulty, even if there was a full specification,
>> >> it might still be difficult to work with aliasing in unsafe Rust.
>> >> For C "restrict", I assume that "restrict" is fully specified, and
>> >> C developers still typically avoid "restrict". And for unsafe
>> >> Rust, the Rust community helpfully encourages people to
>> >> avoid unsafe Rust when possible due to its difficulty.
>> >
>> >This I will not object to :)
>> >
>> >Alice
>> >
>> >
>>
>> I do have to say one thing about the standards process: it forces a
>> real specification to be written, as in a proper interface contract,
>> including the corner cases (which of course may be "undefined", but
>> the idea is that even what is out of scope is clear.)
>
>Did it, though?
>
>The C standard didn't really define undefined behaviour in a
>particularly useful way, and the compiler folks have always used it as a
>shield to hide behind - "look! the standard says we can!", even though
>that standard hasn't meaninfully changed it decades. It ossified things.
>
>Whereas the Rust process seems to me to be more defined by actual
>conversations with users and a focus on practicality and steady
>improvement towards meaningful goals - i.e. concrete specifications.
>There's been a lot of work towards those.
>
>You don't need a standards body to have specifications.
Whether a spec is "useful" is different from "ill defined."
I know where they came from – wanting to compete with Fortran 77 for HPC, being a very vocal community in the compiler area. F77 had very few ways to have aliasing at all, so it happened to make a lot of things like autovectorization relatively easy. Since vectorization inherently relies on hoisting loads above stores this really matters in that context.
Was C the right place to do it? That's a whole different question.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 20:25 ` Linus Torvalds
@ 2025-02-25 20:55 ` Kent Overstreet
2025-02-25 21:24 ` Linus Torvalds
2025-02-25 22:45 ` Miguel Ojeda
1 sibling, 1 reply; 194+ messages in thread
From: Kent Overstreet @ 2025-02-25 20:55 UTC (permalink / raw)
To: Linus Torvalds
Cc: Alice Ryhl, Ventura Jack, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Tue, Feb 25, 2025 at 12:25:13PM -0800, Linus Torvalds wrote:
> On Tue, 25 Feb 2025 at 11:48, Kent Overstreet <kent.overstreet@linux.dev> wrote:
> >
> > Well, the whole point of unsafe is for the parts where the compiler
> > can't in general check for UB, so there's no avoiding that.
>
> No, that's most definitely NOT the whole point of unsafe.
>
> The point of unsafe is to bypass some rules, and write *SOURCE CODE*
> that does intentionally questionable things.
"Intentionally questionable"?
No, no, no.
That's not a term that has any meaning here; code is either correct or
it's not. We use unsafe when we need to do things that can't be
expressed in the model the compiler checks against - i.e. the model
where we can prove for all inputs that UB is impossible.
That does _not_ mean that there is no specification for what is and
isn't allowed: it just means that there is no way to check for all
inputs, _at compile time_, whether code obeys the spec.
> So if you are implementing the equivalent of malloc/free in unsafe
> rust, you want to be able to do things like arbitrary pointer
> arithmetic, because you are going to do very special things with the
> heap layout, like hiding your allocation metadata based on the
> allocation pointer, and then you want to do all the very crazy random
> arithmetic on pointers that very much do *not* make sense in safe
> code.
Yes, and the borrow checker has to go out the window.
> So unsafe rust is supposed to let the source code bypass those normal
> "this is what you can do to a pointer" rules, and create random new
> pointers that you then access.
>
> But when you then access those pointers, unsafe Rust should *NOT* say
> "oh, I'm now going to change the order of your accesses, because I
> have decided - based on rules that have nothing to do with your source
> code, and because you told me to go unsafe - that your unsafe pointer
> A cannot alias with your unsafe pointer B".
Well, not without sane rules everyone can follow, which _we never had in
C_.
In C, there's simply no model for derived pointers - this is why e.g.
restrict is just laughable. Because it's never just one pointer that
doesn't alias, we're always doing pointer arithmetic and computing new
pointers, so you need to be able to talk about _which_ pointers can't
alias.
This is the work we've been talking about with stacked/tree borrows: now
we do have that model. We can do pointer arithmetic, compute a new
pointer from a previous pointer (e.g. to get to the malloc header), and
yes of _course_ that aliases with the previous pointer - and the
compiler can understand that, and there are rules (that compiler can
even check, I believe) for "I'm doing writes through mutable derived
pointer A', I can't do any through A while A' exist".
See?
The problem isn't that "pointer aliasing is fundamentally unsafe and
dangerous and therefore the compiler just has to stay away from it
completely" - the problem has just been the lack of a workable model.
Much like how we went from "multithreaded programming is crazy and
dangerous", to "memory barriers are something you're just expected to
know how to use correctly".
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 20:55 ` Kent Overstreet
@ 2025-02-25 21:24 ` Linus Torvalds
2025-02-25 23:34 ` Kent Overstreet
0 siblings, 1 reply; 194+ messages in thread
From: Linus Torvalds @ 2025-02-25 21:24 UTC (permalink / raw)
To: Kent Overstreet
Cc: Alice Ryhl, Ventura Jack, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Tue, 25 Feb 2025 at 12:55, Kent Overstreet <kent.overstreet@linux.dev> wrote:
>
> The problem isn't that "pointer aliasing is fundamentally unsafe and
> dangerous and therefore the compiler just has to stay away from it
> completely" - the problem has just been the lack of a workable model.
It's not entirely clear that a workable aliasing model exists outside
of "don't assume lack of aliasing".
Because THAT is the only truly workable model I know of. It's the one
we use in the kernel, and it works just fine.
For anything else, we only have clear indications that _unworkable_
models exist.
We know type aliasing is garbage.
We know "restrict" doesn't work very well: part of that is that it's
fairly cumbersome to use, but a large part of that is that a pointer
will be restricted in one context and not another, and it's just
confusing and hard to get right.
That, btw, tends to just generally indicate that any model where you
expect the programmer to tell you the aliasing rule is likely to be
unworkable. Not because it might not be workable from a *compiler*
standpoint (restrict certainly works on that level), but because it's
simply not a realistic model for most programmers.
What we do know works is hard rules based on provenance. All compilers
will happily do sane alias analysis based on "this is a variable that
I created, I know it cannot alias with anything else, because I didn't
expose the address to anything else".
I argued a few years ago that while "restrict" doesn't work in C, what
would have often worked is to instead try to attribute things with
their provenance. People already mark allocator functions, so that
compilers can see "oh, that's a new allocation, I haven't exposed the
result to anything yet, so I know it can't be aliasing anything else
in this context". That was a very natural extension from what C
compilers already do with local on-stack allocations etc.
So *provenance*-based aliasing works, but it only works in contexts
where you can see the provenance. Having some way to express
provenance across functions (and not *just* at allocation time) might
be a good model.
But in the absence of knowledge, and in the absence of
compiler-imposed rules (and "unsafe" is by *definition* that absence),
I think the only rule that works is "don't assume they don't alias".
Some things are simply undecidable. People should accept that. It's
obviously true in a theoretical setting (CS calls it "halting
problem", the rest of the academic world calls it "Gödel's
incompleteness theorem").
But it is even *MORE* true in practice, and I think your "the problem
has just been the lack of a workable model" is naive. It implies there
must be a solution to aliasing issues. And I claim that there is no
"must" there.
Just accept that things alias, and that you might sometimes get
slightly worse code generation. Nobody cares. Have you *looked* at the
kind of code that gets productized?
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-24 23:04 ` Ventura Jack
@ 2025-02-25 22:38 ` Benno Lossin
2025-02-25 22:47 ` Miguel Ojeda
0 siblings, 1 reply; 194+ messages in thread
From: Benno Lossin @ 2025-02-25 22:38 UTC (permalink / raw)
To: Ventura Jack
Cc: Gary Guo, Linus Torvalds, Kent Overstreet, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On 25.02.25 00:04, Ventura Jack wrote:
> On Mon, Feb 24, 2025 at 3:03 PM Benno Lossin <benno.lossin@proton.me> wrote:
>>
>> On 24.02.25 17:57, Ventura Jack wrote:
>>> One example I tested against MIRI:
>>>
>>> use std::cell::UnsafeCell;
>>>
>>> fn main() {
>>>
>>> let val: UnsafeCell<i32> = UnsafeCell::new(42);
>>> let x: & UnsafeCell<i32> = &val;
>>> let y: & UnsafeCell<i32> = &val;
>>>
>>> unsafe {
>>>
>>> // UB.
>>> //let pz: & i32 = & *val.get();
>>>
>>> // UB.
>>> //let pz: &mut i32 = &mut *val.get();
>>>
>>> // Okay.
>>> //let pz: *const i32 = &raw const *val.get();
>>>
>>> // Okay.
>>> let pz: *mut i32 = &raw mut *val.get();
>>>
>>> let px: *mut i32 = x.get();
>>> let py: *mut i32 = y.get();
>>>
>>> *px = 0;
>>> *py += 42;
>>> *px += 24;
>>>
>>> println!("x, y, z: {}, {}, {}", *px, *py, *pz);
>>> }
>>> }
>>>
>>> It makes sense that the Rust "raw pointers" `*const i32` and `*mut
>>> i32` are fine here, and that taking Rust "references" `& i32` and
>>> `&mut i32` causes UB, since Rust "references" have aliasing rules that
>>> must be followed.
>>
>> So it depends on what exactly you do, since if you just uncomment one of
>> the "UB" lines, the variable never gets used and thus no actual UB
>> happens. But if you were to do this:
>
> I did actually test it against MIRI with only one line commented in at
> a time, and the UB lines did give UB according to MIRI, I did not
> explain that.
I do not get UB when I comment out any of the commented lines. Can you
share the output of MIRI?
---
Cheers,
Benno
> It feels a lot like juggling with very sharp knives, but
> I already knew that, because the Rust community generally does a great
> job of warning people against unsafe. MIRI is very good, but it cannot
> catch everything, so it cannot be relied upon in general. And MIRI
> shares some of the advantages and disadvantages of sanitizers for C.
>
> Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 19:47 ` Kent Overstreet
2025-02-25 20:25 ` Linus Torvalds
@ 2025-02-25 22:42 ` Miguel Ojeda
2025-02-26 14:01 ` Ralf Jung
1 sibling, 1 reply; 194+ messages in thread
From: Miguel Ojeda @ 2025-02-25 22:42 UTC (permalink / raw)
To: Kent Overstreet
Cc: Linus Torvalds, Alice Ryhl, Ventura Jack, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
On Tue, Feb 25, 2025 at 8:48 PM Kent Overstreet
<kent.overstreet@linux.dev> wrote:
>
> I think the one thing that's missing w.r.t. aliasing that Rust could
> maybe use is a kasan-style sanitizer, I think with treeborrows and "now
> we have an actual model for aliasing optimizations" it should be possible
> to write such a sanitizer. But the amount of code doing complicated
> enough stuff with unsafe should really be quite small, so - shouldn't be
Miri implements those models and can check code for conformance. It
can be used easily in the Rust playground (top-right corner -> Tools
-> Miri):
https://play.rust-lang.org
However, it does not work when you involved C FFI, though, but you can
play there. For more advanced usage, e.g. testing a particular model
like Tree Borrows, I think you need to use it locally, since I am not
sure if flags can be passed yet.
I would like to get it, plus other tools, into Compiler Explorer, see
e.g. https://github.com/compiler-explorer/compiler-explorer/issues/2563.
Cheers,
Miguel
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 20:25 ` Linus Torvalds
2025-02-25 20:55 ` Kent Overstreet
@ 2025-02-25 22:45 ` Miguel Ojeda
2025-02-26 0:05 ` Miguel Ojeda
1 sibling, 1 reply; 194+ messages in thread
From: Miguel Ojeda @ 2025-02-25 22:45 UTC (permalink / raw)
To: Linus Torvalds
Cc: Kent Overstreet, Alice Ryhl, Ventura Jack, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux, Ralf Jung
On Tue, Feb 25, 2025 at 9:25 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> No, that's most definitely NOT the whole point of unsafe.
There are a few viewpoints here, which can be understood as correct in
different senses.
It is true that unsafe Rust is supposed to be used when you cannot
implement something in safe Rust (be it because the safe subset does
not support it or for performance reasons). In that sense, the point
of unsafe is indeed to expand on what you can implement.
It is also true that `unsafe` blocks in Rust are just a marker, and
that they don't change any particular rule -- they "only" enable a few
more operations (i.e the only "rule" they change is that you can call
those operations). Of course, with those extra operations one can then
implement things that normally one would not be able to.
So, for instance, the aliasing rules apply the same way within
`unsafe` blocks or outside them, and Rust currently passes LLVM the
information which does get used to optimize accordingly. In fact, Rust
generally passes so much aliasing information that it surfaced LLVM
bugs in the past that had to be fixed, since nobody else was
attempting that.
Now, the thing is that one can use pointer types that do not have
aliasing requirements, like raw pointers, especially when dealing with
`unsafe` things. And then one can wrap that into a nice API that
exposes safe (and unsafe) operations itself, e.g. an implementation of
`Vec` internally may use raw pointers, but expose a safe API.
As an example:
fn f(p: &mut i32, q: &mut i32) -> i32 {
*p = 42;
*q = 24;
*p
}
optimizes exactly the same way as:
fn f(p: &mut i32, q: &mut i32) -> i32 {
unsafe {
*p = 42;
*q = 24;
*p
}
}
Both of them are essentially `restrict`/`noalias`, and thus no load is
performed, with a constant 42 returned.
However, the following performs a load, because it uses raw pointers instead:
fn f(p: *mut i32, q: *mut i32) -> i32 {
unsafe {
*p = 42;
*q = 24;
*p
}
}
The version with raw pointers without `unsafe` does not compile,
because dereferencing raw pointers is one of those things that unsafe
Rust unblocks.
One can also define types for which `&mut T` will behave like a raw
point here, too. That is one of the things we do when we wrap C
structs that the C side has access to.
Cheers,
Miguel
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 22:38 ` Benno Lossin
@ 2025-02-25 22:47 ` Miguel Ojeda
2025-02-25 23:03 ` Benno Lossin
0 siblings, 1 reply; 194+ messages in thread
From: Miguel Ojeda @ 2025-02-25 22:47 UTC (permalink / raw)
To: Benno Lossin
Cc: Ventura Jack, Gary Guo, Linus Torvalds, Kent Overstreet, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
On Tue, Feb 25, 2025 at 11:38 PM Benno Lossin <benno.lossin@proton.me> wrote:
>
> I do not get UB when I comment out any of the commented lines. Can you
> share the output of MIRI?
I think he means when only having one of the `pz`s definitions out of
the 4, i.e. uncommenting the first and commenting the last one that is
live in the example.
Cheers,
Miguel
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 22:47 ` Miguel Ojeda
@ 2025-02-25 23:03 ` Benno Lossin
0 siblings, 0 replies; 194+ messages in thread
From: Benno Lossin @ 2025-02-25 23:03 UTC (permalink / raw)
To: Miguel Ojeda
Cc: Ventura Jack, Gary Guo, Linus Torvalds, Kent Overstreet, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
On 25.02.25 23:47, Miguel Ojeda wrote:
> On Tue, Feb 25, 2025 at 11:38 PM Benno Lossin <benno.lossin@proton.me> wrote:
>>
>> I do not get UB when I comment out any of the commented lines. Can you
>> share the output of MIRI?
>
> I think he means when only having one of the `pz`s definitions out of
> the 4, i.e. uncommenting the first and commenting the last one that is
> live in the example.
Ah of course :facepalm:, thanks for clarifying :)
---
Cheers,
Benno
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 21:24 ` Linus Torvalds
@ 2025-02-25 23:34 ` Kent Overstreet
2025-02-26 11:57 ` Gary Guo
2025-02-26 14:26 ` Ventura Jack
0 siblings, 2 replies; 194+ messages in thread
From: Kent Overstreet @ 2025-02-25 23:34 UTC (permalink / raw)
To: Linus Torvalds
Cc: Alice Ryhl, Ventura Jack, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Tue, Feb 25, 2025 at 01:24:42PM -0800, Linus Torvalds wrote:
> On Tue, 25 Feb 2025 at 12:55, Kent Overstreet <kent.overstreet@linux.dev> wrote:
> >
> > The problem isn't that "pointer aliasing is fundamentally unsafe and
> > dangerous and therefore the compiler just has to stay away from it
> > completely" - the problem has just been the lack of a workable model.
>
> It's not entirely clear that a workable aliasing model exists outside
> of "don't assume lack of aliasing".
>
> Because THAT is the only truly workable model I know of. It's the one
> we use in the kernel, and it works just fine.
>
> For anything else, we only have clear indications that _unworkable_
> models exist.
>
> We know type aliasing is garbage.
The C people thinking casting to a union was a workable escape hatch was
hilarious, heh. But now we've got mem::transmute(), i.e. that can (and
must be) annotated to the compiler.
> We know "restrict" doesn't work very well: part of that is that it's
> fairly cumbersome to use, but a large part of that is that a pointer
> will be restricted in one context and not another, and it's just
> confusing and hard to get right.
And it only works at all in the simplest of contexts...
> What we do know works is hard rules based on provenance. All compilers
> will happily do sane alias analysis based on "this is a variable that
> I created, I know it cannot alias with anything else, because I didn't
> expose the address to anything else".
Yep. That's what all this is based on.
> So *provenance*-based aliasing works, but it only works in contexts
> where you can see the provenance. Having some way to express
> provenance across functions (and not *just* at allocation time) might
> be a good model.
We have that! That's exactly what lifetime annotations are.
We don't have that for raw pointers, but I'm not sure that would ever be
needed since you use raw pointers in small and localized places, and a
lot of the places where aliasing comes up in C (e.g. memmove()) you
express differently in Rust, with slices and indices.
(You want to drop from references to raw pointers at the last possible
moment).
And besides, a lot of the places where aliasing comes up in C are
already gone in Rust, there's a lot of little things that help.
Algebraic data types are a big one, since a lot of the sketchy hackery
that goes on in C where aliasing is problematic is just working around
the lack of ADTs.
> But in the absence of knowledge, and in the absence of
> compiler-imposed rules (and "unsafe" is by *definition* that absence),
> I think the only rule that works is "don't assume they don't alias".
Well, for the vast body of Rust code that's been written that just
doesn't seem to be the case, and I think it's been pretty well
demonstrated that anything we can do in C, we can also do just as
effectively in Rust.
treeborrow is already merged into Miri - this stuff is pretty far along.
Now if you're imagining directly translating all the old grotty C code I
know you have in your head - yeah, that won't work. But we already knew
that.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 22:45 ` Miguel Ojeda
@ 2025-02-26 0:05 ` Miguel Ojeda
0 siblings, 0 replies; 194+ messages in thread
From: Miguel Ojeda @ 2025-02-26 0:05 UTC (permalink / raw)
To: Linus Torvalds
Cc: Kent Overstreet, Alice Ryhl, Ventura Jack, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux, Ralf Jung
On Tue, Feb 25, 2025 at 11:45 PM Miguel Ojeda
<miguel.ojeda.sandonis@gmail.com> wrote:
>
> Both of them are essentially `restrict`/`noalias`, and thus no load is
> performed, with a constant 42 returned.
I forgot to mention that while having so many `restrict`s around
sounds crazy, the reason why this can even remotely work in practice
without everything blowing up all the time is because, unlike
`restrict` in C, Rust will not allow one to e.g. call
f(&mut a, &mut a)
Complaining with:
error[E0499]: cannot borrow `a` as mutable more than once at a time
--> <source>:10:19
|
10 | f(&mut a, &mut a);
| - ------ ^^^^^^ second mutable borrow occurs here
| | |
| | first mutable borrow occurs here
| first borrow later used by call
Even then, when one is around unsafe code, one needs to be very
careful not to introduce UB by e.g. fabricating `&mut`s that actually
alias by mistake, because of course then it all breaks.
And the hard part is designing APIs (like the mentioned `Vec`) that
use unsafe code in the implementation but are able to promise to be
safe without allowing any possible caller to break the castle down
("soundness").
Cheers,
Miguel
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-22 20:49 ` Kent Overstreet
@ 2025-02-26 11:34 ` Ralf Jung
2025-02-26 14:57 ` Ventura Jack
0 siblings, 1 reply; 194+ messages in thread
From: Ralf Jung @ 2025-02-26 11:34 UTC (permalink / raw)
To: Kent Overstreet, Miguel Ojeda
Cc: Ventura Jack, Gary Guo, torvalds, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
rust-for-linux
Hi all,
(For context, I am the supervisor of the Tree Borrows project and the main
author of its predecessor, Stacked Borrows. I am also maintaining Miri, a Rust
UB detection tool that was mentioned elsewhere in this thread. I am happy to
answer any questions you might have about any of these projects. :)
>> Not sure what I said, but Cc'ing Ralf in case he has time and wants to
>> share something on this (thanks in advance!).
>
> Yeah, this looks like just the thing. At the conference you were talking
> more about memory provenance in C, if memory serves there was cross
> pollination going on between the C and Rust folks - did anything come of
> the C side?
On the C side, there is a provenance model called pnvi-ae-udi (yeah the name is
terrible, it's a long story ;), which you can read more about at
<http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2676.pdf>. My understanding is
that it will not become part of the standard though; I don't understand the
politics of WG14 well enough to say what exactly its status is. However, my
understanding is that that model would require some changes to both clang and
gcc for them to be compliant (and likely most other C compilers that do any kind
of non-trivial alias analysis); I am not sure what the plans/timeline are for
making that happen.
The Rust aliasing model
(https://doc.rust-lang.org/nightly/std/ptr/index.html#strict-provenance) is
designed to not require changes to the backend, except for fixing things that
are clear bugs that also affect C code
(https://github.com/llvm/llvm-project/issues/33896,
https://github.com/llvm/llvm-project/issues/34577).
I should also emphasize that defining the basic treatment of provenance is a
necessary, but not sufficient, condition for defining an aliasing model.
>> From a quick look, Tree Borrows was submitted for publication back in November:
>>
>> https://jhostert.de/assets/pdf/papers/villani2024trees.pdf
>> https://perso.crans.org/vanille/treebor/
>
> That's it.
>
> This looks fantastic, much further along than the last time I looked.
> The only question I'm trying to answer is whether it's been pushed far
> enough into llvm for the optimization opportunities to be realized - I'd
> quite like to take a look at some generated code.
I'm glad you like it. :)
Rust has informed LLVM about some basic aliasing facts since ~forever, and LLVM
is using those opportunities all over Rust code. Specifically, Rust has set
"noalias" (the LLVM equivalent of C "restrict") on all function parameters that
are references (specifically mutable reference without pinning, and shared
references without interior mutability). Stacked Borrows and Tree Borrows kind
of retroactively are justifying this by clarifying the rules that are imposed on
unsafe Rust, such that if unsafe Rust follows those rules, they also follow
LLVM's "noalias". Unfortunately, C "restrict" and LLVM "noalias" are not
specified very precisely, so we can only hope that this connection indeed holds.
Both Stacked Borrows and Tree Borrows go further than "noalias"; among other
differences, they impose aliasing requirements on references that stay within a
function. Most of those extra requirements are not yet used by the optimizer (it
is not clear how to inform LLVM about them, and Rust's own optimizer doesn't use
them either). Part of the reason for this is that without a precise model, it is
hard to be sure which optimizations are correct (in the sense that they do not
break correct unsafe code) -- and both Stacked Borrows and Tree Borrows are
still experiments, nothing has been officially decided yet.
Let me also reply to some statements made further up-thread by Ventura Jack (in
<https://lore.kernel.org/rust-for-linux/CAFJgqgSqMO724SQxinNqVGCGc7=ibUvVq-f7Qk1=S3A47Mr-ZQ@mail.gmail.com/>):
> - Aliasing in Rust is not opt-in or opt-out,
> it is always on.
> https://doc.rust-lang.org/nomicon/aliasing.html
This is true, but only for references. There are no aliasing requirements on raw
pointers. There *are* aliasing requirements if you mix references and raw
pointers to the same location, so if you want to do arbitrary aliasing you have
to make sure you use only raw pointers, no references. So unlike in C, you have
a way to opt-out entirely within standard Rust.
The ergonomics of working with raw pointers could certainly be improved. The
experience of kernel developers using Rust could help inform that effort. :)
Though currently the main issue here is that there's nobody actively pushing for
this.
> - Rust has not defined its aliasing model.
Correct. But then, neither has C. The C aliasing rules are described in English
prose that is prone to ambiguities and misintepretation. The strict aliasing
analysis implemented in GCC is not compatible with how most people read the
standard (https://bugs.llvm.org/show_bug.cgi?id=21725). There is no tool to
check whether code follows the C aliasing rules, and due to the aforementioned
ambiguities it would be hard to write such a tool and be sure it interprets the
standard the same way compilers do.
For Rust, we at least have two candidate models that are defined in full
mathematical rigor, and a tool that is widely used in the community, ensuring
the models match realistic use of Rust.
> - The aliasing rules in Rust are possibly as hard or
> harder than for C "restrict", and it is not possible to
> opt out of aliasing in Rust, which is cited by some
> as one of the reasons for unsafe Rust being
> harder than C.
That is not quite correct; it is possible to opt-out by using raw pointers.
> the aliasing rules, may try to rely on MIRI. MIRI is
> similar to a sanitizer for C, with similar advantages and
> disadvantages. MIRI uses both the stacked borrow
> and the tree borrow experimental research models.
> MIRI, like sanitizers, does not catch everything, though
> MIRI has been used to find undefined behavior/memory
> safety bugs in for instance the Rust standard library.
Unlike sanitizers, Miri can actually catch everything. However, since the exact
details of what is and is not UB in Rust are still being worked out, we cannot
yet make in good conscience a promise saying "Miri catches all UB". However, as
the Miri README states:
"To the best of our knowledge, all Undefined Behavior that has the potential to
affect a program's correctness is being detected by Miri (modulo bugs), but you
should consult the Reference for the official definition of Undefined Behavior.
Miri will be updated with the Rust compiler to protect against UB as it is
understood by the current compiler, but it makes no promises about future
versions of rustc."
See the Miri README (https://github.com/rust-lang/miri/?tab=readme-ov-file#miri)
for further details and caveats regarding non-determinism.
So, the situation for Rust here is a lot better than it is in C. Unfortunately,
running kernel code in Miri is not currently possible; figuring out how to
improve that could be an interesting collaboration.
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-24 14:47 ` Miguel Ojeda
2025-02-24 14:54 ` Miguel Ojeda
@ 2025-02-26 11:38 ` Ralf Jung
1 sibling, 0 replies; 194+ messages in thread
From: Ralf Jung @ 2025-02-26 11:38 UTC (permalink / raw)
To: Miguel Ojeda, Theodore Ts'o
Cc: Ventura Jack, Linus Torvalds, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
Hi all,
>> Hmm, I wonder if this is the reason of the persistent hostility that I
>> keep hearing about in the Rust community against alternate
>> implementations of the Rust compiler, such as the one being developed
>> using the GCC backend. *Since* the aliasing model hasn't been
>
> I guess you are referring to gccrs, i.e. the new GCC frontend
> developed within GCC (the other one, which is a backend,
> rustc_codegen_gcc, is part of the Rust project, so no hostility there
> I assume).
>
> In any case, yes, there are some people out there that may not agree
> with the benefits/costs of implementing a new frontend in, say, GCC.
> But that does not imply everyone is hostile. In fact, as far as I
> understand, both Rust and gccrs are working together, e.g. see this
> recent blog post:
>
> https://blog.rust-lang.org/2024/11/07/gccrs-an-alternative-compiler-for-rust.html
Indeed I want to push back hard against the claim that the Rust community as a
whole is "hostile" towards gcc-rs. There are a lot of people that do not share
the opinion that an independent implementation is needed, and there is some (IMO
justified) concern about the downsides of an independent implementation (mostly
concerning the risk of a language split / ecosystem fragmentation). However, the
gcc-rs folks have consistently stated that they are aware of this and intend
gcc-rs to be fully compatible with rustc by not providing any custom language
extensions / flags that could split the ecosystem, which has resolved all those
concerns at least as far as I am concerned. :)
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 23:34 ` Kent Overstreet
@ 2025-02-26 11:57 ` Gary Guo
2025-02-27 14:43 ` Ventura Jack
2025-02-26 14:26 ` Ventura Jack
1 sibling, 1 reply; 194+ messages in thread
From: Gary Guo @ 2025-02-26 11:57 UTC (permalink / raw)
To: Kent Overstreet
Cc: Linus Torvalds, Alice Ryhl, Ventura Jack, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Tue, 25 Feb 2025 18:34:42 -0500
Kent Overstreet <kent.overstreet@linux.dev> wrote:
> On Tue, Feb 25, 2025 at 01:24:42PM -0800, Linus Torvalds wrote:
> > On Tue, 25 Feb 2025 at 12:55, Kent Overstreet <kent.overstreet@linux.dev> wrote:
> > >
> > > The problem isn't that "pointer aliasing is fundamentally unsafe and
> > > dangerous and therefore the compiler just has to stay away from it
> > > completely" - the problem has just been the lack of a workable model.
> >
> > It's not entirely clear that a workable aliasing model exists outside
> > of "don't assume lack of aliasing".
> >
> > Because THAT is the only truly workable model I know of. It's the one
> > we use in the kernel, and it works just fine.
> >
> > For anything else, we only have clear indications that _unworkable_
> > models exist.
> >
> > We know type aliasing is garbage.
>
> The C people thinking casting to a union was a workable escape hatch was
> hilarious, heh. But now we've got mem::transmute(), i.e. that can (and
> must be) annotated to the compiler.
Well, you can still use unions to transmute different types in Rust,
and in addition to that, transmuting through pointers is also
perfecting valid. These don't need special annotations.
There's simply no type aliasing in Rust. In fact, there's a whole
library called zerocopy that exactly give you a way to transmute
between different types safely without copying!
I can completely concur that type aliasing is garbage and I'm glad that
it doesn't exist in Rust.
> > We know "restrict" doesn't work very well: part of that is that it's
> > fairly cumbersome to use, but a large part of that is that a pointer
> > will be restricted in one context and not another, and it's just
> > confusing and hard to get right.
>
> And it only works at all in the simplest of contexts...
>
> > What we do know works is hard rules based on provenance. All compilers
> > will happily do sane alias analysis based on "this is a variable that
> > I created, I know it cannot alias with anything else, because I didn't
> > expose the address to anything else".
>
> Yep. That's what all this is based on.
Correct. In fact, Rust has already stabilized the strict provenance
APIs so that developers can more easily express there intention on how
their operations on pointers should affect provenance. I'd say this is
a big step forward compared to C.
>
> > So *provenance*-based aliasing works, but it only works in contexts
> > where you can see the provenance. Having some way to express
> > provenance across functions (and not *just* at allocation time) might
> > be a good model.
>
> We have that! That's exactly what lifetime annotations are.
>
> We don't have that for raw pointers, but I'm not sure that would ever be
> needed since you use raw pointers in small and localized places, and a
> lot of the places where aliasing comes up in C (e.g. memmove()) you
> express differently in Rust, with slices and indices.
On thing to note is that Rust aliasing rules are not tied to lifetime
annotations. The rule applies equally to safe and unsafe Rust code.
It's just that with lifetime annotations, it *prevents* you from
writing code that does not conform to the aliasing rules.
Raw pointers stil have provenances, and misusing them can cause you
trouble -- although a lot of "pitfalls" in C does not exist, e.g.
comparing two pointers are properly defined as
comparision-without-provenance in Rust.
>
> (You want to drop from references to raw pointers at the last possible
> moment).
>
> And besides, a lot of the places where aliasing comes up in C are
> already gone in Rust, there's a lot of little things that help.
> Algebraic data types are a big one, since a lot of the sketchy hackery
> that goes on in C where aliasing is problematic is just working around
> the lack of ADTs.
>
> > But in the absence of knowledge, and in the absence of
> > compiler-imposed rules (and "unsafe" is by *definition* that absence),
> > I think the only rule that works is "don't assume they don't alias".
>
> Well, for the vast body of Rust code that's been written that just
> doesn't seem to be the case, and I think it's been pretty well
> demonstrated that anything we can do in C, we can also do just as
> effectively in Rust.
>
> treeborrow is already merged into Miri - this stuff is pretty far along.
>
> Now if you're imagining directly translating all the old grotty C code I
> know you have in your head - yeah, that won't work. But we already knew
> that.
If you translate some random C code to all-unsafe Rust I think there's
a good chance that it's (pedantically) undefined C code but well
defined Rust code!
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 17:36 ` Alice Ryhl
2025-02-25 18:16 ` H. Peter Anvin
@ 2025-02-26 12:36 ` Ventura Jack
2025-02-26 13:52 ` Miguel Ojeda
2025-02-26 14:14 ` Ralf Jung
1 sibling, 2 replies; 194+ messages in thread
From: Ventura Jack @ 2025-02-26 12:36 UTC (permalink / raw)
To: Alice Ryhl
Cc: Linus Torvalds, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Tue, Feb 25, 2025 at 10:36 AM Alice Ryhl <aliceryhl@google.com> wrote:
>
> On Tue, Feb 25, 2025 at 6:21 PM Ventura Jack <venturajack85@gmail.com> wrote:
> > Is there a specification for aliasing that defines your first bullet
> > point, that people can read and use, as a kind of partial
> > specification? Or maybe a subset of your first bullet point, as a
> > conservative partial specification? I am guessing that stacked
> > borrows or tree borrows might be useful for such a purpose.
> > But I do not know whether either of stacked borrows or tree
> > borrows have only false positives, only false negatives, or both.
>
> In general I would say read the standard library docs. But I don't
> know of a single resource with everything in one place.
>
> Stacked borrows and tree borrows are attempts at creating a full model
> that puts everything in the two first categories. They are not
> conservative partial specifications.
Tree borrows is, as far as I can tell, the successor to stacked borrows.
https://perso.crans.org/vanille/treebor/
"Tree Borrows is a proposed alternative to Stacked Borrows that
fulfills the same role: to analyse the execution of Rust code at
runtime and define the precise requirements of the aliasing
constraints."
In a preprint paper, both stacked borrows and tree burrows are as
far as I can tell described as having false positives.
https://perso.crans.org/vanille/treebor/aux/preprint.pdf
"This overcomes the aforementioned limitations: our evaluation
on the 30 000 most widely used Rust crates shows that Tree
Borrows rejects 54% fewer test cases than Stacked Borrows does."
That paper also refers specifically to LLVM.
https://perso.crans.org/vanille/treebor/aux/preprint.pdf
"Tree Borrows (like Stacked Borrows) was designed with this in
mind, so that a Rust program that complies with the rules of Tree
Borrows should translate into an LLVM IR program that satisfies
all the assumptions implied by noalias."
Are you sure that both stacked borrows and tree borrows are
meant to be full models with no false positives and false negatives,
and no uncertainty, if I understand you correctly? It should be
noted that they are both works in progress.
MIRI is also used a lot like a sanitizer, and that means that MIRI
cannot in general ensure that a program has no undefined
behavior/memory safety bugs, only at most that a given test run
did not violate the model. So if the test runs do not cover all
possible runs, UB may still hide. MIRI is still very good, though,
as it has caught a lot of undefined behavior/memory safety bugs,
and potential bugs, in the Rust standard library and other Rust
code.
https://github.com/rust-lang/miri#bugs-found-by-miri
> > For Rust documentation, I have heard of the official
> > documentation websites at
> >
> > https://doc.rust-lang.org/book/
> > https://doc.rust-lang.org/nomicon/
> >
> > And various blogs, forums and research papers.
> >
> > If there is no such conservative partial specification for
> > aliasing yet, I wonder if such a conservative partial
> > specification could be made with relative ease, especially if
> > it is very conservative, at least in its first draft. Though there
> > is currently no specification of the Rust language and just
> > one major compiler.
> >
> > I know that Java defines an additional conservative reasoning
> > model for its memory model that is easier to reason about
> > than the full memory model, namely happens-before
> > relationship. That conservative reasoning model is taught in
> > official Java documentation and in books.
>
> On the topic of conservative partial specifications, I like the blog
> post "Tower of weakenings" from back when the strict provenance APIs
> were started, which I will share together with a quote from it:
>
> > Instead, we should have a tower of Memory Models, with the ones at the top being “what users should think about and try to write their code against”. As you descend the tower, the memory models become increasingly complex or vague but critically always more permissive than the ones above it. At the bottom of the tower is “whatever the compiler actually does” (and arguably “whatever the hardware actually does” in the basement, if you care about that).
> > https://faultlore.com/blah/tower-of-weakenings/
>
> You can also read the docs for the ptr module:
> https://doc.rust-lang.org/stable/std/ptr/index.html
That latter link refers through the undefined behavior page to.
https://doc.rust-lang.org/stable/reference/behavior-considered-undefined.html
http://llvm.org/docs/LangRef.html#pointer-aliasing-rules
The aliasing rules being tied to a specific compiler backend,
instead of a specification, might make it harder for other
Rust compilers, like gccrs, to implement the same behavior for
compiled programs, as what the sole major Rust compiler,
rustc, has of behavior for compiled programs.
> > On the topic of difficulty, even if there was a full specification,
> > it might still be difficult to work with aliasing in unsafe Rust.
> > For C "restrict", I assume that "restrict" is fully specified, and
> > C developers still typically avoid "restrict". And for unsafe
> > Rust, the Rust community helpfully encourages people to
> > avoid unsafe Rust when possible due to its difficulty.
>
> This I will not object to :)
>
> Alice
On the topic of difficulty and the aliasing rules not being
specified, some have claimed that the aliasing rules for
Rust not being fully specified makes unsafe Rust harder.
https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/
"The aliasing rules in Rust are not fully defined. That’s
part of what makes this hard. You have to write code
assuming the most pessimal aliasing model."
"Note: This may have been a MIRI bug or the rules have
since been relaxed, because I can no longer reproduce
as of nightly-2024-06-12. Here’s where the memory
model and aliasing rules not being defined caused some
pain: when MIRI fails, it’s unclear whether it’s my fault or
not. For example, given the &mut was immediately
turned into a pointer, does the &mut reference still exist?
There are multiple valid interpretations of the rules."
I am also skeptical of the apparent reliance on MIRI in the
blog post and by some other Rust developers, since
MiRI according to its own documentation cannot catch
everything. It is better not to rely on a sanitizer for trying
to figure out the correctness of a program. Sanitizers are
useful for purposes like mitigation and debugging, not
necessarily for determining correctness.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 20:21 ` Kent Overstreet
2025-02-25 20:37 ` H. Peter Anvin
@ 2025-02-26 13:03 ` Ventura Jack
2025-02-26 13:53 ` Miguel Ojeda
1 sibling, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-26 13:03 UTC (permalink / raw)
To: Kent Overstreet
Cc: H. Peter Anvin, Alice Ryhl, Linus Torvalds, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
On Tue, Feb 25, 2025 at 1:21 PM Kent Overstreet
<kent.overstreet@linux.dev> wrote:
>
> On Tue, Feb 25, 2025 at 10:16:17AM -0800, H. Peter Anvin wrote:
> >
> > I do have to say one thing about the standards process: it forces a
> > real specification to be written, as in a proper interface contract,
> > including the corner cases (which of course may be "undefined", but
> > the idea is that even what is out of scope is clear.)
>
> Did it, though?
>
> The C standard didn't really define undefined behaviour in a
> particularly useful way, and the compiler folks have always used it as a
> shield to hide behind - "look! the standard says we can!", even though
> that standard hasn't meaninfully changed it decades. It ossified things.
>
> Whereas the Rust process seems to me to be more defined by actual
> conversations with users and a focus on practicality and steady
> improvement towards meaningful goals - i.e. concrete specifications.
> There's been a lot of work towards those.
>
> You don't need a standards body to have specifications.
Some have claimed that a full specification for aliasing missing
makes unsafe Rust harder than it otherwise would be. Though
there is work on specifications as far as I understand it.
One worry I do have, is that the aliasing rules being officially
tied to LLVM instead of having its own separate specification,
may make it harder for other compilers like gccrs to implement
the same behavior for programs as rustc.
https://doc.rust-lang.org/stable/reference/behavior-considered-undefined.html
http://llvm.org/docs/LangRef.html#pointer-aliasing-rules
Interestingly, some other features of Rust are defined through C++
or implemented similar to C++.
https://doc.rust-lang.org/nomicon/atomics.html
"Rust pretty blatantly just inherits the memory model for
atomics from C++20. This is not due to this model being
particularly excellent or easy to understand."
https://rust-lang.github.io/rfcs/1236-stabilize-catch-panic.html
"Panics in Rust are currently implemented essentially as
a C++ exception under the hood. As a result, exception
safety is something that needs to be handled in Rust code
today."
Exception/unwind safety may be another subject that increases
the difficulty of writing unsafe Rust. At least the major or
aspiring Rust compilers, rustc and gccrs, are all sharing
code or infrastructure with C++ compilers, so C++ reuse in
the Rust language should not hinder making new major
compilers for Rust.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 12:36 ` Ventura Jack
@ 2025-02-26 13:52 ` Miguel Ojeda
2025-02-26 15:21 ` Ventura Jack
2025-02-26 14:14 ` Ralf Jung
1 sibling, 1 reply; 194+ messages in thread
From: Miguel Ojeda @ 2025-02-26 13:52 UTC (permalink / raw)
To: Ventura Jack
Cc: Alice Ryhl, Linus Torvalds, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux, Ralf Jung
On Wed, Feb 26, 2025 at 1:36 PM Ventura Jack <venturajack85@gmail.com> wrote:
>
> In a preprint paper, both stacked borrows and tree burrows are as
> far as I can tell described as having false positives.
>
> Are you sure that both stacked borrows and tree borrows are
> meant to be full models with no false positives and false negatives,
> and no uncertainty, if I understand you correctly? It should be
> noted that they are both works in progress.
I think you are mixing up two things: "a new model does not allow
every single unsafe code pattern out there" with "a new model, if
adopted, would still not be able to tell if something is UB or not".
> The aliasing rules being tied to a specific compiler backend,
> instead of a specification, might make it harder for other
> Rust compilers, like gccrs, to implement the same behavior for
> compiled programs, as what the sole major Rust compiler,
> rustc, has of behavior for compiled programs.
It is not "tied to a specific compiler backend". The reference (or
e.g. the standard library implementation, which you mentioned) may
mention LLVM, as well as other backends, but that does not imply the
final rules will (or need to) refer to the LLVM reference. And even if
a spec refers to a given revision of another spec (it is not
uncommon), that is different from being "tied to a specific compiler
backend".
Moreover, if it makes it easier, another compiler could always assume less.
> I am also skeptical of the apparent reliance on MIRI in the
> blog post and by some other Rust developers, since
> MiRI according to its own documentation cannot catch
> everything. It is better not to rely on a sanitizer for trying
> to figure out the correctness of a program. Sanitizers are
> useful for purposes like mitigation and debugging, not
> necessarily for determining correctness.
Please see the earlier reply from Ralf on this.
Cheers,
Miguel
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 13:03 ` Ventura Jack
@ 2025-02-26 13:53 ` Miguel Ojeda
2025-02-26 14:07 ` Ralf Jung
2025-02-26 14:26 ` James Bottomley
0 siblings, 2 replies; 194+ messages in thread
From: Miguel Ojeda @ 2025-02-26 13:53 UTC (permalink / raw)
To: Ventura Jack
Cc: Kent Overstreet, H. Peter Anvin, Alice Ryhl, Linus Torvalds,
Gary Guo, airlied, boqun.feng, david.laight.linux, ej, gregkh,
hch, ksummit, linux-kernel, rust-for-linux, Ralf Jung
On Wed, Feb 26, 2025 at 2:03 PM Ventura Jack <venturajack85@gmail.com> wrote:
>
> One worry I do have, is that the aliasing rules being officially
> tied to LLVM instead of having its own separate specification,
> may make it harder for other compilers like gccrs to implement
> the same behavior for programs as rustc.
I don't think they are (or rather, will be) "officially tied to LLVM".
> Interestingly, some other features of Rust are defined through C++
> or implemented similar to C++.
Of course, Rust has inherited a lot of ideas from other languages.
It is also not uncommon for specifications to refer to others, e.g.
C++ refers to ~10 documents, including C; and C refers to some too.
> Exception/unwind safety may be another subject that increases
> the difficulty of writing unsafe Rust.
Note that Rust panics in the kernel do not unwind.
Cheers,
Miguel
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 18:54 ` Linus Torvalds
2025-02-25 19:47 ` Kent Overstreet
@ 2025-02-26 13:54 ` Ralf Jung
2025-02-26 17:59 ` Linus Torvalds
1 sibling, 1 reply; 194+ messages in thread
From: Ralf Jung @ 2025-02-26 13:54 UTC (permalink / raw)
To: Linus Torvalds, Alice Ryhl
Cc: Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
Hi all,
>> I think all of this worrying about Rust not having defined its
>> aliasing model is way overblown. Ultimately, the status quo is that
>> each unsafe operation that has to do with aliasing falls into one of
>> three categories:
>>
>> * This is definitely allowed.
>> * This is definitely UB.
>> * We don't know whether we want to allow this yet.
>
> Side note: can I please ask that the Rust people avoid the "UD" model
> as much as humanly possible?
>
> In particular, if there is something that is undefined behavior - even
> if it's in some "unsafe" mode, please please please make the rule be
> that
>
> (a) either the compiler ends up being constrained to doing things in
> some "naive" code generation
>
> or it's a clear UB situation, and
>
> (b) the compiler will warn about it
That would be lovely, wouldn't it?
Sadly, if you try to apply this principle at scale in a compiler that does
non-trivial optimizations, it is very unclear what this would even mean. I am
not aware of any systematic/rigorous description of compiler correctness in the
terms you are suggesting here. The only approach we know that we can actually
pull through systematically (in the sense of "at least in principle, we can
formally prove this correct") is to define the "visible behavior" of the source
program, the "visible behavior" of the generated assembly, and promise that they
are the same. (Or, more precisely, that the latter is a refinement of the
former.) So the Rust compiler promises nothing about the shape of the assembly
you will get, only about its "visible" behavior (and which exact memory access
occurs when is generally not considered "visible").
There is a *long* list of caveats here for things like FFI, volatile accesses,
and inline assembly. It is possible to deal with them systematically in this
framework, but spelling this out here would take too long. ;)
Once you are at a level of "visible behavior", there are a bunch of cases where
UB is the only option. The most obvious ones are out-of-bounds writes, and
calling a function pointer that doesn't point to valid code with the right ABI
and signature. There's just no way to constrain the effect on program behavior
that such an operation can have.
We also *do* want to let programmers explicitly tell the compiler "this code
path is unreachable, please just trust me on this and use that information for
your optimizations". This is a pretty powerful and useful primitive and gives
rise to things like unwrap_unchecked in Rust.
So our general stance in Rust is that we minimize as much as we can the cases
where there is UB. We avoid gratuitous UB e.g. for integer overflow or sequence
point violations. We guarantee there is no UB in entirely safe code. We provide
tooling, documentation, and diagnostics to inform programmers about UB and help
them understand what is and is not UB. (We're always open to suggestions for
better diagnostics.)
But if a program does have UB, then all bets are indeed off. We see UB as a
binding contract between programmer and compiler: the programmer promises to
never cause UB, the compiler in return promises to generate code whose "visible
behavior" matches that of the source program. There's a very pragmatic reason
for that (it's how LLVM works, and Rust wouldn't be where it is without LLVM
proving that it can compete with C/C++ on performance), but there's also the
reason mentioned above that it is not at all clear what the alternative would
actually look like, once you dig into it systematically (short of "don't
optimize unsafe code", which most people using unsafe for better performance
would dislike very much -- and "better performance" is one of the primary
reasons people reach for unsafe Rust).
In other words, in my view it's not the "unconstrained UB" model that is wrong
with C, it is *how easy* it is to accidentally make a promise to the compiler
that you cannot actually uphold. Having every single (signed) addition be a
binding promise is a disaster, of course nobody can keep up with all those
promises. Having an explicit "add_unchecked" be a promise is entirely fine and
there are cases where this can help generate a lot better code.
Having the use of an "&mut T" or "&T" reference be a promise is certainly more
subtle, and maybe too subtle, but my understanding is that the performance wins
from those assumptions even just on the Rust compiler itself are substantial.
Kind regards,
Ralf
>
> IOW, *please* avoid the C model of "Oh, I'll generate code that
> silently takes advantage of the fact that if I'm wrong, this case is
> undefined".
>
> And BTW, I think this is _particularly_ true for unsafe rust. Yes,
> it's "unsafe", but at the same time, the unsafe parts are the fragile
> parts and hopefully not _so_ hugely performance-critical that you need
> to do wild optimizations.
>
> So the cases I'm talking about is literally re-ordering accesses past
> each other ("Hey, I don't know if these alias or not, but based on
> some paper standard - rather than the source code - I will assume they
> do not"), and things like integer overflow behavior ("Oh, maybe this
> overflows and gives a different answer than the naive case that the
> source code implies, but overflow is undefined so I can screw it up").
>
> I'd just like to point to one case where the C standards body seems to
> have actually at least consider improving on undefined behavior (so
> credit where credit is due, since I often complain about the C
> standards body):
>
> https://www9.open-std.org/JTC1/SC22/WG14/www/docs/n3203.htm
>
> where the original "this is undefined" came from the fact that
> compilers were simple and restricting things like evaluation order
> caused lots of problems. These days, a weak ordering definition causes
> *many* more problems, and compilers are much smarter, and just saying
> that the code has to act as if there was a strict ordering of
> operations still allows almost all the normal optimizations in
> practice.
>
> This is just a general "please avoid the idiocies of the past". The
> potential code generation improvements are not worth the pain.
>
> Linus
>
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 22:42 ` Miguel Ojeda
@ 2025-02-26 14:01 ` Ralf Jung
0 siblings, 0 replies; 194+ messages in thread
From: Ralf Jung @ 2025-02-26 14:01 UTC (permalink / raw)
To: Miguel Ojeda, Kent Overstreet
Cc: Linus Torvalds, Alice Ryhl, Ventura Jack, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
Hi all,
>> I think the one thing that's missing w.r.t. aliasing that Rust could
>> maybe use is a kasan-style sanitizer, I think with treeborrows and "now
>> we have an actual model for aliasing optimizations" it should be possible
>> to write such a sanitizer. But the amount of code doing complicated
>> enough stuff with unsafe should really be quite small, so - shouldn't be
>
> Miri implements those models and can check code for conformance. It
> can be used easily in the Rust playground (top-right corner -> Tools
> -> Miri):
>
> https://play.rust-lang.org
>
> However, it does not work when you involved C FFI, though, but you can
> play there. For more advanced usage, e.g. testing a particular model
> like Tree Borrows, I think you need to use it locally, since I am not
> sure if flags can be passed yet.
>
> I would like to get it, plus other tools, into Compiler Explorer, see
> e.g. https://github.com/compiler-explorer/compiler-explorer/issues/2563.
By default (and on the playground), Miri will check Stacked Borrows rules. Those
are almost always *more strict* than Tree Borrows rules.
Unfortunately playground does not let you pass your own flags, so yeah getting
Miri on godbolt would be great. :D
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 13:53 ` Miguel Ojeda
@ 2025-02-26 14:07 ` Ralf Jung
2025-02-26 14:26 ` James Bottomley
1 sibling, 0 replies; 194+ messages in thread
From: Ralf Jung @ 2025-02-26 14:07 UTC (permalink / raw)
To: Miguel Ojeda, Ventura Jack
Cc: Kent Overstreet, H. Peter Anvin, Alice Ryhl, Linus Torvalds,
Gary Guo, airlied, boqun.feng, david.laight.linux, ej, gregkh,
hch, ksummit, linux-kernel, rust-for-linux
Hi all,
On 26.02.25 14:53, Miguel Ojeda wrote:
> On Wed, Feb 26, 2025 at 2:03 PM Ventura Jack <venturajack85@gmail.com> wrote:
>>
>> One worry I do have, is that the aliasing rules being officially
>> tied to LLVM instead of having its own separate specification,
>> may make it harder for other compilers like gccrs to implement
>> the same behavior for programs as rustc.
>
> I don't think they are (or rather, will be) "officially tied to LLVM".
We do link to the LLVM aliasing rules from the reference, as VJ correctly
pointed out. This is basically a placeholder: we absolutely do *not* want Rust
to be tied to LLVM's aliasing rules, but we also are not yet ready to commit to
our own rules. (The ongoing work on Stacked Borrows and Tree Borrows has been
discussed elsewhere in this thread.)
Maybe we should remove that link from the reference. It just makes us look more
tied to LLVM than we are.
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 12:36 ` Ventura Jack
2025-02-26 13:52 ` Miguel Ojeda
@ 2025-02-26 14:14 ` Ralf Jung
2025-02-26 15:40 ` Ventura Jack
1 sibling, 1 reply; 194+ messages in thread
From: Ralf Jung @ 2025-02-26 14:14 UTC (permalink / raw)
To: Ventura Jack, Alice Ryhl
Cc: Linus Torvalds, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
Hi all,
> Tree borrows is, as far as I can tell, the successor to stacked borrows.
>
> https://perso.crans.org/vanille/treebor/
> "Tree Borrows is a proposed alternative to Stacked Borrows that
> fulfills the same role: to analyse the execution of Rust code at
> runtime and define the precise requirements of the aliasing
> constraints."
>
> In a preprint paper, both stacked borrows and tree burrows are as
> far as I can tell described as having false positives.
>
> https://perso.crans.org/vanille/treebor/aux/preprint.pdf
> "This overcomes the aforementioned limitations: our evaluation
> on the 30 000 most widely used Rust crates shows that Tree
> Borrows rejects 54% fewer test cases than Stacked Borrows does."
>
> That paper also refers specifically to LLVM.
>
> https://perso.crans.org/vanille/treebor/aux/preprint.pdf
> "Tree Borrows (like Stacked Borrows) was designed with this in
> mind, so that a Rust program that complies with the rules of Tree
> Borrows should translate into an LLVM IR program that satisfies
> all the assumptions implied by noalias."
>
> Are you sure that both stacked borrows and tree borrows are
> meant to be full models with no false positives and false negatives,
> and no uncertainty, if I understand you correctly?
Speaking as an author of both models: yes. These models are candidates for the
*definition* of which programs are correct and which are not. In that sense,
once adopted, the model *becomes* the baseline, and by definition has no false
negative or false positives.
> It should be
> noted that they are both works in progress.
>
> MIRI is also used a lot like a sanitizer, and that means that MIRI
> cannot in general ensure that a program has no undefined
> behavior/memory safety bugs, only at most that a given test run
> did not violate the model. So if the test runs do not cover all
> possible runs, UB may still hide.
That is true: if coverage is incomplete or there is non-determinism, Miri can
miss bugs. Miri does testing, not verification. (However, verification tools are
in the works as well, and thanks to Miri we have a very good idea of what
exactly it is that these tools have to check for.)
However, unlike sanitizers, Miri can at least catch every UB that arises *in a
given execution*, since it does model the *entire* Abstract Machine of Rust.
And since we are part of the Rust project, we are doing everything we can to
ensure that this is the *same* Abstract machine as what the compiler implements.
This is the big difference to C, where the standard is too ambiguous to uniquely
give rise to a single Abstract Machine, and where we are very far from having a
tool that fully implements the Abstract Machine of C in a way that is consistent
with a widely-used compiler, and that can be practically used to test real-world
code.
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-25 23:34 ` Kent Overstreet
2025-02-26 11:57 ` Gary Guo
@ 2025-02-26 14:26 ` Ventura Jack
1 sibling, 0 replies; 194+ messages in thread
From: Ventura Jack @ 2025-02-26 14:26 UTC (permalink / raw)
To: Kent Overstreet
Cc: Linus Torvalds, Alice Ryhl, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Tue, Feb 25, 2025 at 4:34 PM Kent Overstreet
<kent.overstreet@linux.dev> wrote:
>
> On Tue, Feb 25, 2025 at 01:24:42PM -0800, Linus Torvalds wrote:
> > What we do know works is hard rules based on provenance. All compilers
> > will happily do sane alias analysis based on "this is a variable that
> > I created, I know it cannot alias with anything else, because I didn't
> > expose the address to anything else".
>
> Yep. That's what all this is based on.
>
> > So *provenance*-based aliasing works, but it only works in contexts
> > where you can see the provenance. Having some way to express
> > provenance across functions (and not *just* at allocation time) might
> > be a good model.
>
> We have that! That's exactly what lifetime annotations are.
>
> We don't have that for raw pointers, but I'm not sure that would ever be
> needed since you use raw pointers in small and localized places, and a
> lot of the places where aliasing comes up in C (e.g. memmove()) you
> express differently in Rust, with slices and indices.
>
> (You want to drop from references to raw pointers at the last possible
> moment).
The Rust community in general warns a lot against unsafe Rust, and
encourages developers to write as little unsafe Rust as possible,
or avoid it entirely. And multiple blog posts have been written
claiming that unsafe Rust is harder than C as well as C++.
I will link some of the blog posts upon request, I have linked some
of them in other emails.
And there have been undefined behavior/memory safety bugs
in Rust projects, both in the Rust standard library (which has a lot
of unsafe Rust relative to many other Rust projects) and in
other Rust projects.
https://nvd.nist.gov/vuln/detail/CVE-2024-27308
Amazon Web Services, possibly the biggest Rust developer employer,
initiated last year a project for formal verification of the Rust standard
library.
However, due to various reasons such as the general difficulty of
formal verification, the project is crowd-sourced.
https://aws.amazon.com/blogs/opensource/verify-the-safety-of-the-rust-standard-library/
"Verifying the Rust libraries is difficult because: 1/ lack of a
specification, 2/ lack of an existing verification mechanism
in the Rust ecosystem, 3/ the large size of the verification
problem, and 4/ the unknowns of scalable verification. Given
the magnitude and scope of the effort, we believe that a single
team would be unable to make significant inroads. Our
approach is to create a community owned effort."
All in all, unsafe Rust appears very difficult in practice, and tools
like MIRI, while very good, does not catch everything, and share
many of the advantages and disadvantages of sanitizers.
Would unsafe Rust have been substantially easier if Rust did not
have pervasive aliasing optimizations? If a successor language
to Rust also includes the safe-unsafe divide, but does not have
pervasive aliasing optimizations, that may yield an indication of
an answer to that question. Especially if such a language only
uses aliasing optimizations when the compiler, not the
programmer, proves it is safe to do those optimizations.
Rust is very unlikely to skip its aliasing optimizations, since it is one
major reason why Rust has often had comparable, or sometimes
better, performance than C and C++ in some benchmarks, despite
some runtime checks as I understand it in Rust.
> And besides, a lot of the places where aliasing comes up in C are
> already gone in Rust, there's a lot of little things that help.
> Algebraic data types are a big one, since a lot of the sketchy hackery
> that goes on in C where aliasing is problematic is just working around
> the lack of ADTs.
Algebraic data types/tagged unions, together with pattern matching,
are indeed excellent. But they are independent of Rust's novel features,
they are part of the functional programming tradition, and they have
been added to many old and new mainstream programming
languages. They are low-hanging fruits. They help not only with
avoiding undefined behavior/memory safety bugs, but also with
general correctness, maintainability, etc.
C seems to avoid features that would bring it closer to C++, and C
is seemingly kept simple, but otherwise it should not be difficult to
add them to C. C's simplicity makes it easier to write new C compilers.
Though these days people often write backends for GCC or LLVM,
as I understand it.
If you, the Linux kernel community, really want these low-hanging
fruits, I suspect that you might be able to get the C standards
people to do it. Little effort, a lot of benefit for all your new or
refactored C code.
C++ has std::variant, but no pattern matching. Neither of the two
pattern matching proposals for C++26 were accepted, but C++29
will almost certainly have pattern matching.
Curiously, C++ does not have C's "restrict" keyword.
> > But in the absence of knowledge, and in the absence of
> > compiler-imposed rules (and "unsafe" is by *definition* that absence),
> > I think the only rule that works is "don't assume they don't alias".
>
> Well, for the vast body of Rust code that's been written that just
> doesn't seem to be the case, and I think it's been pretty well
> demonstrated that anything we can do in C, we can also do just as
> effectively in Rust.
>
> treeborrow is already merged into Miri - this stuff is pretty far along.
>
> Now if you're imagining directly translating all the old grotty C code I
> know you have in your head - yeah, that won't work. But we already knew
> that.
Yet the Rust community encourages not to use unsafe Rust when
it is possible to not use it, and many have claimed in the Rust
community that unsafe Rust is harder than C and C++. And there
is still only one major Rust compiler and no specification, unlike
for C.
As for tree borrows, it is not yet used by default in MIRI as far as
I can tell, when I ran MIRI against an example with UB, I got a
warning that said that the Stacked Borrows rules are still
experimental. I am guessing that you have to use a flag to enable
tree borrows.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 13:53 ` Miguel Ojeda
2025-02-26 14:07 ` Ralf Jung
@ 2025-02-26 14:26 ` James Bottomley
2025-02-26 14:37 ` Ralf Jung
` (2 more replies)
1 sibling, 3 replies; 194+ messages in thread
From: James Bottomley @ 2025-02-26 14:26 UTC (permalink / raw)
To: Miguel Ojeda, Ventura Jack
Cc: Kent Overstreet, H. Peter Anvin, Alice Ryhl, Linus Torvalds,
Gary Guo, airlied, boqun.feng, david.laight.linux, ej, gregkh,
hch, ksummit, linux-kernel, rust-for-linux, Ralf Jung
On Wed, 2025-02-26 at 14:53 +0100, Miguel Ojeda wrote:
> On Wed, Feb 26, 2025 at 2:03 PM Ventura Jack
> <venturajack85@gmail.com> wrote:
[...]
> > Exception/unwind safety may be another subject that increases
> > the difficulty of writing unsafe Rust.
>
> Note that Rust panics in the kernel do not unwind.
I presume someone is working on this, right? While rust isn't
pervasive enough yet for this to cause a problem, dumping a backtrace
is one of the key things we need to diagnose how something went wrong,
particularly for user bug reports where they can't seem to bisect.
Regards,
James
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 14:26 ` James Bottomley
@ 2025-02-26 14:37 ` Ralf Jung
2025-02-26 14:39 ` Greg KH
2025-02-26 17:11 ` Miguel Ojeda
2 siblings, 0 replies; 194+ messages in thread
From: Ralf Jung @ 2025-02-26 14:37 UTC (permalink / raw)
To: James Bottomley, Miguel Ojeda, Ventura Jack
Cc: Kent Overstreet, H. Peter Anvin, Alice Ryhl, Linus Torvalds,
Gary Guo, airlied, boqun.feng, david.laight.linux, ej, gregkh,
hch, ksummit, linux-kernel, rust-for-linux
On 26.02.25 15:26, James Bottomley wrote:
> On Wed, 2025-02-26 at 14:53 +0100, Miguel Ojeda wrote:
>> On Wed, Feb 26, 2025 at 2:03 PM Ventura Jack
>> <venturajack85@gmail.com> wrote:
> [...]
>>> Exception/unwind safety may be another subject that increases
>>> the difficulty of writing unsafe Rust.
>>
>> Note that Rust panics in the kernel do not unwind.
>
> I presume someone is working on this, right? While rust isn't
> pervasive enough yet for this to cause a problem, dumping a backtrace
> is one of the key things we need to diagnose how something went wrong,
> particularly for user bug reports where they can't seem to bisect.
Rust panics typically print a backtrace even if they don't unwind. This works
just fine in userland, but I don't know the state in the kernel.
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 14:26 ` James Bottomley
2025-02-26 14:37 ` Ralf Jung
@ 2025-02-26 14:39 ` Greg KH
2025-02-26 14:45 ` James Bottomley
2025-02-26 17:11 ` Miguel Ojeda
2 siblings, 1 reply; 194+ messages in thread
From: Greg KH @ 2025-02-26 14:39 UTC (permalink / raw)
To: James Bottomley
Cc: Miguel Ojeda, Ventura Jack, Kent Overstreet, H. Peter Anvin,
Alice Ryhl, Linus Torvalds, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, hch, ksummit, linux-kernel,
rust-for-linux, Ralf Jung
On Wed, Feb 26, 2025 at 09:26:50AM -0500, James Bottomley wrote:
> On Wed, 2025-02-26 at 14:53 +0100, Miguel Ojeda wrote:
> > On Wed, Feb 26, 2025 at 2:03 PM Ventura Jack
> > <venturajack85@gmail.com> wrote:
> [...]
> > > Exception/unwind safety may be another subject that increases
> > > the difficulty of writing unsafe Rust.
> >
> > Note that Rust panics in the kernel do not unwind.
>
> I presume someone is working on this, right? While rust isn't
> pervasive enough yet for this to cause a problem, dumping a backtrace
> is one of the key things we need to diagnose how something went wrong,
> particularly for user bug reports where they can't seem to bisect.
The backtrace is there, just like any other call to BUG() provides,
which is what the rust framework calls for this.
Try it and see!
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 14:39 ` Greg KH
@ 2025-02-26 14:45 ` James Bottomley
2025-02-26 16:00 ` Steven Rostedt
0 siblings, 1 reply; 194+ messages in thread
From: James Bottomley @ 2025-02-26 14:45 UTC (permalink / raw)
To: Greg KH
Cc: Miguel Ojeda, Ventura Jack, Kent Overstreet, H. Peter Anvin,
Alice Ryhl, Linus Torvalds, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, hch, ksummit, linux-kernel,
rust-for-linux, Ralf Jung
On Wed, 2025-02-26 at 15:39 +0100, Greg KH wrote:
> On Wed, Feb 26, 2025 at 09:26:50AM -0500, James Bottomley wrote:
> > On Wed, 2025-02-26 at 14:53 +0100, Miguel Ojeda wrote:
> > > On Wed, Feb 26, 2025 at 2:03 PM Ventura Jack
> > > <venturajack85@gmail.com> wrote:
> > [...]
> > > > Exception/unwind safety may be another subject that increases
> > > > the difficulty of writing unsafe Rust.
> > >
> > > Note that Rust panics in the kernel do not unwind.
> >
> > I presume someone is working on this, right? While rust isn't
> > pervasive enough yet for this to cause a problem, dumping a
> > backtrace is one of the key things we need to diagnose how
> > something went wrong, particularly for user bug reports where they
> > can't seem to bisect.
>
> The backtrace is there, just like any other call to BUG() provides,
> which is what the rust framework calls for this.
From some other rust boot system work, I know that the quality of a
simple backtrace in rust where you just pick out addresses you think
you know in the stack and print them as symbols can sometimes be rather
misleading, which is why you need an unwinder to tell you exactly what
happened.
Regards,
James
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 11:34 ` Ralf Jung
@ 2025-02-26 14:57 ` Ventura Jack
2025-02-26 16:32 ` Ralf Jung
0 siblings, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-26 14:57 UTC (permalink / raw)
To: Ralf Jung
Cc: Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
On Wed, Feb 26, 2025 at 4:34 AM Ralf Jung <post@ralfj.de> wrote:
>
> Let me also reply to some statements made further up-thread by Ventura Jack (in
> <https://lore.kernel.org/rust-for-linux/CAFJgqgSqMO724SQxinNqVGCGc7=ibUvVq-f7Qk1=S3A47Mr-ZQ@mail.gmail.com/>):
>
> > - Aliasing in Rust is not opt-in or opt-out,
> > it is always on.
> > https://doc.rust-lang.org/nomicon/aliasing.html
>
> This is true, but only for references. There are no aliasing requirements on raw
> pointers. There *are* aliasing requirements if you mix references and raw
> pointers to the same location, so if you want to do arbitrary aliasing you have
> to make sure you use only raw pointers, no references. So unlike in C, you have
> a way to opt-out entirely within standard Rust.
Fair, though I did have this list item:
- Applies to certain pointer kinds in Rust, namely
Rust "references".
Rust pointer kinds:
https://doc.rust-lang.org/reference/types/pointer.html
where I wrote that the aliasing rules apply to Rust "references".
>
> > - Rust has not defined its aliasing model.
>
> Correct. But then, neither has C. The C aliasing rules are described in English
> prose that is prone to ambiguities and misintepretation. The strict aliasing
> analysis implemented in GCC is not compatible with how most people read the
> standard (https://bugs.llvm.org/show_bug.cgi?id=21725). There is no tool to
> check whether code follows the C aliasing rules, and due to the aforementioned
> ambiguities it would be hard to write such a tool and be sure it interprets the
> standard the same way compilers do.
>
> For Rust, we at least have two candidate models that are defined in full
> mathematical rigor, and a tool that is widely used in the community, ensuring
> the models match realistic use of Rust.
But it is much more significant for Rust than for C, at least in
regards to C's "restrict", since "restrict" is rarely used in C, while
aliasing optimizations are pervasive in Rust. For C's "strict aliasing",
I think you have a good point, but "strict aliasing" is still easier to
reason about in my opinion than C's "restrict". Especially if you
never have any type casts of any kind nor union type punning.
And there have been claims in blog posts and elsewhere in the
Rust community that unsafe Rust is harder than C and C++.
>
> > - The aliasing rules in Rust are possibly as hard or
> > harder than for C "restrict", and it is not possible to
> > opt out of aliasing in Rust, which is cited by some
> > as one of the reasons for unsafe Rust being
> > harder than C.
>
> That is not quite correct; it is possible to opt-out by using raw pointers.
Again, I did have this list item:
- Applies to certain pointer kinds in Rust, namely
Rust "references".
Rust pointer kinds:
https://doc.rust-lang.org/reference/types/pointer.html
where I wrote that the aliasing rules apply to Rust "references".
> > the aliasing rules, may try to rely on MIRI. MIRI is
> > similar to a sanitizer for C, with similar advantages and
> > disadvantages. MIRI uses both the stacked borrow
> > and the tree borrow experimental research models.
> > MIRI, like sanitizers, does not catch everything, though
> > MIRI has been used to find undefined behavior/memory
> > safety bugs in for instance the Rust standard library.
>
> Unlike sanitizers, Miri can actually catch everything. However, since the exact
> details of what is and is not UB in Rust are still being worked out, we cannot
> yet make in good conscience a promise saying "Miri catches all UB". However, as
> the Miri README states:
> "To the best of our knowledge, all Undefined Behavior that has the potential to
> affect a program's correctness is being detected by Miri (modulo bugs), but you
> should consult the Reference for the official definition of Undefined Behavior.
> Miri will be updated with the Rust compiler to protect against UB as it is
> understood by the current compiler, but it makes no promises about future
> versions of rustc."
> See the Miri README (https://github.com/rust-lang/miri/?tab=readme-ov-file#miri)
> for further details and caveats regarding non-determinism.
>
> So, the situation for Rust here is a lot better than it is in C. Unfortunately,
> running kernel code in Miri is not currently possible; figuring out how to
> improve that could be an interesting collaboration.
I do not believe that you are correct when you write:
"Unlike sanitizers, Miri can actually catch everything."
Critically and very importantly, unless I am mistaken about MIRI, and
similar to sanitizers, MIRI only checks with runtime tests. That means
that MIRI will not catch any undefined behavior that a test does
not encounter. If a project's test coverage is poor, MIRI will not
check a lot of the code when run with those tests. Please do
correct me if I am mistaken about this. I am guessing that you
meant this as well, but I do not get the impression that it is
clear from your post.
Further, MIRI, similar to sanitizers, runs much more slowly than
regular tests. I have heard numbers of MIRI running 50x slower
than the tests when not run with MIRI. This blog post claims
400x running time in one case.
https://zackoverflow.dev/writing/unsafe-rust-vs-zig/
"The interpreter isn’t exactly fast, from what I’ve observed
it’s more than 400x slower. Regular Rust can run the tests
I wrote in less than a second, but Miri takes several minutes."
This does not count against MIRI, since it is similar to some
other sanitizers, as I understand it. But it does mean that MIRI
has some similar advantages and disadvantages to sanitizers.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 13:52 ` Miguel Ojeda
@ 2025-02-26 15:21 ` Ventura Jack
2025-02-26 16:06 ` Ralf Jung
2025-02-26 17:49 ` Miguel Ojeda
0 siblings, 2 replies; 194+ messages in thread
From: Ventura Jack @ 2025-02-26 15:21 UTC (permalink / raw)
To: Miguel Ojeda
Cc: Alice Ryhl, Linus Torvalds, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux, Ralf Jung
On Wed, Feb 26, 2025 at 6:52 AM Miguel Ojeda
<miguel.ojeda.sandonis@gmail.com> wrote:
>
> On Wed, Feb 26, 2025 at 1:36 PM Ventura Jack <venturajack85@gmail.com> wrote:
> >
> > In a preprint paper, both stacked borrows and tree burrows are as
> > far as I can tell described as having false positives.
> >
> > Are you sure that both stacked borrows and tree borrows are
> > meant to be full models with no false positives and false negatives,
> > and no uncertainty, if I understand you correctly? It should be
> > noted that they are both works in progress.
>
> I think you are mixing up two things: "a new model does not allow
> every single unsafe code pattern out there" with "a new model, if
> adopted, would still not be able to tell if something is UB or not".
I am not certain that I understand either you or Alice correctly.
But Ralf Jung or others will probably help clarify matters.
> > The aliasing rules being tied to a specific compiler backend,
> > instead of a specification, might make it harder for other
> > Rust compilers, like gccrs, to implement the same behavior for
> > compiled programs, as what the sole major Rust compiler,
> > rustc, has of behavior for compiled programs.
>
> It is not "tied to a specific compiler backend". The reference (or
> e.g. the standard library implementation, which you mentioned) may
> mention LLVM, as well as other backends, but that does not imply the
> final rules will (or need to) refer to the LLVM reference. And even if
> a spec refers to a given revision of another spec (it is not
> uncommon), that is different from being "tied to a specific compiler
> backend".
>
> Moreover, if it makes it easier, another compiler could always assume less.
You are right that I should have written "currently tied", not "tied", and
I do hope and assume that the work with aliasing will result
in some sorts of specifications.
The language reference directly referring to LLVM's aliasing rules,
and that the preprint paper also refers to LLVM, does indicate a tie-in,
even if that tie-in is incidental and not desired. With more than one
major compiler, such tie-ins are easier to avoid.
https://doc.rust-lang.org/stable/reference/behavior-considered-undefined.html
"Breaking the pointer aliasing rules
http://llvm.org/docs/LangRef.html#pointer-aliasing-rules
. Box<T>, &mut T and &T follow LLVM’s scoped noalias
http://llvm.org/docs/LangRef.html#noalias
model, except if the &T contains an UnsafeCell<U>.
References and boxes must not be dangling while they are
live. The exact liveness duration is not specified, but some
bounds exist:"
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 14:14 ` Ralf Jung
@ 2025-02-26 15:40 ` Ventura Jack
2025-02-26 16:10 ` Ralf Jung
0 siblings, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-26 15:40 UTC (permalink / raw)
To: Ralf Jung
Cc: Alice Ryhl, Linus Torvalds, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
On Wed, Feb 26, 2025 at 7:14 AM Ralf Jung <post@ralfj.de> wrote:
>
> Hi all,
>
> > [Omitted]
> >
> > Are you sure that both stacked borrows and tree borrows are
> > meant to be full models with no false positives and false negatives,
> > and no uncertainty, if I understand you correctly?
>
> Speaking as an author of both models: yes. These models are candidates for the
> *definition* of which programs are correct and which are not. In that sense,
> once adopted, the model *becomes* the baseline, and by definition has no false
> negative or false positives.
Thank you for the answer, that clarifies matters for me.
> [Omitted] (However, verification tools are
> in the works as well, and thanks to Miri we have a very good idea of what
> exactly it is that these tools have to check for.) [Omitted]
Verification as in static verification? That is some interesting and
exciting stuff if so.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 14:45 ` James Bottomley
@ 2025-02-26 16:00 ` Steven Rostedt
2025-02-26 16:42 ` James Bottomley
0 siblings, 1 reply; 194+ messages in thread
From: Steven Rostedt @ 2025-02-26 16:00 UTC (permalink / raw)
To: James Bottomley
Cc: Greg KH, Miguel Ojeda, Ventura Jack, Kent Overstreet,
H. Peter Anvin, Alice Ryhl, Linus Torvalds, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, hch, ksummit, linux-kernel,
rust-for-linux, Ralf Jung
On Wed, 26 Feb 2025 09:45:53 -0500
James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> >From some other rust boot system work, I know that the quality of a
> simple backtrace in rust where you just pick out addresses you think
> you know in the stack and print them as symbols can sometimes be rather
> misleading, which is why you need an unwinder to tell you exactly what
> happened.
One thing I learned at GNU Cauldron last year is that the kernel folks use
the term "unwinding" incorrectly. Unwinding to the compiler folks mean
having full access to all the frames and variables and what not for all the
previous functions.
What the kernel calls "unwinding" the compiler folks call "stack walking".
That's a much easier task than doing an unwinding, and that is usually all
we need when something crashes.
That may be the confusion here.
-- Steve
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 15:21 ` Ventura Jack
@ 2025-02-26 16:06 ` Ralf Jung
2025-02-26 17:49 ` Miguel Ojeda
1 sibling, 0 replies; 194+ messages in thread
From: Ralf Jung @ 2025-02-26 16:06 UTC (permalink / raw)
To: Ventura Jack, Miguel Ojeda
Cc: Alice Ryhl, Linus Torvalds, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
Hi all,
> You are right that I should have written "currently tied", not "tied", and
> I do hope and assume that the work with aliasing will result
> in some sorts of specifications.
>
> The language reference directly referring to LLVM's aliasing rules,
> and that the preprint paper also refers to LLVM, does indicate a tie-in,
> even if that tie-in is incidental and not desired. With more than one
> major compiler, such tie-ins are easier to avoid.
>
> https://doc.rust-lang.org/stable/reference/behavior-considered-undefined.html
> "Breaking the pointer aliasing rules
> http://llvm.org/docs/LangRef.html#pointer-aliasing-rules
> . Box<T>, &mut T and &T follow LLVM’s scoped noalias
> http://llvm.org/docs/LangRef.html#noalias
> model, except if the &T contains an UnsafeCell<U>.
> References and boxes must not be dangling while they are
> live. The exact liveness duration is not specified, but some
> bounds exist:"
The papers mention LLVM since LLVM places a key constraint on the Rust model:
every program that is well-defined in Rust must also be well-defined in
LLVM+noalias. We could design our models completely in empty space and come up
with something theoretically beautiful, but the fact of the matter is that Rust
wants LLVM's noalias-based optimizations, and so a model that cannot justify
those is pretty much dead at arrival.
Not sure if that qualifies as us "tying" ourselves to LLVM -- mostly it just
ensures that in our papers we don't come up with a nonsense model that's useless
in practice. :)
The only real tie that exists is that LLVM is the main codegen backend for Rust,
so we strongly care about what it takes to get LLVM to generate good code. We
are aware of this as a potential concern for over-fitting the model, and are
trying to take that into account. So far, the main cases of over-fitting we are
having is that we often make something allowed (not UB) in Rust "because we
can", because it is not UB in LLVM -- and that is a challenge for gcc-rs
whenever C has more UB than LLVM, and GCC follows C (some cases where this
occurs: comparing dead/dangling pointers with "==", comparing entirely unrelated
pointers with "<", doing memcpy with a size of 0 [but C is allowing this soon so
GCC will have to adjust anyway], creating but never using an out-of-bounds
pointer with `wrapping_offset`). But I think that's fine (for gcc-rs to work, it
puts pressure on GCC to support these operations efficiently without UB, which I
don't think is a bad thing); it gets concerning only once we make *more* things
UB than we would otherwise for no good reason other than "LLVM says so". I don't
think we are doing that. I think what we did in the aliasing model is entirely
reasonable and can be justified based on optimization benefits and the structure
of how Rust lifetimes and function scopes interact, but this is a subjective
judgment calls and reasonable people could disagree on this.
The bigger problem is people doing interesting memory management shenanigans via
FFI, and it being not clear whether and how LLVM has considered those
shenanigans in their model, so on the Rust side we can't tell users "this is
fine" until we have an "ok" from the LLVM side -- and meanwhile people do use
those same patterns in C without worrying about it. It can then take a while
until we have convinced LLVM to officially give us (and clang) the guarantees
that clang users have been assuming already for a while.
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 15:40 ` Ventura Jack
@ 2025-02-26 16:10 ` Ralf Jung
2025-02-26 16:50 ` Ventura Jack
0 siblings, 1 reply; 194+ messages in thread
From: Ralf Jung @ 2025-02-26 16:10 UTC (permalink / raw)
To: Ventura Jack
Cc: Alice Ryhl, Linus Torvalds, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
Hi,
>> [Omitted] (However, verification tools are
>> in the works as well, and thanks to Miri we have a very good idea of what
>> exactly it is that these tools have to check for.) [Omitted]
>
> Verification as in static verification? That is some interesting and
> exciting stuff if so.
Yes. There's various projects, from bounded model checkers (Kani) that can
"only" statically guarantee "all executions that run loops at most N times are
fine" to full-fledged static verification tools (Gillian-Rust, VeriFast, Verus,
Prusti, RefinedRust -- just to mention the ones that support unsafe code). None
of the latter tools is production-ready yet, and some will always stay research
prototypes, but there's a lot of work going on, and having a precise model of
the entire Abstract Machine that is blessed by the compiler devs (i.e., Miri) is
a key part for this to work. It'll be even better when this Abstract Machine
exists not just implicitly in Miri but explicitly in a Rust Specification, and
is subject to stability guarantees -- and we'll get there, but it'll take some
more time. :)
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 14:57 ` Ventura Jack
@ 2025-02-26 16:32 ` Ralf Jung
2025-02-26 18:09 ` Ventura Jack
2025-02-26 19:07 ` Martin Uecker
0 siblings, 2 replies; 194+ messages in thread
From: Ralf Jung @ 2025-02-26 16:32 UTC (permalink / raw)
To: Ventura Jack
Cc: Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
Hi VJ,
>>
>>> - Rust has not defined its aliasing model.
>>
>> Correct. But then, neither has C. The C aliasing rules are described in English
>> prose that is prone to ambiguities and misintepretation. The strict aliasing
>> analysis implemented in GCC is not compatible with how most people read the
>> standard (https://bugs.llvm.org/show_bug.cgi?id=21725). There is no tool to
>> check whether code follows the C aliasing rules, and due to the aforementioned
>> ambiguities it would be hard to write such a tool and be sure it interprets the
>> standard the same way compilers do.
>>
>> For Rust, we at least have two candidate models that are defined in full
>> mathematical rigor, and a tool that is widely used in the community, ensuring
>> the models match realistic use of Rust.
>
> But it is much more significant for Rust than for C, at least in
> regards to C's "restrict", since "restrict" is rarely used in C, while
> aliasing optimizations are pervasive in Rust. For C's "strict aliasing",
> I think you have a good point, but "strict aliasing" is still easier to
> reason about in my opinion than C's "restrict". Especially if you
> never have any type casts of any kind nor union type punning.
Is it easier to reason about? At least GCC got it wrong, making no-aliasing
assumptions that are not justified by most people's interpretation of the model:
https://bugs.llvm.org/show_bug.cgi?id=21725
(But yes that does involve unions.)
>>> - The aliasing rules in Rust are possibly as hard or
>>> harder than for C "restrict", and it is not possible to
>>> opt out of aliasing in Rust, which is cited by some
>>> as one of the reasons for unsafe Rust being
>>> harder than C.
>>
>> That is not quite correct; it is possible to opt-out by using raw pointers.
>
> Again, I did have this list item:
>
> - Applies to certain pointer kinds in Rust, namely
> Rust "references".
> Rust pointer kinds:
> https://doc.rust-lang.org/reference/types/pointer.html
>
> where I wrote that the aliasing rules apply to Rust "references".
Okay, fair. But it is easy to misunderstand the other items in your list in
isolation.
>
>>> the aliasing rules, may try to rely on MIRI. MIRI is
>>> similar to a sanitizer for C, with similar advantages and
>>> disadvantages. MIRI uses both the stacked borrow
>>> and the tree borrow experimental research models.
>>> MIRI, like sanitizers, does not catch everything, though
>>> MIRI has been used to find undefined behavior/memory
>>> safety bugs in for instance the Rust standard library.
>>
>> Unlike sanitizers, Miri can actually catch everything. However, since the exact
>> details of what is and is not UB in Rust are still being worked out, we cannot
>> yet make in good conscience a promise saying "Miri catches all UB". However, as
>> the Miri README states:
>> "To the best of our knowledge, all Undefined Behavior that has the potential to
>> affect a program's correctness is being detected by Miri (modulo bugs), but you
>> should consult the Reference for the official definition of Undefined Behavior.
>> Miri will be updated with the Rust compiler to protect against UB as it is
>> understood by the current compiler, but it makes no promises about future
>> versions of rustc."
>> See the Miri README (https://github.com/rust-lang/miri/?tab=readme-ov-file#miri)
>> for further details and caveats regarding non-determinism.
>>
>> So, the situation for Rust here is a lot better than it is in C. Unfortunately,
>> running kernel code in Miri is not currently possible; figuring out how to
>> improve that could be an interesting collaboration.
>
> I do not believe that you are correct when you write:
>
> "Unlike sanitizers, Miri can actually catch everything."
>
> Critically and very importantly, unless I am mistaken about MIRI, and
> similar to sanitizers, MIRI only checks with runtime tests. That means
> that MIRI will not catch any undefined behavior that a test does
> not encounter. If a project's test coverage is poor, MIRI will not
> check a lot of the code when run with those tests. Please do
> correct me if I am mistaken about this. I am guessing that you
> meant this as well, but I do not get the impression that it is
> clear from your post.
Okay, I may have misunderstood what you mean by "catch everything". All
sanitizers miss some UB that actually occurs in the given execution. This is
because they are inserted in the pipeline after a bunch of compiler-specific
choices have already been made, potentially masking some UB. I'm not aware of a
sanitizer for sequence point violations. I am not aware of a sanitizer for
strict aliasing or restrict. I am not aware of a sanitizer that detects UB due
to out-of-bounds pointer arithmetic (I am not talking about OOB accesses; just
the arithmetic is already UB), or UB due to violations of "pointer lifetime end
zapping", or UB due to comparing pointers derived from different allocations. Is
there a sanitizer that correctly models what exactly happens when a struct with
padding gets copied? The padding must be reset to be considered "uninitialized",
even if the entire struct was zero-initialized before. Most compilers implement
such a copy as memcpy; a sanitizer would then miss this UB.
In contrast, Miri checks for all the UB that is used anywhere in the Rust
compiler -- everything else would be a critical bug in either Miri or the compiler.
But yes, it only does so on the code paths you are actually testing. And yes, it
is very slow.
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 16:00 ` Steven Rostedt
@ 2025-02-26 16:42 ` James Bottomley
2025-02-26 16:47 ` Kent Overstreet
0 siblings, 1 reply; 194+ messages in thread
From: James Bottomley @ 2025-02-26 16:42 UTC (permalink / raw)
To: Steven Rostedt
Cc: Greg KH, Miguel Ojeda, Ventura Jack, Kent Overstreet,
H. Peter Anvin, Alice Ryhl, Linus Torvalds, Gary Guo, airlied,
boqun.feng, david.laight.linux, hch, ksummit, linux-kernel,
rust-for-linux, Ralf Jung
On Wed, 2025-02-26 at 11:00 -0500, Steven Rostedt wrote:
> On Wed, 26 Feb 2025 09:45:53 -0500
> James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
>
> > > From some other rust boot system work, I know that the quality of
> > > a
> > simple backtrace in rust where you just pick out addresses you
> > think you know in the stack and print them as symbols can sometimes
> > be rather misleading, which is why you need an unwinder to tell you
> > exactly what happened.
>
> One thing I learned at GNU Cauldron last year is that the kernel
> folks use the term "unwinding" incorrectly. Unwinding to the compiler
> folks mean having full access to all the frames and variables and
> what not for all the previous functions.
>
> What the kernel calls "unwinding" the compiler folks call "stack
> walking". That's a much easier task than doing an unwinding, and that
> is usually all we need when something crashes.
Well, that's not the whole story. We do have at least three unwinders
in the code base. You're right in that we don't care about anything
other than the call trace embedded in the frame, so a lot of unwind
debug information isn't relevant to us and the unwinders ignore it. In
the old days we just used to use the GUESS unwinder which looks for
addresses inside the text segment in the stack and prints them in
order. Now we (at least on amd64) use the ORC unwinder because it
gives better traces:
https://docs.kernel.org/arch/x86/orc-unwinder.html
while we don't need full unwind in rust, we do need enough to get
traces working.
Regards,
James
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 16:42 ` James Bottomley
@ 2025-02-26 16:47 ` Kent Overstreet
2025-02-26 16:57 ` Steven Rostedt
0 siblings, 1 reply; 194+ messages in thread
From: Kent Overstreet @ 2025-02-26 16:47 UTC (permalink / raw)
To: James Bottomley
Cc: Steven Rostedt, Greg KH, Miguel Ojeda, Ventura Jack,
H. Peter Anvin, Alice Ryhl, Linus Torvalds, Gary Guo, airlied,
boqun.feng, david.laight.linux, hch, ksummit, linux-kernel,
rust-for-linux, Ralf Jung
On Wed, Feb 26, 2025 at 11:42:41AM -0500, James Bottomley wrote:
> On Wed, 2025-02-26 at 11:00 -0500, Steven Rostedt wrote:
> > On Wed, 26 Feb 2025 09:45:53 -0500
> > James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> >
> > > > From some other rust boot system work, I know that the quality of
> > > > a
> > > simple backtrace in rust where you just pick out addresses you
> > > think you know in the stack and print them as symbols can sometimes
> > > be rather misleading, which is why you need an unwinder to tell you
> > > exactly what happened.
> >
> > One thing I learned at GNU Cauldron last year is that the kernel
> > folks use the term "unwinding" incorrectly. Unwinding to the compiler
> > folks mean having full access to all the frames and variables and
> > what not for all the previous functions.
> >
> > What the kernel calls "unwinding" the compiler folks call "stack
> > walking". That's a much easier task than doing an unwinding, and that
> > is usually all we need when something crashes.
>
> Well, that's not the whole story. We do have at least three unwinders
> in the code base. You're right in that we don't care about anything
> other than the call trace embedded in the frame, so a lot of unwind
> debug information isn't relevant to us and the unwinders ignore it. In
> the old days we just used to use the GUESS unwinder which looks for
> addresses inside the text segment in the stack and prints them in
> order. Now we (at least on amd64) use the ORC unwinder because it
> gives better traces:
>
> https://docs.kernel.org/arch/x86/orc-unwinder.html
More accurate perhaps, but I still don't see it working reliably - I'm x
still having to switch all my test setups (and users) to frame pointers
if I want to be able to debug reliably.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 16:10 ` Ralf Jung
@ 2025-02-26 16:50 ` Ventura Jack
2025-02-26 21:39 ` Ralf Jung
0 siblings, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-26 16:50 UTC (permalink / raw)
To: Ralf Jung
Cc: Alice Ryhl, Linus Torvalds, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
On Wed, Feb 26, 2025 at 9:10 AM Ralf Jung <post@ralfj.de> wrote:
>
> Hi,
>
> >> [Omitted] (However, verification tools are
> >> in the works as well, and thanks to Miri we have a very good idea of what
> >> exactly it is that these tools have to check for.) [Omitted]
> >
> > Verification as in static verification? That is some interesting and
> > exciting stuff if so.
>
> Yes. There's various projects, from bounded model checkers (Kani) that can
> "only" statically guarantee "all executions that run loops at most N times are
> fine" to full-fledged static verification tools (Gillian-Rust, VeriFast, Verus,
> Prusti, RefinedRust -- just to mention the ones that support unsafe code). None
> of the latter tools is production-ready yet, and some will always stay research
> prototypes, but there's a lot of work going on, and having a precise model of
> the entire Abstract Machine that is blessed by the compiler devs (i.e., Miri) is
> a key part for this to work. It'll be even better when this Abstract Machine
> exists not just implicitly in Miri but explicitly in a Rust Specification, and
> is subject to stability guarantees -- and we'll get there, but it'll take some
> more time. :)
>
> Kind regards,
> Ralf
>
Thank you for the answer. Almost all of those projects look active,
though Prusti's GitHub repository has not had commit activity for many
months. Do you know if any of the projects are using stacked borrows
or tree borrows yet? Gillian-Rust does not seem to use stacked borrows
or tree borrows. Verus mentions stacked borrows in "related work" in
one paper. On the other hand, RefinedRust reuses code from Miri.
It does sound exciting. It reminds me in some ways of Scala. Though
also like advanced research where some practical goals for the
language (Rust) have not yet been reached.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 16:47 ` Kent Overstreet
@ 2025-02-26 16:57 ` Steven Rostedt
2025-02-26 17:41 ` Kent Overstreet
0 siblings, 1 reply; 194+ messages in thread
From: Steven Rostedt @ 2025-02-26 16:57 UTC (permalink / raw)
To: Kent Overstreet
Cc: James Bottomley, Greg KH, Miguel Ojeda, Ventura Jack,
H. Peter Anvin, Alice Ryhl, Linus Torvalds, Gary Guo, airlied,
boqun.feng, david.laight.linux, hch, ksummit, linux-kernel,
rust-for-linux, Ralf Jung, Josh Poimboeuf
[ Adding Josh ]
On Wed, 26 Feb 2025 11:47:09 -0500
Kent Overstreet <kent.overstreet@linux.dev> wrote:
> On Wed, Feb 26, 2025 at 11:42:41AM -0500, James Bottomley wrote:
> > On Wed, 2025-02-26 at 11:00 -0500, Steven Rostedt wrote:
> > > On Wed, 26 Feb 2025 09:45:53 -0500
> > > James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> > >
> > > > > From some other rust boot system work, I know that the quality of
> > > > > a
> > > > simple backtrace in rust where you just pick out addresses you
> > > > think you know in the stack and print them as symbols can sometimes
> > > > be rather misleading, which is why you need an unwinder to tell you
> > > > exactly what happened.
> > >
> > > One thing I learned at GNU Cauldron last year is that the kernel
> > > folks use the term "unwinding" incorrectly. Unwinding to the compiler
> > > folks mean having full access to all the frames and variables and
> > > what not for all the previous functions.
> > >
> > > What the kernel calls "unwinding" the compiler folks call "stack
> > > walking". That's a much easier task than doing an unwinding, and that
> > > is usually all we need when something crashes.
> >
> > Well, that's not the whole story. We do have at least three unwinders
> > in the code base. You're right in that we don't care about anything
> > other than the call trace embedded in the frame, so a lot of unwind
> > debug information isn't relevant to us and the unwinders ignore it. In
> > the old days we just used to use the GUESS unwinder which looks for
> > addresses inside the text segment in the stack and prints them in
> > order. Now we (at least on amd64) use the ORC unwinder because it
> > gives better traces:
> >
> > https://docs.kernel.org/arch/x86/orc-unwinder.html
Note, both myself and Josh (creator of ORC) were arguing with the GCC folks
until we all figured out we were talking about two different things. Once
they said "Oh, you mean stack walking. Yeah that can work" and the
arguments stopped. Lessons learned that day was that compiler folks take
the term "unwinding" to mean much more than kernel folks, and since we have
compiler folks on this thread, I'd figure I would point that out.
We still use the term "unwinder" in the kernel, but during the sframe
meetings, we need to point out that we all just care about stack walking.
>
> More accurate perhaps, but I still don't see it working reliably - I'm x
> still having to switch all my test setups (and users) to frame pointers
> if I want to be able to debug reliably.
Really? The entire point of ORC was to have accurate stack traces so that
live kernel patching can work. If there's something incorrect, then please
report it.
-- Steve
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 14:26 ` James Bottomley
2025-02-26 14:37 ` Ralf Jung
2025-02-26 14:39 ` Greg KH
@ 2025-02-26 17:11 ` Miguel Ojeda
2025-02-26 17:42 ` Kent Overstreet
2 siblings, 1 reply; 194+ messages in thread
From: Miguel Ojeda @ 2025-02-26 17:11 UTC (permalink / raw)
To: James Bottomley
Cc: Ventura Jack, Kent Overstreet, H. Peter Anvin, Alice Ryhl,
Linus Torvalds, Gary Guo, airlied, boqun.feng, david.laight.linux,
ej, gregkh, hch, ksummit, linux-kernel, rust-for-linux, Ralf Jung
On Wed, Feb 26, 2025 at 3:26 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> On Wed, 2025-02-26 at 14:53 +0100, Miguel Ojeda wrote:
> > On Wed, Feb 26, 2025 at 2:03 PM Ventura Jack
> > <venturajack85@gmail.com> wrote:
> [...]
> > > Exception/unwind safety may be another subject that increases
> > > the difficulty of writing unsafe Rust.
> >
> > Note that Rust panics in the kernel do not unwind.
>
> I presume someone is working on this, right? While rust isn't
> pervasive enough yet for this to cause a problem, dumping a backtrace
> is one of the key things we need to diagnose how something went wrong,
> particularly for user bug reports where they can't seem to bisect.
Ventura Jack was talking about "exception safety", referring to the
complexity of having to take into account additional execution exit
paths that run destructors in the middle of doing something else and
the possibility of those exceptions getting caught. This does affect
Rust when built with the unwinding "panic mode", similar to C++.
In the kernel, we build Rust in its aborting "panic mode", which
simplifies reasoning about it, because destructors do not run and you
cannot catch exceptions (you could still cause mischief, though,
because it does not necessarily kill the kernel entirely, since it
maps to `BUG()` currently).
In other words, Ventura Jack and my message were not referring to
walking the frames for backtraces.
I hope that clarifies.
Cheers,
Miguel
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 16:57 ` Steven Rostedt
@ 2025-02-26 17:41 ` Kent Overstreet
2025-02-26 17:47 ` Steven Rostedt
0 siblings, 1 reply; 194+ messages in thread
From: Kent Overstreet @ 2025-02-26 17:41 UTC (permalink / raw)
To: Steven Rostedt
Cc: James Bottomley, Greg KH, Miguel Ojeda, Ventura Jack,
H. Peter Anvin, Alice Ryhl, Linus Torvalds, Gary Guo, airlied,
boqun.feng, david.laight.linux, hch, ksummit, linux-kernel,
rust-for-linux, Ralf Jung, Josh Poimboeuf
On Wed, Feb 26, 2025 at 11:57:26AM -0500, Steven Rostedt wrote:
>
> [ Adding Josh ]
>
> On Wed, 26 Feb 2025 11:47:09 -0500
> Kent Overstreet <kent.overstreet@linux.dev> wrote:
>
> > On Wed, Feb 26, 2025 at 11:42:41AM -0500, James Bottomley wrote:
> > > On Wed, 2025-02-26 at 11:00 -0500, Steven Rostedt wrote:
> > > > On Wed, 26 Feb 2025 09:45:53 -0500
> > > > James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> > > >
> > > > > > From some other rust boot system work, I know that the quality of
> > > > > > a
> > > > > simple backtrace in rust where you just pick out addresses you
> > > > > think you know in the stack and print them as symbols can sometimes
> > > > > be rather misleading, which is why you need an unwinder to tell you
> > > > > exactly what happened.
> > > >
> > > > One thing I learned at GNU Cauldron last year is that the kernel
> > > > folks use the term "unwinding" incorrectly. Unwinding to the compiler
> > > > folks mean having full access to all the frames and variables and
> > > > what not for all the previous functions.
> > > >
> > > > What the kernel calls "unwinding" the compiler folks call "stack
> > > > walking". That's a much easier task than doing an unwinding, and that
> > > > is usually all we need when something crashes.
> > >
> > > Well, that's not the whole story. We do have at least three unwinders
> > > in the code base. You're right in that we don't care about anything
> > > other than the call trace embedded in the frame, so a lot of unwind
> > > debug information isn't relevant to us and the unwinders ignore it. In
> > > the old days we just used to use the GUESS unwinder which looks for
> > > addresses inside the text segment in the stack and prints them in
> > > order. Now we (at least on amd64) use the ORC unwinder because it
> > > gives better traces:
> > >
> > > https://docs.kernel.org/arch/x86/orc-unwinder.html
>
> Note, both myself and Josh (creator of ORC) were arguing with the GCC folks
> until we all figured out we were talking about two different things. Once
> they said "Oh, you mean stack walking. Yeah that can work" and the
> arguments stopped. Lessons learned that day was that compiler folks take
> the term "unwinding" to mean much more than kernel folks, and since we have
> compiler folks on this thread, I'd figure I would point that out.
>
> We still use the term "unwinder" in the kernel, but during the sframe
> meetings, we need to point out that we all just care about stack walking.
>
> >
> > More accurate perhaps, but I still don't see it working reliably - I'm x
> > still having to switch all my test setups (and users) to frame pointers
> > if I want to be able to debug reliably.
>
> Really? The entire point of ORC was to have accurate stack traces so that
> live kernel patching can work. If there's something incorrect, then please
> report it.
It's been awhile since I've looked at one, I've been just automatically
switching back to frame pointers for awhile, but - I never saw
inaccurate backtraces, just failure to generate a backtrace - if memory
serves.
When things die down a bit more I might be able to switch back and see
if I get something reportable, I'm still in bug crunching mode :)
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 17:11 ` Miguel Ojeda
@ 2025-02-26 17:42 ` Kent Overstreet
0 siblings, 0 replies; 194+ messages in thread
From: Kent Overstreet @ 2025-02-26 17:42 UTC (permalink / raw)
To: Miguel Ojeda
Cc: James Bottomley, Ventura Jack, H. Peter Anvin, Alice Ryhl,
Linus Torvalds, Gary Guo, airlied, boqun.feng, david.laight.linux,
ej, gregkh, hch, ksummit, linux-kernel, rust-for-linux, Ralf Jung
On Wed, Feb 26, 2025 at 06:11:53PM +0100, Miguel Ojeda wrote:
> On Wed, Feb 26, 2025 at 3:26 PM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> >
> > On Wed, 2025-02-26 at 14:53 +0100, Miguel Ojeda wrote:
> > > On Wed, Feb 26, 2025 at 2:03 PM Ventura Jack
> > > <venturajack85@gmail.com> wrote:
> > [...]
> > > > Exception/unwind safety may be another subject that increases
> > > > the difficulty of writing unsafe Rust.
> > >
> > > Note that Rust panics in the kernel do not unwind.
> >
> > I presume someone is working on this, right? While rust isn't
> > pervasive enough yet for this to cause a problem, dumping a backtrace
> > is one of the key things we need to diagnose how something went wrong,
> > particularly for user bug reports where they can't seem to bisect.
>
> Ventura Jack was talking about "exception safety", referring to the
> complexity of having to take into account additional execution exit
> paths that run destructors in the middle of doing something else and
> the possibility of those exceptions getting caught. This does affect
> Rust when built with the unwinding "panic mode", similar to C++.
>
> In the kernel, we build Rust in its aborting "panic mode", which
> simplifies reasoning about it, because destructors do not run and you
> cannot catch exceptions (you could still cause mischief, though,
> because it does not necessarily kill the kernel entirely, since it
> maps to `BUG()` currently).
>
> In other words, Ventura Jack and my message were not referring to
> walking the frames for backtraces.
>
> I hope that clarifies.
However, if Rust in the kernel does get full unwinding, that opens up
interesting possibilities - Rust with "no unsafe, whitelisted list of
dependencies" could potentially replace BPF with something _much_ more
ergonomic and practical.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 17:41 ` Kent Overstreet
@ 2025-02-26 17:47 ` Steven Rostedt
2025-02-26 22:07 ` Josh Poimboeuf
2025-03-02 12:19 ` David Laight
0 siblings, 2 replies; 194+ messages in thread
From: Steven Rostedt @ 2025-02-26 17:47 UTC (permalink / raw)
To: Kent Overstreet
Cc: James Bottomley, Greg KH, Miguel Ojeda, Ventura Jack,
H. Peter Anvin, Alice Ryhl, Linus Torvalds, Gary Guo, airlied,
boqun.feng, david.laight.linux, hch, ksummit, linux-kernel,
rust-for-linux, Ralf Jung, Josh Poimboeuf
On Wed, 26 Feb 2025 12:41:30 -0500
Kent Overstreet <kent.overstreet@linux.dev> wrote:
> It's been awhile since I've looked at one, I've been just automatically
> switching back to frame pointers for awhile, but - I never saw
> inaccurate backtraces, just failure to generate a backtrace - if memory
> serves.
OK, maybe if the bug was bad enough, it couldn't get access to the ORC
tables for some reason. Not having a backtrace on crash is not as bad as
incorrect back traces, as the former is happening when the system is dieing
and live kernel patching doesn't help with that.
>
> When things die down a bit more I might be able to switch back and see
> if I get something reportable, I'm still in bug crunching mode :)
Appreciate it.
Thanks,
-- Steve
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 15:21 ` Ventura Jack
2025-02-26 16:06 ` Ralf Jung
@ 2025-02-26 17:49 ` Miguel Ojeda
2025-02-26 18:36 ` Ventura Jack
1 sibling, 1 reply; 194+ messages in thread
From: Miguel Ojeda @ 2025-02-26 17:49 UTC (permalink / raw)
To: Ventura Jack
Cc: Alice Ryhl, Linus Torvalds, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux, Ralf Jung
On Wed, Feb 26, 2025 at 4:21 PM Ventura Jack <venturajack85@gmail.com> wrote:
>
> I am not certain that I understand either you or Alice correctly.
> But Ralf Jung or others will probably help clarify matters.
When you said:
"In a preprint paper, both stacked borrows and tree burrows
are as far as I can tell described as having false positives."
I think that you mean to say that the new model allows/rejects
something that unsafe code out there wants/doesn't want to do. That is
fine and expected, although of course it would be great to have a
model that is simple, fits perfectly all the code out there and
optimizes well.
However, that is very different from what you say afterwards:
"Are you sure that both stacked borrows and tree borrows are
meant to be full models with no false positives and false negatives,"
Which I read as you thinking that the new model doesn't say whether a
given program has UB or not.
Thus I think you are using the phrase "false positives" to refer to
two different things.
> You are right that I should have written "currently tied", not "tied", and
> I do hope and assume that the work with aliasing will result
> in some sorts of specifications.
>
> The language reference directly referring to LLVM's aliasing rules,
> and that the preprint paper also refers to LLVM, does indicate a tie-in,
> even if that tie-in is incidental and not desired. With more than one
> major compiler, such tie-ins are easier to avoid.
Ralf, who is pretty much the top authority on this as far as I
understand, already clarified this:
"we absolutely do *not* want Rust to be tied to LLVM's aliasing rules"
The paper mentioning LLVM to explain something does not mean the model
is tied to LLVM.
And the Rust reference, which you quote, is not the Rust specification
-- not yet at least. From its introduction:
"should not be taken as a specification for the Rust language"
When the Rust specification is finally published, if they still refer
to LLVM (in a normative way), then we could say it is tied, yes.
Cheers,
Miguel
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 13:54 ` Ralf Jung
@ 2025-02-26 17:59 ` Linus Torvalds
2025-02-26 19:01 ` Paul E. McKenney
` (3 more replies)
0 siblings, 4 replies; 194+ messages in thread
From: Linus Torvalds @ 2025-02-26 17:59 UTC (permalink / raw)
To: Ralf Jung
Cc: Alice Ryhl, Ventura Jack, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 at 05:54, Ralf Jung <post@ralfj.de> wrote:
>
> The only approach we know that we can actually
> pull through systematically (in the sense of "at least in principle, we can
> formally prove this correct") is to define the "visible behavior" of the source
> program, the "visible behavior" of the generated assembly, and promise that they
> are the same.
That's literally what I ask for with that "naive" code generation, you
just stated it much better.
I think some of the C standards problems came from the fact that at
some point the standards people decided that the only way to specify
the language was from a high-level language _syntax_ standpoint.
Which is odd, because a lot of the original C semantics came from
basically a "this is how the result works". It's where a lot of the
historical C architecture-defined (and undefined) details come from:
things like how integer division rounding happens, how shifts bigger
than the word size are undefined, etc. But most tellingly, it's how
"volatile" was defined.
I suspect that what happened is that the C++ people hated the volatile
definition *so* much (because of how they changed what an "access"
means), that they then poisoned the C standards body against
specifying behavior in terms of how the code *acts*, and made all
subsequent C standards rules be about some much more abstract
higher-level model that could not ever talk about actual code
generation, only about syntax.
And that was a fundamental shift, and not a good one.
It caused basically insurmountable problems for the memory model
descriptions. Paul McKenney tried to introduce the RCU memory model
requirements into the C memory model discussion, and it was entirely
impossible. You can't describe memory models in terms of types and
syntax abstractions. You *have* to talk about what it means for the
actual code generation.
The reason? The standards people wanted to describe the memory model
not at a "this is what the program does" level, but at the "this is
the type system and the syntactic rules" level. So the RCU accesses
had to be defined in terms of the type system, but the actual language
rules for the RCU accesses are about how the data is then used after
the load.
(We have various memory model documentation in
tools/memory-model/Documentation and that goes into the RCU rules in
*much* more detail, but simplified and much shortened: a
"rcu_dereference()" could be seen as a much weaker form of
"load_acquire": it's a barrier only to accesses that are
data-dependencies, and if you turn a data dependency into a control
dependency you have to then add specific barriers.
When a variable access is no longer about "this loads this value from
memory", but is something much more high-level, trying to describe
that is complete chaos. Plus the description gets to be so abstract
that nobody understands it - neither the user of the language nor the
person implementing the compiler.
So I am personally - after having seen that complete failure as a
by-stander - 100% convinced that the semantics of a language *should*
be defined in terms of behavior, not in terms of syntax and types.
Sure, you have to describe the syntax and type system *too*, but then
you use those to explain the behavior and use the behavior to explain
what the allowable optimizations are.
> So the Rust compiler promises nothing about the shape of the assembly
> you will get, only about its "visible" behavior
Oh, absolutely. That should be the basic rule of optimization: you can
do anything AT ALL, as long as the visible behavior is the same.
> (and which exact memory access occurs when is generally
> not considered "visible").
.. but this really has to be part of it. It's obviously part of it
when there might be aliases, but it's also part of it when there is
_any_ question about threading and/or memory ordering.
And just as an example: threading fundamentally introduces a notion of
"aliasing" because different *threads* can access the same location
concurrently. And that actually has real effects that a good language
absolutely needs to deal with, even when there is absolutely *no*
memory ordering or locking in the source code.
For example, it means that you cannot ever widen stores unless you
know that the data you are touching is thread-local. Because the bytes
*next* to you may not be things that you control.
It also *should* mean that a language must never *ever* rematerialize
memory accesses (again, unless thread-local).
Seriously - I consider memory access rematerialization a huge bug, and
both a security and correctness issue. I think it should be expressly
forbidden in *any* language that claims to be reliablel.
Rematerialization of memory accesses is a bug, and is *hugely* visible
in the end result. It introduces active security issues and makes
TOCTOU (Time-of-check to time-of-use) a much bigger problem than it
needs to be.
So memory accesses need to be part of the "visible" rules.
I claim that C got that right with "volatile". What C got wrong was to
move away from that concept, and _only_ have "volatile" defined in
those terms. Because "volatile" on its own is not very good (and that
"not very good" has nothing to do with the mess that C++ made of it).
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 16:32 ` Ralf Jung
@ 2025-02-26 18:09 ` Ventura Jack
2025-02-26 22:28 ` Ralf Jung
2025-02-26 19:07 ` Martin Uecker
1 sibling, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-26 18:09 UTC (permalink / raw)
To: Ralf Jung
Cc: Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
On Wed, Feb 26, 2025 at 9:32 AM Ralf Jung <post@ralfj.de> wrote:
>
> Hi VJ,
>
> >>
> >>> - Rust has not defined its aliasing model.
> >>
> >> Correct. But then, neither has C. The C aliasing rules are described in English
> >> prose that is prone to ambiguities and misintepretation. The strict aliasing
> >> analysis implemented in GCC is not compatible with how most people read the
> >> standard (https://bugs.llvm.org/show_bug.cgi?id=21725). There is no tool to
> >> check whether code follows the C aliasing rules, and due to the aforementioned
> >> ambiguities it would be hard to write such a tool and be sure it interprets the
> >> standard the same way compilers do.
> >>
> >> For Rust, we at least have two candidate models that are defined in full
> >> mathematical rigor, and a tool that is widely used in the community, ensuring
> >> the models match realistic use of Rust.
> >
> > But it is much more significant for Rust than for C, at least in
> > regards to C's "restrict", since "restrict" is rarely used in C, while
> > aliasing optimizations are pervasive in Rust. For C's "strict aliasing",
> > I think you have a good point, but "strict aliasing" is still easier to
> > reason about in my opinion than C's "restrict". Especially if you
> > never have any type casts of any kind nor union type punning.
>
> Is it easier to reason about? At least GCC got it wrong, making no-aliasing
> assumptions that are not justified by most people's interpretation of the model:
> https://bugs.llvm.org/show_bug.cgi?id=21725
> (But yes that does involve unions.)
For that specific bug issue, there is a GitHub issue for it.
https://github.com/llvm/llvm-project/issues/22099
And the original test case appears to have been a compiler bug
and have been fixed, at least when I run on Godbolt against
a recent version of Clang. Another comment says.
"The original testcase seems to be fixed now but replacing
the union by allocated memory makes the problem come back."
And the new test case the user mentions involves a void pointer.
I wonder if they could close the issue and open a new issue
in its stead that only contains the currently relevant compiler
bugs if there are any. And have this new issue refer to the old
issue. They brought the old issue over from the old bug tracker.
But I do not have a good handle on that issue.
Unions in C, C++ and Rust (not Rust "enum"/tagged union) are
generally sharp. In Rust, it requires unsafe Rust to read from
a union.
> > [Omitted]
>
> Okay, fair. But it is easy to misunderstand the other items in your list in
> isolation.
I agree, I should have made it unambiguous and made each item
not require the context of other items, or have made the
dependencies between items clearer, or some other way.
I remember not liking the way I organized it, but did not
improve it before sending, apologies.
> >>
> >> [Omitted].
> >
> > I do not believe that you are correct when you write:
> >
> > "Unlike sanitizers, Miri can actually catch everything."
> >
> > Critically and very importantly, unless I am mistaken about MIRI, and
> > similar to sanitizers, MIRI only checks with runtime tests. That means
> > that MIRI will not catch any undefined behavior that a test does
> > not encounter. If a project's test coverage is poor, MIRI will not
> > check a lot of the code when run with those tests. Please do
> > correct me if I am mistaken about this. I am guessing that you
> > meant this as well, but I do not get the impression that it is
> > clear from your post.
>
> Okay, I may have misunderstood what you mean by "catch everything". All
> sanitizers miss some UB that actually occurs in the given execution. This is
> because they are inserted in the pipeline after a bunch of compiler-specific
> choices have already been made, potentially masking some UB. I'm not aware of a
> sanitizer for sequence point violations. I am not aware of a sanitizer for
> strict aliasing or restrict. I am not aware of a sanitizer that detects UB due
> to out-of-bounds pointer arithmetic (I am not talking about OOB accesses; just
> the arithmetic is already UB), or UB due to violations of "pointer lifetime end
> zapping", or UB due to comparing pointers derived from different allocations. Is
> there a sanitizer that correctly models what exactly happens when a struct with
> padding gets copied? The padding must be reset to be considered "uninitialized",
> even if the entire struct was zero-initialized before. Most compilers implement
> such a copy as memcpy; a sanitizer would then miss this UB.
>
> In contrast, Miri checks for all the UB that is used anywhere in the Rust
> compiler -- everything else would be a critical bug in either Miri or the compiler.
> But yes, it only does so on the code paths you are actually testing. And yes, it
> is very slow.
I may have been ambiguous again, or unclear or misleading,
I need to work on that.
The description you have here indicates that Miri is in many ways
significantly better than sanitizers in general.
I think it is more accurate of me to say that Miri in some aspects
shares some of the advantages and disadvantages of sanitizers,
and in other aspects is much better than sanitizers.
Is Miri the only one of its kind in the programming world?
There are not many system languages in mass use, and
those are the languages that first and foremost deal
with undefined behavior. That would make Miri extra impressive.
>
There are some issues in Rust that I am curious as to
your views on. rustc or the Rust language has some type
system holes, which still causes problems for rustc and
their developers.
https://github.com/lcnr/solver-woes/issues/1
https://github.com/rust-lang/rust/issues/75992
Those kinds of issues seem difficult to solve.
In your opinion, is it accurate to say that the Rust language
developers are working on a new type system for
Rust-the-language and a new solver for rustc, and that
they are trying to make the new type system and new solver
as backwards compatible as possible?
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 17:49 ` Miguel Ojeda
@ 2025-02-26 18:36 ` Ventura Jack
0 siblings, 0 replies; 194+ messages in thread
From: Ventura Jack @ 2025-02-26 18:36 UTC (permalink / raw)
To: Miguel Ojeda
Cc: Alice Ryhl, Linus Torvalds, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux, Ralf Jung
On Wed, Feb 26, 2025 at 10:49 AM Miguel Ojeda
<miguel.ojeda.sandonis@gmail.com> wrote:
>
> On Wed, Feb 26, 2025 at 4:21 PM Ventura Jack <venturajack85@gmail.com> wrote:
> >
> > I am not certain that I understand either you or Alice correctly.
> > But Ralf Jung or others will probably help clarify matters.
>
> When you said:
>
> "In a preprint paper, both stacked borrows and tree burrows
> are as far as I can tell described as having false positives."
>
> I think that you mean to say that the new model allows/rejects
> something that unsafe code out there wants/doesn't want to do. That is
> fine and expected, although of course it would be great to have a
> model that is simple, fits perfectly all the code out there and
> optimizes well.
>
> However, that is very different from what you say afterwards:
>
> "Are you sure that both stacked borrows and tree borrows are
> meant to be full models with no false positives and false negatives,"
>
> Which I read as you thinking that the new model doesn't say whether a
> given program has UB or not.
>
> Thus I think you are using the phrase "false positives" to refer to
> two different things.
Ralf Jung explained matters well, I think I understood him. I found his
answer clearer than both your answers and Alice's on this topic.
> > You are right that I should have written "currently tied", not "tied", and
> > I do hope and assume that the work with aliasing will result
> > in some sorts of specifications.
> >
> > The language reference directly referring to LLVM's aliasing rules,
> > and that the preprint paper also refers to LLVM, does indicate a tie-in,
> > even if that tie-in is incidental and not desired. With more than one
> > major compiler, such tie-ins are easier to avoid.
>
> Ralf, who is pretty much the top authority on this as far as I
> understand, already clarified this:
>
> "we absolutely do *not* want Rust to be tied to LLVM's aliasing rules"
>
> The paper mentioning LLVM to explain something does not mean the model
> is tied to LLVM.
>
> And the Rust reference, which you quote, is not the Rust specification
> -- not yet at least. From its introduction:
>
> "should not be taken as a specification for the Rust language"
>
> When the Rust specification is finally published, if they still refer
> to LLVM (in a normative way), then we could say it is tied, yes.
"Currently tied" is accurate as far as I can tell. Ralf Jung
did explain it well. He suggested removing those links from the
Rust reference, as I understand him. But, importantly, having
more than 1 major Rust compiler would be very helpful in my opinion.
It is easy to accidentally or incidentally tie language definition
to compiler implementation, and having at least 2 major compilers
helps a lot with this. Ralf Jung described it as a risk of overfitting I think,
and that is a good description in my opinion.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 17:59 ` Linus Torvalds
@ 2025-02-26 19:01 ` Paul E. McKenney
2025-02-26 20:00 ` Martin Uecker
` (2 subsequent siblings)
3 siblings, 0 replies; 194+ messages in thread
From: Paul E. McKenney @ 2025-02-26 19:01 UTC (permalink / raw)
To: Linus Torvalds
Cc: Ralf Jung, Alice Ryhl, Ventura Jack, Kent Overstreet, Gary Guo,
airlied, boqun.feng, david.laight.linux, ej, gregkh, hch, hpa,
ksummit, linux-kernel, miguel.ojeda.sandonis, rust-for-linux
On Wed, Feb 26, 2025 at 09:59:41AM -0800, Linus Torvalds wrote:
> On Wed, 26 Feb 2025 at 05:54, Ralf Jung <post@ralfj.de> wrote:
> >
> > The only approach we know that we can actually
> > pull through systematically (in the sense of "at least in principle, we can
> > formally prove this correct") is to define the "visible behavior" of the source
> > program, the "visible behavior" of the generated assembly, and promise that they
> > are the same.
>
> That's literally what I ask for with that "naive" code generation, you
> just stated it much better.
>
> I think some of the C standards problems came from the fact that at
> some point the standards people decided that the only way to specify
> the language was from a high-level language _syntax_ standpoint.
>
> Which is odd, because a lot of the original C semantics came from
> basically a "this is how the result works". It's where a lot of the
> historical C architecture-defined (and undefined) details come from:
> things like how integer division rounding happens, how shifts bigger
> than the word size are undefined, etc. But most tellingly, it's how
> "volatile" was defined.
>
> I suspect that what happened is that the C++ people hated the volatile
> definition *so* much (because of how they changed what an "access"
> means), that they then poisoned the C standards body against
> specifying behavior in terms of how the code *acts*, and made all
> subsequent C standards rules be about some much more abstract
> higher-level model that could not ever talk about actual code
> generation, only about syntax.
Yes, they really do seem to want something that can be analyzed in a
self-contained manner, without all of the mathematical inconveniences
posed by real-world hardware. :-(
> And that was a fundamental shift, and not a good one.
>
> It caused basically insurmountable problems for the memory model
> descriptions. Paul McKenney tried to introduce the RCU memory model
> requirements into the C memory model discussion, and it was entirely
> impossible. You can't describe memory models in terms of types and
> syntax abstractions. You *have* to talk about what it means for the
> actual code generation.
My current thought is to take care of dependency ordering with our
current coding standards combined with external tools to check these
[1], but if anyone has a better idea, please do not keep it a secret!
Thanx, Paul
[1] https://people.kernel.org/paulmck/the-immanent-deprecation-of-memory_order_consume
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 16:32 ` Ralf Jung
2025-02-26 18:09 ` Ventura Jack
@ 2025-02-26 19:07 ` Martin Uecker
2025-02-26 19:23 ` Ralf Jung
1 sibling, 1 reply; 194+ messages in thread
From: Martin Uecker @ 2025-02-26 19:07 UTC (permalink / raw)
To: Ralf Jung, Ventura Jack
Cc: Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
Am Mittwoch, dem 26.02.2025 um 17:32 +0100 schrieb Ralf Jung:
> Hi VJ,
>
> > >
> > > > - Rust has not defined its aliasing model.
> > >
> > > Correct. But then, neither has C. The C aliasing rules are described in English
> > > prose that is prone to ambiguities and misintepretation. The strict aliasing
> > > analysis implemented in GCC is not compatible with how most people read the
> > > standard (https://bugs.llvm.org/show_bug.cgi?id=21725). There is no tool to
> > > check whether code follows the C aliasing rules, and due to the aforementioned
> > > ambiguities it would be hard to write such a tool and be sure it interprets the
> > > standard the same way compilers do.
> > >
> > > For Rust, we at least have two candidate models that are defined in full
> > > mathematical rigor, and a tool that is widely used in the community, ensuring
> > > the models match realistic use of Rust.
> >
> > But it is much more significant for Rust than for C, at least in
> > regards to C's "restrict", since "restrict" is rarely used in C, while
> > aliasing optimizations are pervasive in Rust. For C's "strict aliasing",
> > I think you have a good point, but "strict aliasing" is still easier to
> > reason about in my opinion than C's "restrict". Especially if you
> > never have any type casts of any kind nor union type punning.
>
> Is it easier to reason about? At least GCC got it wrong, making no-aliasing
> assumptions that are not justified by most people's interpretation of the model:
> https://bugs.llvm.org/show_bug.cgi?id=21725
> (But yes that does involve unions.)
Did you mean to say LLVM got this wrong? As far as I know,
the GCC TBBA code is more correct than LLVMs. It gets
type-changing stores correct that LLVM does not implement.
>
> > > > - The aliasing rules in Rust are possibly as hard or
> > > > harder than for C "restrict", and it is not possible to
> > > > opt out of aliasing in Rust, which is cited by some
> > > > as one of the reasons for unsafe Rust being
> > > > harder than C.
> > >
> > > That is not quite correct; it is possible to opt-out by using raw pointers.
> >
> > Again, I did have this list item:
> >
> > - Applies to certain pointer kinds in Rust, namely
> > Rust "references".
> > Rust pointer kinds:
> > https://doc.rust-lang.org/reference/types/pointer.html
> >
> > where I wrote that the aliasing rules apply to Rust "references".
>
> Okay, fair. But it is easy to misunderstand the other items in your list in
> isolation.
>
> >
> > > > the aliasing rules, may try to rely on MIRI. MIRI is
> > > > similar to a sanitizer for C, with similar advantages and
> > > > disadvantages. MIRI uses both the stacked borrow
> > > > and the tree borrow experimental research models.
> > > > MIRI, like sanitizers, does not catch everything, though
> > > > MIRI has been used to find undefined behavior/memory
> > > > safety bugs in for instance the Rust standard library.
> > >
> > > Unlike sanitizers, Miri can actually catch everything. However, since the exact
> > > details of what is and is not UB in Rust are still being worked out, we cannot
> > > yet make in good conscience a promise saying "Miri catches all UB". However, as
> > > the Miri README states:
> > > "To the best of our knowledge, all Undefined Behavior that has the potential to
> > > affect a program's correctness is being detected by Miri (modulo bugs), but you
> > > should consult the Reference for the official definition of Undefined Behavior.
> > > Miri will be updated with the Rust compiler to protect against UB as it is
> > > understood by the current compiler, but it makes no promises about future
> > > versions of rustc."
> > > See the Miri README (https://github.com/rust-lang/miri/?tab=readme-ov-file#miri)
> > > for further details and caveats regarding non-determinism.
> > >
> > > So, the situation for Rust here is a lot better than it is in C. Unfortunately,
> > > running kernel code in Miri is not currently possible; figuring out how to
> > > improve that could be an interesting collaboration.
> >
> > I do not believe that you are correct when you write:
> >
> > "Unlike sanitizers, Miri can actually catch everything."
> >
> > Critically and very importantly, unless I am mistaken about MIRI, and
> > similar to sanitizers, MIRI only checks with runtime tests. That means
> > that MIRI will not catch any undefined behavior that a test does
> > not encounter. If a project's test coverage is poor, MIRI will not
> > check a lot of the code when run with those tests. Please do
> > correct me if I am mistaken about this. I am guessing that you
> > meant this as well, but I do not get the impression that it is
> > clear from your post.
>
> Okay, I may have misunderstood what you mean by "catch everything". All
> sanitizers miss some UB that actually occurs in the given execution. This is
> because they are inserted in the pipeline after a bunch of compiler-specific
> choices have already been made, potentially masking some UB. I'm not aware of a
> sanitizer for sequence point violations. I am not aware of a sanitizer for
> strict aliasing or restrict. I am not aware of a sanitizer that detects UB due
> to out-of-bounds pointer arithmetic (I am not talking about OOB accesses; just
> the arithmetic is already UB), or UB due to violations of "pointer lifetime end
> zapping", or UB due to comparing pointers derived from different allocations. Is
> there a sanitizer that correctly models what exactly happens when a struct with
> padding gets copied? The padding must be reset to be considered "uninitialized",
> even if the entire struct was zero-initialized before. Most compilers implement
> such a copy as memcpy; a sanitizer would then miss this UB.
Note that reading padding bytes in C is not UB. Regarding
uninitialized variables, only automatic variables whose address
is not taken is UB in C. Although I suspect that compilers
have compliance isues here.
But yes, it sanitizers are still rather poor.
Martin
>
> In contrast, Miri checks for all the UB that is used anywhere in the Rust
> compiler -- everything else would be a critical bug in either Miri or the compiler.
> But yes, it only does so on the code paths you are actually testing. And yes, it
> is very slow.
>
> Kind regards,
> Ralf
>
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 19:07 ` Martin Uecker
@ 2025-02-26 19:23 ` Ralf Jung
2025-02-26 20:22 ` Martin Uecker
0 siblings, 1 reply; 194+ messages in thread
From: Ralf Jung @ 2025-02-26 19:23 UTC (permalink / raw)
To: Martin Uecker, Ventura Jack
Cc: Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
Hi all,
>>> But it is much more significant for Rust than for C, at least in
>>> regards to C's "restrict", since "restrict" is rarely used in C, while
>>> aliasing optimizations are pervasive in Rust. For C's "strict aliasing",
>>> I think you have a good point, but "strict aliasing" is still easier to
>>> reason about in my opinion than C's "restrict". Especially if you
>>> never have any type casts of any kind nor union type punning.
>>
>> Is it easier to reason about? At least GCC got it wrong, making no-aliasing
>> assumptions that are not justified by most people's interpretation of the model:
>> https://bugs.llvm.org/show_bug.cgi?id=21725
>> (But yes that does involve unions.)
>
> Did you mean to say LLVM got this wrong? As far as I know,
> the GCC TBBA code is more correct than LLVMs. It gets
> type-changing stores correct that LLVM does not implement.
Oh sorry, yes that is an LLVM bug link. I mixed something up. I could have sworn
there was a GCC bug, but I only found
<https://gcc.gnu.org/bugzilla/show_bug.cgi?id=57359> which has been fixed.
There was some problem with strong updates, i.e. the standard permits writes
through a `float*` pointer to memory that aliases an `int*`. The C aliasing
model only says it is UB to read data at the wrong type, but does not talk about
writes changing the type of memory.
Martin, maybe you remember better than me what that issue was / whether it is
still a problem?
>>>> So, the situation for Rust here is a lot better than it is in C. Unfortunately,
>>>> running kernel code in Miri is not currently possible; figuring out how to
>>>> improve that could be an interesting collaboration.
>>>
>>> I do not believe that you are correct when you write:
>>>
>>> "Unlike sanitizers, Miri can actually catch everything."
>>>
>>> Critically and very importantly, unless I am mistaken about MIRI, and
>>> similar to sanitizers, MIRI only checks with runtime tests. That means
>>> that MIRI will not catch any undefined behavior that a test does
>>> not encounter. If a project's test coverage is poor, MIRI will not
>>> check a lot of the code when run with those tests. Please do
>>> correct me if I am mistaken about this. I am guessing that you
>>> meant this as well, but I do not get the impression that it is
>>> clear from your post.
>>
>> Okay, I may have misunderstood what you mean by "catch everything". All
>> sanitizers miss some UB that actually occurs in the given execution. This is
>> because they are inserted in the pipeline after a bunch of compiler-specific
>> choices have already been made, potentially masking some UB. I'm not aware of a
>> sanitizer for sequence point violations. I am not aware of a sanitizer for
>> strict aliasing or restrict. I am not aware of a sanitizer that detects UB due
>> to out-of-bounds pointer arithmetic (I am not talking about OOB accesses; just
>> the arithmetic is already UB), or UB due to violations of "pointer lifetime end
>> zapping", or UB due to comparing pointers derived from different allocations. Is
>> there a sanitizer that correctly models what exactly happens when a struct with
>> padding gets copied? The padding must be reset to be considered "uninitialized",
>> even if the entire struct was zero-initialized before. Most compilers implement
>> such a copy as memcpy; a sanitizer would then miss this UB.
>
> Note that reading padding bytes in C is not UB. Regarding
> uninitialized variables, only automatic variables whose address
> is not taken is UB in C. Although I suspect that compilers
> have compliance isues here.
Hm, now I am wondering how clang is compliant here. To my knowledge, padding is
effectively reset to poison or undef on a copy (due to SROA), and clang marks
most integer types as "noundef", thus making it UB to ever have undef/poison in
such a value.
Kind regards,
Ralf
>
> But yes, it sanitizers are still rather poor.
>
> Martin
>
>>
>> In contrast, Miri checks for all the UB that is used anywhere in the Rust
>> compiler -- everything else would be a critical bug in either Miri or the compiler.
>> But yes, it only does so on the code paths you are actually testing. And yes, it
>> is very slow.
>>
>> Kind regards,
>> Ralf
>>
>
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 17:59 ` Linus Torvalds
2025-02-26 19:01 ` Paul E. McKenney
@ 2025-02-26 20:00 ` Martin Uecker
2025-02-26 21:14 ` Linus Torvalds
` (2 more replies)
2025-02-26 20:25 ` Kent Overstreet
2025-02-26 22:45 ` David Laight
3 siblings, 3 replies; 194+ messages in thread
From: Martin Uecker @ 2025-02-26 20:00 UTC (permalink / raw)
To: Linus Torvalds, Ralf Jung, Paul E. McKenney
Cc: Alice Ryhl, Ventura Jack, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
Am Mittwoch, dem 26.02.2025 um 09:59 -0800 schrieb Linus Torvalds:
> On Wed, 26 Feb 2025 at 05:54, Ralf Jung <post@ralfj.de> wrote:
> >
> > The only approach we know that we can actually
> > pull through systematically (in the sense of "at least in principle, we can
> > formally prove this correct") is to define the "visible behavior" of the source
> > program, the "visible behavior" of the generated assembly, and promise that they
> > are the same.
>
> That's literally what I ask for with that "naive" code generation, you
> just stated it much better.
The model is exactly the same as in C. One defines "observable
behavior" (to use C terminology) and compiler can do whatever it
wants as long as it preserves this.
Regarding undefined behavior, the idea the C standard had originally
was that compilers do something "naive" (e.g. what the architecture
does for some operation) or at least reasonable. This worked well
until modern optimizers started rather aggressively exploit
that there is UB. C and Rust are are in the same boat here.
As Ralf said, the difference is that Rust makes it much harder to
accidentally trigger UB.
>
> I think some of the C standards problems came from the fact that at
> some point the standards people decided that the only way to specify
> the language was from a high-level language _syntax_ standpoint.
>
> Which is odd, because a lot of the original C semantics came from
> basically a "this is how the result works". It's where a lot of the
> historical C architecture-defined (and undefined) details come from:
> things like how integer division rounding happens, how shifts bigger
> than the word size are undefined, etc. But most tellingly, it's how
> "volatile" was defined.
Compiler changed here, not the C standard. Of course, later the
compiler people in ISO WG14 may have pushed back against
*removing* UB or even clarifying things (e.g. TS 6010 is not in C23
because compiler people want to evaluate the impact on optimization
first)
>
> I suspect that what happened is that the C++ people hated the volatile
> definition *so* much (because of how they changed what an "access"
> means), that they then poisoned the C standards body against
> specifying behavior in terms of how the code *acts*, and made all
> subsequent C standards rules be about some much more abstract
> higher-level model that could not ever talk about actual code
> generation, only about syntax.
At least since C89 the model did not change.
For example, see "5.1.2.3 Program execution" in this draft
for C89:
https://www.open-std.org/JTC1/sc22/wg14/www/docs/n1256.pdf
C++ was not standardized until 1998.
> And that was a fundamental shift, and not a good one.
>
> It caused basically insurmountable problems for the memory model
> descriptions. Paul McKenney tried to introduce the RCU memory model
> requirements into the C memory model discussion, and it was entirely
> impossible. You can't describe memory models in terms of types and
> syntax abstractions. You *have* to talk about what it means for the
> actual code generation.
The C model for concurrency indeed came to C11 from C++. It is defined
in terms of accesses to memory objects and when those accesses
become visible to other threads.
>
> The reason? The standards people wanted to describe the memory model
> not at a "this is what the program does" level, but at the "this is
> the type system and the syntactic rules" level. So the RCU accesses
> had to be defined in terms of the type system, but the actual language
> rules for the RCU accesses are about how the data is then used after
> the load.
If your point is that this should be phrased in terms of atomic
accesses instead of accesses to atomic objects, then I absolutely
agree with you. This is something I tried to get fixed, but it
is difficult. The concurrency work mostly happens in WG21
and not WG14.
But still, the fundamental definition of the model is in terms
of accesses and when those become visible to other threads, and
not in terms of syntax and types.
>
> (We have various memory model documentation in
> tools/memory-model/Documentation and that goes into the RCU rules in
> *much* more detail, but simplified and much shortened: a
> "rcu_dereference()" could be seen as a much weaker form of
> "load_acquire": it's a barrier only to accesses that are
> data-dependencies, and if you turn a data dependency into a control
> dependency you have to then add specific barriers.
>
> When a variable access is no longer about "this loads this value from
> memory", but is something much more high-level, trying to describe
> that is complete chaos. Plus the description gets to be so abstract
> that nobody understands it - neither the user of the language nor the
> person implementing the compiler.
>
> So I am personally - after having seen that complete failure as a
> by-stander - 100% convinced that the semantics of a language *should*
> be defined in terms of behavior, not in terms of syntax and types.
> Sure, you have to describe the syntax and type system *too*, but then
> you use those to explain the behavior and use the behavior to explain
> what the allowable optimizations are.
>
> > So the Rust compiler promises nothing about the shape of the assembly
> > you will get, only about its "visible" behavior
>
> Oh, absolutely. That should be the basic rule of optimization: you can
> do anything AT ALL, as long as the visible behavior is the same.
>
> > (and which exact memory access occurs when is generally
> > not considered "visible").
>
> .. but this really has to be part of it. It's obviously part of it
> when there might be aliases, but it's also part of it when there is
> _any_ question about threading and/or memory ordering.
>
> And just as an example: threading fundamentally introduces a notion of
> "aliasing" because different *threads* can access the same location
> concurrently. And that actually has real effects that a good language
> absolutely needs to deal with, even when there is absolutely *no*
> memory ordering or locking in the source code.
>
> For example, it means that you cannot ever widen stores unless you
> know that the data you are touching is thread-local. Because the bytes
> *next* to you may not be things that you control.
>
> It also *should* mean that a language must never *ever* rematerialize
> memory accesses (again, unless thread-local).
>
> Seriously - I consider memory access rematerialization a huge bug, and
> both a security and correctness issue. I think it should be expressly
> forbidden in *any* language that claims to be reliablel.
> Rematerialization of memory accesses is a bug, and is *hugely* visible
> in the end result. It introduces active security issues and makes
> TOCTOU (Time-of-check to time-of-use) a much bigger problem than it
> needs to be.
Rematerialization or widening is essentially forbidden by
the C++ / C memory model.
>
> So memory accesses need to be part of the "visible" rules.
>
> I claim that C got that right with "volatile". What C got wrong was to
> move away from that concept, and _only_ have "volatile" defined in
> those terms. Because "volatile" on its own is not very good (and that
> "not very good" has nothing to do with the mess that C++ made of it).
I don't get your point. The compiler needs to preserve
observable behavior (which includes volatile accesses), while
the concurrency model is defined in terms of visibility of
stored values as seen by loads from other threads. This
visibility does not imply observable behavior, so all non-volatile
accesses do not have to be preserved by optimizations. Still this
model fundamentally constrains the optimization, e.g. by ruling
out the widening stores you mention above. I think this is
basically how this *has* to work, or at least I do not see how
this can be done differently.
I think C++ messed up a lot (including time-travel UB, uninitialized
variables, aliasing ules and much more), but I do not see
the problem here.
Martin
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 19:23 ` Ralf Jung
@ 2025-02-26 20:22 ` Martin Uecker
0 siblings, 0 replies; 194+ messages in thread
From: Martin Uecker @ 2025-02-26 20:22 UTC (permalink / raw)
To: Ralf Jung, Ventura Jack
Cc: Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
Am Mittwoch, dem 26.02.2025 um 20:23 +0100 schrieb Ralf Jung:
> Hi all,
>
> > > > But it is much more significant for Rust than for C, at least in
> > > > regards to C's "restrict", since "restrict" is rarely used in C, while
> > > > aliasing optimizations are pervasive in Rust. For C's "strict aliasing",
> > > > I think you have a good point, but "strict aliasing" is still easier to
> > > > reason about in my opinion than C's "restrict". Especially if you
> > > > never have any type casts of any kind nor union type punning.
> > >
> > > Is it easier to reason about? At least GCC got it wrong, making no-aliasing
> > > assumptions that are not justified by most people's interpretation of the model:
> > > https://bugs.llvm.org/show_bug.cgi?id=21725
> > > (But yes that does involve unions.)
> >
> > Did you mean to say LLVM got this wrong? As far as I know,
> > the GCC TBBA code is more correct than LLVMs. It gets
> > type-changing stores correct that LLVM does not implement.
>
> Oh sorry, yes that is an LLVM bug link. I mixed something up. I could have sworn
> there was a GCC bug, but I only found
> <https://gcc.gnu.org/bugzilla/show_bug.cgi?id=57359> which has been fixed.
> There was some problem with strong updates, i.e. the standard permits writes
> through a `float*` pointer to memory that aliases an `int*`. The C aliasing
> model only says it is UB to read data at the wrong type, but does not talk about
> writes changing the type of memory.
> Martin, maybe you remember better than me what that issue was / whether it is
> still a problem?
There are plenty of problems ;-) But GCC mostly gets the type-changing
stores correct as specified in the C standard. The bugs related to this
that I tracked got fixed. Clang still does not implement this as specified.
It implements the C++ model which does not require type-changing stores
to work (but I am not an expert on the C++ side). To be fair, there
was also incorrect guidance from WG14 at some point that added to the
confusion.
So I think for C one could use GCC with strict aliasing if one is careful
and observes the usual rules, but I would certainly recommend against
doing this for Clang.
What both compilers still get wrong are all the corner cases related to
provenance including the integer-pointer roundtrips.
The LLVM maintainer said they are going to fix the later soon, so
there is some hope on this side.
>
> > > > > So, the situation for Rust here is a lot better than it is in C. Unfortunately,
> > > > > running kernel code in Miri is not currently possible; figuring out how to
> > > > > improve that could be an interesting collaboration.
> > > >
> > > > I do not believe that you are correct when you write:
> > > >
> > > > "Unlike sanitizers, Miri can actually catch everything."
> > > >
> > > > Critically and very importantly, unless I am mistaken about MIRI, and
> > > > similar to sanitizers, MIRI only checks with runtime tests. That means
> > > > that MIRI will not catch any undefined behavior that a test does
> > > > not encounter. If a project's test coverage is poor, MIRI will not
> > > > check a lot of the code when run with those tests. Please do
> > > > correct me if I am mistaken about this. I am guessing that you
> > > > meant this as well, but I do not get the impression that it is
> > > > clear from your post.
> > >
> > > Okay, I may have misunderstood what you mean by "catch everything". All
> > > sanitizers miss some UB that actually occurs in the given execution. This is
> > > because they are inserted in the pipeline after a bunch of compiler-specific
> > > choices have already been made, potentially masking some UB. I'm not aware of a
> > > sanitizer for sequence point violations. I am not aware of a sanitizer for
> > > strict aliasing or restrict. I am not aware of a sanitizer that detects UB due
> > > to out-of-bounds pointer arithmetic (I am not talking about OOB accesses; just
> > > the arithmetic is already UB), or UB due to violations of "pointer lifetime end
> > > zapping", or UB due to comparing pointers derived from different allocations. Is
> > > there a sanitizer that correctly models what exactly happens when a struct with
> > > padding gets copied? The padding must be reset to be considered "uninitialized",
> > > even if the entire struct was zero-initialized before. Most compilers implement
> > > such a copy as memcpy; a sanitizer would then miss this UB.
> >
> > Note that reading padding bytes in C is not UB. Regarding
> > uninitialized variables, only automatic variables whose address
> > is not taken is UB in C. Although I suspect that compilers
> > have compliance isues here.
>
> Hm, now I am wondering how clang is compliant here. To my knowledge, padding is
> effectively reset to poison or undef on a copy (due to SROA), and clang marks
> most integer types as "noundef", thus making it UB to ever have undef/poison in
> such a value.
I haven't kept track with this, but I also do not believe that
Clang is conforming to the C standard, but again follows C++ rules
which has more UB. I am also not entirely sure GCC gets this
completely right though.
Martin
>
> Kind regards,
> Ralf
>
> >
> > But yes, it sanitizers are still rather poor.
>
>
>
> >
> > Martin
> >
> > >
> > > In contrast, Miri checks for all the UB that is used anywhere in the Rust
> > > compiler -- everything else would be a critical bug in either Miri or the compiler.
> > > But yes, it only does so on the code paths you are actually testing. And yes, it
> > > is very slow.
> > >
> > > Kind regards,
> > > Ralf
> > >
> >
>
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 17:59 ` Linus Torvalds
2025-02-26 19:01 ` Paul E. McKenney
2025-02-26 20:00 ` Martin Uecker
@ 2025-02-26 20:25 ` Kent Overstreet
2025-02-26 20:34 ` Andy Lutomirski
2025-02-26 22:45 ` David Laight
3 siblings, 1 reply; 194+ messages in thread
From: Kent Overstreet @ 2025-02-26 20:25 UTC (permalink / raw)
To: Linus Torvalds
Cc: Ralf Jung, Alice Ryhl, Ventura Jack, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
On Wed, Feb 26, 2025 at 09:59:41AM -0800, Linus Torvalds wrote:
> And just as an example: threading fundamentally introduces a notion of
> "aliasing" because different *threads* can access the same location
> concurrently. And that actually has real effects that a good language
> absolutely needs to deal with, even when there is absolutely *no*
> memory ordering or locking in the source code.
>
> For example, it means that you cannot ever widen stores unless you
> know that the data you are touching is thread-local. Because the bytes
> *next* to you may not be things that you control.
In Rust, W^X references mean you know that if you're writing to an
object you've got exclusive access - the exception being across an
UnsafeCell boundary, that's where you can't widen stores.
Which means all those old problems with bitfields go away, and the
compiler people finally know what they can safely do - and we have to
properly annotate access from multiple threads.
E.g. if you're doing a ringbuffer with head and tail pointers shared
between multiple threads, you no longer do that with bare integers, you
use atomics (even if you're not actually using any atomic operations on
them).
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 20:25 ` Kent Overstreet
@ 2025-02-26 20:34 ` Andy Lutomirski
0 siblings, 0 replies; 194+ messages in thread
From: Andy Lutomirski @ 2025-02-26 20:34 UTC (permalink / raw)
To: Kent Overstreet
Cc: Linus Torvalds, Ralf Jung, Alice Ryhl, Ventura Jack, Gary Guo,
airlied, boqun.feng, david.laight.linux, ej, gregkh, hch, hpa,
ksummit, linux-kernel, miguel.ojeda.sandonis, rust-for-linux
On Wed, Feb 26, 2025 at 12:27 PM Kent Overstreet
<kent.overstreet@linux.dev> wrote:
> E.g. if you're doing a ringbuffer with head and tail pointers shared
> between multiple threads, you no longer do that with bare integers, you
> use atomics (even if you're not actually using any atomic operations on
> them).
>
FWIW, as far as I'm concerned, this isn't Rust-specific at all. In my
(non-Linux-kernel) C++ code, if I type "int", I mean an int that
follows normal C++ rules and I promise that I won't introduce a data
race. (And yes, I dislike the normal C++ rules and the complete lack
of language-enforced safety here as much as the next person.) If I
actually mean "a location in memory that contains int and that I
intend to manage on my own", like what "volatile int" sort of used to
mean, I type "atomic<int>". And I like this a *lot* more than I ever
liked volatile. With volatile int, it's very very easy to forget that
using it as an rvalue is a read (to the extent this is true under
various compilers). With atomic<int>, the language forces [0] me to
type what I actually mean, and I type foo->load().
I consider this to be such an improvement that I actually went through
and converted a bunch of code that predated C++ atomics and used
volatile over to std::atomic. Good riddance.
(For code that doesn't want to modify the data structures in question,
C++ has atomic_ref, which I think would make for a nicer
READ_ONCE-like operation without the keyword volatile appearing
anywhere including the macro expansion.)
[0] Okay, C++ actually gets this wrong IMO, because atomic::operator
T() exists. But that doesn't mean I'm obligated to use it.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 20:00 ` Martin Uecker
@ 2025-02-26 21:14 ` Linus Torvalds
2025-02-26 21:21 ` Linus Torvalds
` (3 more replies)
2025-02-27 14:21 ` Ventura Jack
2025-02-28 8:08 ` Ralf Jung
2 siblings, 4 replies; 194+ messages in thread
From: Linus Torvalds @ 2025-02-26 21:14 UTC (permalink / raw)
To: Martin Uecker
Cc: Ralf Jung, Paul E. McKenney, Alice Ryhl, Ventura Jack,
Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 at 12:00, Martin Uecker <uecker@tugraz.at> wrote:
>
> The model is exactly the same as in C. One defines "observable
> behavior" (to use C terminology) and compiler can do whatever it
> wants as long as it preserves this.
The problem really is that memory accesses (outside of volatile, which
is defined to be a side effect) aren't actually defined to be
observable.
Yes, yes, the standard _allows_ that behavior, and even hass language
to that effect ("The keyword volatile would then be redundant"), but
nobody ever does that (and honestly, treating all memory accesses as
volatile would be insane).
> As Ralf said, the difference is that Rust makes it much harder to
> accidentally trigger UB.
Yes, but "accidental" is easy - unless the compiler warns about it.
That's why I basically asked for "either warn about UB, or define the
UB do the 'naive' thing".
So this is literally the problem I'm trying to bring up: "aliasing" is
defined to be UD _and_ the memory accesses are not defined to be
observable in themselves, so a C compiler can take those two things
and then say "you get random output".
THAT is what I am asking you to consider.
Pointing to the C standard doesn't help. The C standard GOT THIS WRONG.
And yes, part of getting it wrong is that the standard was written at
a time when threading wasn't a prime thing. So it was somewhat
reasonable to claim that memory accesses weren't "observable".
But dammit, doing things like "read the same variable twice even
though the programmer only read it once" *IS* observable! It's
observable as an actual security issue when it causes TOCTOU behavior
that was introduced into the program by the compiler.
So I claimed above that treating all memory accesses as volatile would
be insane. But I do claim that all memory accesses should be treated
as "USE the value of a read or write AT MOST as many times as the
source code said".
IOW, doing CSE on reads - and combining writes - when there aren't any
aliasing issues (or when there aren't any memory ordering issues)
should absolutely be considered ok.
And doing speculative reads - even if you then don't use the value -
is also entirely fine. You didn't introduce any observable behavior
difference (we'll agree to dismiss cache footprint issues).
But if the source code has sa single write, implementing it as two
writes (overwriting the first one) IS A BUG. It damn well is visible
behavior, and even the C standards committee has agreed on that
eventually.
Similarly, if the source code has a single read, the compiler had
better not turn that into two reads (because of some register pressure
issue). That would *ALSO* be a bug, because of the whole TOCTOU issue
(ie the source code may have had one single access, done sanity
testing on the value before using it, and if the compiler turned it
all into "read+sanity test" and "read+use", the compiler is
introducing behavioral differences).
That "single read done as multiple reads" is sadly still accepted by
the C standard, as far as I can tell. Because the standard still
considers it "unobservable" unless I've missed some update.
Please do better than that.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 21:14 ` Linus Torvalds
@ 2025-02-26 21:21 ` Linus Torvalds
2025-02-26 22:54 ` David Laight
2025-02-26 21:26 ` Steven Rostedt
` (2 subsequent siblings)
3 siblings, 1 reply; 194+ messages in thread
From: Linus Torvalds @ 2025-02-26 21:21 UTC (permalink / raw)
To: Martin Uecker
Cc: Ralf Jung, Paul E. McKenney, Alice Ryhl, Ventura Jack,
Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 at 13:14, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> That "single read done as multiple reads" is sadly still accepted by
> the C standard, as far as I can tell. Because the standard still
> considers it "unobservable" unless I've missed some update.
I want to clarify that I'm talking about perfectly normal and entirely
unannotated variable accesses.
Don't say "programmers should annotate their special accesses with
volatile if they want to avoid compiler-introduced TOCTOU issues".
Having humans have to work around failures in the language is not the way to go.
Particularly when there isn't even any advantage to it. I'm pretty
sure neither clang nor gcc actually rematerialize reads from memory,
but in the kernel we have *way* too many "READ_ONCE()" annotations
only because of various UBSAN-generated reports because our tooling
points the reads out as undefined if you don't do that.
In other words, we actively pessimize code generation *and* we spend
unnecessary human effort on working around an issue that comes purely
from a bad C standard, and tooling that worries about it.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 21:14 ` Linus Torvalds
2025-02-26 21:21 ` Linus Torvalds
@ 2025-02-26 21:26 ` Steven Rostedt
2025-02-26 21:37 ` Steven Rostedt
2025-02-26 21:42 ` Linus Torvalds
2025-02-26 22:27 ` Kent Overstreet
2025-02-27 4:18 ` Martin Uecker
3 siblings, 2 replies; 194+ messages in thread
From: Steven Rostedt @ 2025-02-26 21:26 UTC (permalink / raw)
To: Linus Torvalds
Cc: Martin Uecker, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 13:14:30 -0800
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> Similarly, if the source code has a single read, the compiler had
> better not turn that into two reads (because of some register pressure
> issue). That would *ALSO* be a bug, because of the whole TOCTOU issue
> (ie the source code may have had one single access, done sanity
> testing on the value before using it, and if the compiler turned it
> all into "read+sanity test" and "read+use", the compiler is
> introducing behavioral differences).
As a bystander here, I just want to ask, do you mean basically to treat all
reads as READ_ONCE() and all writes as WRITE_ONCE()?
-- Steve
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 21:26 ` Steven Rostedt
@ 2025-02-26 21:37 ` Steven Rostedt
2025-02-26 21:42 ` Linus Torvalds
1 sibling, 0 replies; 194+ messages in thread
From: Steven Rostedt @ 2025-02-26 21:37 UTC (permalink / raw)
To: Linus Torvalds
Cc: Martin Uecker, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 16:26:55 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:
> As a bystander here, I just want to ask, do you mean basically to treat all
> reads as READ_ONCE() and all writes as WRITE_ONCE()?
Never mind, your reply to yourself answers that.
-- Steve
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 16:50 ` Ventura Jack
@ 2025-02-26 21:39 ` Ralf Jung
2025-02-27 15:11 ` Ventura Jack
0 siblings, 1 reply; 194+ messages in thread
From: Ralf Jung @ 2025-02-26 21:39 UTC (permalink / raw)
To: Ventura Jack
Cc: Alice Ryhl, Linus Torvalds, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
Hi,
>> Yes. There's various projects, from bounded model checkers (Kani) that can
>> "only" statically guarantee "all executions that run loops at most N times are
>> fine" to full-fledged static verification tools (Gillian-Rust, VeriFast, Verus,
>> Prusti, RefinedRust -- just to mention the ones that support unsafe code). None
>> of the latter tools is production-ready yet, and some will always stay research
>> prototypes, but there's a lot of work going on, and having a precise model of
>> the entire Abstract Machine that is blessed by the compiler devs (i.e., Miri) is
>> a key part for this to work. It'll be even better when this Abstract Machine
>> exists not just implicitly in Miri but explicitly in a Rust Specification, and
>> is subject to stability guarantees -- and we'll get there, but it'll take some
>> more time. :)
>>
>> Kind regards,
>> Ralf
>>
>
> Thank you for the answer. Almost all of those projects look active,
> though Prusti's GitHub repository has not had commit activity for many
> months. Do you know if any of the projects are using stacked borrows
> or tree borrows yet? Gillian-Rust does not seem to use stacked borrows
> or tree borrows. Verus mentions stacked borrows in "related work" in
> one paper.
VeriFast people are working on Tree Borrows integration, and Gillian-Rust people
also have some plans if I remember correctly. For the rest, I am not aware of
plans, but that doesn't mean there aren't any. :)
> On the other hand, RefinedRust reuses code from Miri.
No, it does not use code from Miri, it is based on RustBelt -- my PhD thesis
where I formalized a (rather abstract) version of the borrow checker in Coq/Rocq
(i.e., in a tool for machine-checked proofs) and manually proved some pieces of
small but tricky unsafe code to be sound.
> It does sound exciting. It reminds me in some ways of Scala. Though
> also like advanced research where some practical goals for the
> language (Rust) have not yet been reached.
Yeah it's all very much work-in-progress research largely driven by small
academic groups, and at some point industry collaboration will become crucial to
actually turn these into usable products, but there's at least a lot of exciting
starting points. :)
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 21:26 ` Steven Rostedt
2025-02-26 21:37 ` Steven Rostedt
@ 2025-02-26 21:42 ` Linus Torvalds
2025-02-26 21:56 ` Steven Rostedt
1 sibling, 1 reply; 194+ messages in thread
From: Linus Torvalds @ 2025-02-26 21:42 UTC (permalink / raw)
To: Steven Rostedt
Cc: Martin Uecker, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 at 13:26, Steven Rostedt <rostedt@goodmis.org> wrote:
>
> As a bystander here, I just want to ask, do you mean basically to treat all
> reads as READ_ONCE() and all writes as WRITE_ONCE()?
Absolutely not.
I thought I made that clear:
"IOW, doing CSE on reads - and combining writes - when there aren't any
aliasing issues (or when there aren't any memory ordering issues)
should absolutely be considered ok.
And doing speculative reads - even if you then don't use the value -
is also entirely fine. You didn't introduce any observable behavior
difference (we'll agree to dismiss cache footprint issues)"
all of those basic optimizations would be wrong for 'volatile'.
You can't speculatively read a volatile, you can't combine two (or
more - often *many* more) reads, and you can't combine writes.
Doing basic CSE is a core compiler optimization, and I'm not at all
saying that shouldn't be done.
But re-materialization of memory accesses is wrong. Turning one load
into two loads is not an optimization, it's the opposite - and it is
also semantically visible.
And I'm saying that we in the kernel have then been forced to use
READ_ONCE() and WRITE_ONCE() unnecessarily, because people worry about
compilers doing these invalid optimizations, because the standard
allows that crap.
I'm hoping Rust can get this right.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 21:42 ` Linus Torvalds
@ 2025-02-26 21:56 ` Steven Rostedt
2025-02-26 22:13 ` Steven Rostedt
0 siblings, 1 reply; 194+ messages in thread
From: Steven Rostedt @ 2025-02-26 21:56 UTC (permalink / raw)
To: Linus Torvalds
Cc: Martin Uecker, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 13:42:29 -0800
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Wed, 26 Feb 2025 at 13:26, Steven Rostedt <rostedt@goodmis.org> wrote:
> >
> > As a bystander here, I just want to ask, do you mean basically to treat all
> > reads as READ_ONCE() and all writes as WRITE_ONCE()?
>
> Absolutely not.
>
> I thought I made that clear:
Sorry, I didn't make myself clear. I shouldn't have said "all reads". What
I meant was the the "initial read".
Basically:
r = READ_ONCE(*p);
and use what 'r' is from then on.
Where the compiler reads the source once and works with what it got.
To keep it from changing:
r = *p;
if (r > 1000)
goto out;
x = r;
to:
if (*p > 1000)
goto out;
x = *p;
-- Steve
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 17:47 ` Steven Rostedt
@ 2025-02-26 22:07 ` Josh Poimboeuf
2025-03-02 12:19 ` David Laight
1 sibling, 0 replies; 194+ messages in thread
From: Josh Poimboeuf @ 2025-02-26 22:07 UTC (permalink / raw)
To: Steven Rostedt
Cc: Kent Overstreet, James Bottomley, Greg KH, Miguel Ojeda,
Ventura Jack, H. Peter Anvin, Alice Ryhl, Linus Torvalds,
Gary Guo, airlied, boqun.feng, david.laight.linux, hch, ksummit,
linux-kernel, rust-for-linux, Ralf Jung, Peter Zijlstra
On Wed, Feb 26, 2025 at 12:47:33PM -0500, Steven Rostedt wrote:
> On Wed, 26 Feb 2025 12:41:30 -0500
> Kent Overstreet <kent.overstreet@linux.dev> wrote:
>
> > It's been awhile since I've looked at one, I've been just automatically
> > switching back to frame pointers for awhile, but - I never saw
> > inaccurate backtraces, just failure to generate a backtrace - if memory
> > serves.
>
> OK, maybe if the bug was bad enough, it couldn't get access to the ORC
> tables for some reason.
ORC has been rock solid for many years, even for oopses. Even if it
were to fail during an oops for some highly unlikely reason, it falls
back to the "guess" unwind which shows all the kernel text addresses on
the stack.
The only known thing that will break ORC is if objtool warnings are
ignored. (BTW those will soon be upgraded to build errors by default)
ORC also gives nice clean stack traces through interrupts and
exceptions. Frame pointers *try* to do that, but for async code flows
that's very much a best effort type thing.
So on x86-64, frame pointers are very much deprecated. In fact we've
talked about removing the FP unwinder as there's no reason to use it
anymore. Objtool is always enabled by default anyway.
--
Josh
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 21:56 ` Steven Rostedt
@ 2025-02-26 22:13 ` Steven Rostedt
2025-02-26 22:22 ` Linus Torvalds
0 siblings, 1 reply; 194+ messages in thread
From: Steven Rostedt @ 2025-02-26 22:13 UTC (permalink / raw)
To: Linus Torvalds
Cc: Martin Uecker, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 16:56:19 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:
> r = *p;
> if (r > 1000)
> goto out;
> x = r;
>
> to:
>
> if (*p > 1000)
> goto out;
> x = *p;
And you could replace *p with any variable that is visible outside the
function. As that's where I have to remember to use READ_ONCE() all the
time. When I need to access a variable that may change, but the old value
may still be fine to use as long as it is consistent.
I take this is what you meant by following what the code does.
r = global;
if (r > 1000)
goto out;
x = r;
Is the code saying to read "global" once. But today the compiler may not do
that and we have to use READ_ONCE() to prevent it.
But if I used:
if (global > 1000)
goto out;
x = global;
Then the code itself is saying it is fine to re-read global or not, and the
compiler is fine with converting that to:
r = global;
if (r > 1000)
goto out;
x = r;
I guess this is where you say "volatile" is too strong, as this isn't an
issue and is an optimization the compiler can do. Where as the former
(reading global twice) is a bug because the code did not explicitly state
to do that.
-- Steve
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 22:13 ` Steven Rostedt
@ 2025-02-26 22:22 ` Linus Torvalds
2025-02-26 22:35 ` Steven Rostedt
0 siblings, 1 reply; 194+ messages in thread
From: Linus Torvalds @ 2025-02-26 22:22 UTC (permalink / raw)
To: Steven Rostedt
Cc: Martin Uecker, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 at 14:12, Steven Rostedt <rostedt@goodmis.org> wrote:
>
> I take this is what you meant by following what the code does.
>
> r = global;
> if (r > 1000)
> goto out;
> x = r;
>
> Is the code saying to read "global" once. But today the compiler may not do
> that and we have to use READ_ONCE() to prevent it.
Exactly.
And as mentioned, as far as I actually know, neither clang nor gcc
will actually screw it up.
But the C standard *allows* the compiler to basically turn the above into:
> But if I used:
>
> if (global > 1000)
> goto out;
> x = global;
which can have the TUCTOU issue because 'global' is read twice.
> I guess this is where you say "volatile" is too strong, as this isn't an
> issue and is an optimization the compiler can do.
Yes. 'volatile' is horrendous. It was designed for MMIO, not for
memory, and it shows.
Now, in the kernel we obviously use volatile for MMIO too, and in the
context of that (ie 'readl()' and 'writel()') it's doing pretty much
exactly what it should do.
But in the kernel, when we use 'READ_ONCE()', we basically almost
always actually mean "READ_AT_MOST_ONCE()". It's not that we
necessarily need *exactly* once, but we require that we get one single
stable value).
(And same for WRITE_ONCE()).
We also have worried about access tearing issues, so
READ_ONCE/WRITE_ONCE also check that it's an atomic type etc, so it's
not *purely* about the "no rematerialization" kinds of issues. Again,
those aren't actually necessarily things compilers get wrong, but they
are things that the standard is silent on.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 21:14 ` Linus Torvalds
2025-02-26 21:21 ` Linus Torvalds
2025-02-26 21:26 ` Steven Rostedt
@ 2025-02-26 22:27 ` Kent Overstreet
2025-02-26 23:16 ` Linus Torvalds
2025-02-27 4:18 ` Martin Uecker
3 siblings, 1 reply; 194+ messages in thread
From: Kent Overstreet @ 2025-02-26 22:27 UTC (permalink / raw)
To: Linus Torvalds
Cc: Martin Uecker, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Gary Guo, airlied, boqun.feng, david.laight.linux,
ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, Feb 26, 2025 at 01:14:30PM -0800, Linus Torvalds wrote:
> But dammit, doing things like "read the same variable twice even
> though the programmer only read it once" *IS* observable! It's
> observable as an actual security issue when it causes TOCTOU behavior
> that was introduced into the program by the compiler.
This is another one that's entirely eliminated due to W^X references.
IOW: if you're writing code where rematerializing reads is even a
_concern_ in Rust, then you had to drop to unsafe {} to do it - and your
code is broken, and yes it will have UB.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 18:09 ` Ventura Jack
@ 2025-02-26 22:28 ` Ralf Jung
2025-02-26 23:08 ` David Laight
2025-02-27 17:33 ` Ventura Jack
0 siblings, 2 replies; 194+ messages in thread
From: Ralf Jung @ 2025-02-26 22:28 UTC (permalink / raw)
To: Ventura Jack
Cc: Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
Hi all,
On 26.02.25 19:09, Ventura Jack wrote:
> On Wed, Feb 26, 2025 at 9:32 AM Ralf Jung <post@ralfj.de> wrote:
>>
>> Hi VJ,
>>
>>>>
>>>>> - Rust has not defined its aliasing model.
>>>>
>>>> Correct. But then, neither has C. The C aliasing rules are described in English
>>>> prose that is prone to ambiguities and misintepretation. The strict aliasing
>>>> analysis implemented in GCC is not compatible with how most people read the
>>>> standard (https://bugs.llvm.org/show_bug.cgi?id=21725). There is no tool to
>>>> check whether code follows the C aliasing rules, and due to the aforementioned
>>>> ambiguities it would be hard to write such a tool and be sure it interprets the
>>>> standard the same way compilers do.
>>>>
>>>> For Rust, we at least have two candidate models that are defined in full
>>>> mathematical rigor, and a tool that is widely used in the community, ensuring
>>>> the models match realistic use of Rust.
>>>
>>> But it is much more significant for Rust than for C, at least in
>>> regards to C's "restrict", since "restrict" is rarely used in C, while
>>> aliasing optimizations are pervasive in Rust. For C's "strict aliasing",
>>> I think you have a good point, but "strict aliasing" is still easier to
>>> reason about in my opinion than C's "restrict". Especially if you
>>> never have any type casts of any kind nor union type punning.
>>
>> Is it easier to reason about? At least GCC got it wrong, making no-aliasing
>> assumptions that are not justified by most people's interpretation of the model:
>> https://bugs.llvm.org/show_bug.cgi?id=21725
>> (But yes that does involve unions.)
>
> For that specific bug issue, there is a GitHub issue for it.
>
> https://github.com/llvm/llvm-project/issues/22099
Yeah sorry this was an LLVM issue, not a GCC issue. I mixed things up.
> And the original test case appears to have been a compiler bug
> and have been fixed, at least when I run on Godbolt against
> a recent version of Clang. Another comment says.
>
> "The original testcase seems to be fixed now but replacing
> the union by allocated memory makes the problem come back."
>
> And the new test case the user mentions involves a void pointer.
>
> I wonder if they could close the issue and open a new issue
> in its stead that only contains the currently relevant compiler
> bugs if there are any. And have this new issue refer to the old
> issue. They brought the old issue over from the old bug tracker.
> But I do not have a good handle on that issue.
>
> Unions in C, C++ and Rust (not Rust "enum"/tagged union) are
> generally sharp. In Rust, it requires unsafe Rust to read from
> a union.
Definitely sharp. At least in Rust we have a very clear specification though,
since we do allow arbitrary type punning -- you "just" reinterpret whatever
bytes are stored in the union, at whatever type you are reading things. There is
also no "active variant" or anything like that, you can use any variant at any
time, as long as the bytes are "valid" for the variant you are using. (So for
instance if you are trying to read a value 0x03 at type `bool`, that is UB.)
I think this means we have strictly less UB here than C or C++, removing as many
of the sharp edges as we can without impacting the rest of the language.
>> In contrast, Miri checks for all the UB that is used anywhere in the Rust
>> compiler -- everything else would be a critical bug in either Miri or the compiler.
>> But yes, it only does so on the code paths you are actually testing. And yes, it
>> is very slow.
>
> I may have been ambiguous again, or unclear or misleading,
> I need to work on that.
>
> The description you have here indicates that Miri is in many ways
> significantly better than sanitizers in general.
>
> I think it is more accurate of me to say that Miri in some aspects
> shares some of the advantages and disadvantages of sanitizers,
> and in other aspects is much better than sanitizers.
I can agree with that. :)
> Is Miri the only one of its kind in the programming world?
> There are not many system languages in mass use, and
> those are the languages that first and foremost deal
> with undefined behavior. That would make Miri extra impressive.
I am not aware of a comparable tool that would be in wide-spread use, or that is
carefully aligned with the semantics of an actual compiler.
For C, there is Cerberus (https://www.cl.cam.ac.uk/~pes20/cerberus/) as an
executable version of the C specification, but it can only run tiny examples.
The verified CompCert compiler comes with a semantics one could interpret, but
that only checks code for compatibility with CompCert C, which has a lot less
(and a bit more) UB than real C.
There are also two efforts that turned into commercial tools that I have not
tried, and for which there is hardly any documentation of how they interpret the
C standard so it's not clear what a green light from them means when compiling
with gcc or clang. I also don't know how much real-world code they can actually run.
- TrustInSoft/tis-interpreter, mostly gone from the web but still available in
the wayback machine
(https://web.archive.org/web/20200804061411/https://github.com/TrustInSoft/tis-interpreter/);
I assume this got integrated into their "TrustInSoft Analyzer" product.
- kcc, a K-framework based formalization of C that is executable. The public
repo is dead (https://github.com/kframework/c-semantics) and when I tried to
build their tool that didn't work. The people behind this have a company that
offers "RV-Match" as a commercial product claiming to find bugs in C based on "a
complete formal ISO C11 semantics" so I guess that is where their efforts go now.
For C++ and Zig, I am not aware of anything comparable.
Part of the problem is that in C, 2 people will have 3 ideas for what the
standard means. Compiler writers and programmers regularly have wildly
conflicting ideas of what is and is not allowed. There are many different places
in the standard that have to be scanned to answer "is this well-defined" even
for very simple programs. (https://godbolt.org/z/rjaWc6EzG is one of my favorite
examples.) A tool can check a single well-defined semantics, but who gets to
decide what exactly those semantics are?
Formalizing the C standard requires extensive interpretation, so I am skeptical
of everyone who claims that they "formalized the C standard" and built a tool on
that without extensive evaluation of how their formalization compares to what
compilers do and what programmers rely on. The Cerberus people have done that
evaluation (see e.g. https://dl.acm.org/doi/10.1145/2980983.2908081), but none
of the other efforts have (to my knowledge). Ideally such a formalization effort
would be done in close collaboration with compiler authors and the committee so
that the ambiguities in the standard can be resolved and the formalization
becomes the one canonical interpretation. The Cerberus people are the ones that
pushed the C provenance formalization through, so they made great progress here.
However, many issues remain, some quite long-standing (e.g.
https://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_260.htm and
https://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_451.htm, which in my eyes
never got properly resolved by clarifying the standard). Martin and a few others
are slowly pushing things in the right direction, but it takes a long time.
Rust, by having a single project in charge of the one canonical implementation
and the specification, and having an open process that is well-suited for
incorporating user concerns, can move a lot quicker here. C has a huge
head-start, Rust has nothing like the C standard, but we are catching up -- and
our goal is more ambitious than that; we are doing our best to learn from C and
C++ and concluded that that style of specification is too prone to ambiguity, so
we are trying to achieve a formally precise unambiguous specification. Wasm
shows that this can be done, at industry scale, albeit for a small language --
time we do it for a large one. :)
So, yes I think Miri is fairly unique. But please let me know if I missed something!
(As an aside, the above hopefully also explains why some people in Rust are
concerned about alternative implementations. We do *not* want the current
de-factor behavior to ossify and become the specification. We do *not* want the
specification to just be a description of what the existing implementations at
the time happen to do, and declare all behavior differences to be UB or
unspecified or so just because no implementation is willing to adjust their
behavior to match the rest. We want the specification to be prescriptive, not
descriptive, and we will adjust the implementation as we see fit to achieve that
-- within the scope of Rust's stability guarantees. That's also why we are so
cagey about spelling out the aliasing rules until we are sure we have a good
enough model.)
> There are some issues in Rust that I am curious as to
> your views on. rustc or the Rust language has some type
> system holes, which still causes problems for rustc and
> their developers.
>
> https://github.com/lcnr/solver-woes/issues/1
> https://github.com/rust-lang/rust/issues/75992
>
> Those kinds of issues seem difficult to solve.
>
> In your opinion, is it accurate to say that the Rust language
> developers are working on a new type system for
> Rust-the-language and a new solver for rustc, and that
> they are trying to make the new type system and new solver
> as backwards compatible as possible?
It's not really a new type system. It's a new implementation for the same type
system. But yes there is work on a new "solver" (that I am not involved in) that
should finally fix some of the long-standing type system bugs. Specifically,
this is a "trait solver", i.e. it is the component responsible for dealing with
trait constraints. Due to some unfortunate corner-case behaviors of the old,
organically grown solver, it's very hard to do this in a backwards-compatible
way, but we have infrastructure for extensive ecosystem-wide testing to judge
the consequences of any given potential breaking change and ensure that almost
all existing code keeps working. In fact, Rust 1.84 already started using the
new solver for some things
(https://blog.rust-lang.org/2025/01/09/Rust-1.84.0.html) -- did you notice?
Hopefully not. :)
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 22:22 ` Linus Torvalds
@ 2025-02-26 22:35 ` Steven Rostedt
2025-02-26 23:18 ` Linus Torvalds
2025-02-27 20:47 ` David Laight
0 siblings, 2 replies; 194+ messages in thread
From: Steven Rostedt @ 2025-02-26 22:35 UTC (permalink / raw)
To: Linus Torvalds
Cc: Martin Uecker, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 14:22:26 -0800
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> > But if I used:
> >
> > if (global > 1000)
> > goto out;
> > x = global;
>
> which can have the TUCTOU issue because 'global' is read twice.
Correct, but if the variable had some other protection, like a lock held
when this function was called, it is fine to do and the compiler may
optimize it or not and still have the same result.
I guess you can sum this up to:
The compiler should never assume it's safe to read a global more than the
code specifies, but if the code reads a global more than once, it's fine
to cache the multiple reads.
Same for writes, but I find WRITE_ONCE() used less often than READ_ONCE().
And when I do use it, it is more to prevent write tearing as you mentioned.
-- Steve
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 17:59 ` Linus Torvalds
` (2 preceding siblings ...)
2025-02-26 20:25 ` Kent Overstreet
@ 2025-02-26 22:45 ` David Laight
3 siblings, 0 replies; 194+ messages in thread
From: David Laight @ 2025-02-26 22:45 UTC (permalink / raw)
To: Linus Torvalds
Cc: Ralf Jung, Alice Ryhl, Ventura Jack, Kent Overstreet, Gary Guo,
airlied, boqun.feng, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 09:59:41 -0800
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Wed, 26 Feb 2025 at 05:54, Ralf Jung <post@ralfj.de> wrote:
> >
> > The only approach we know that we can actually
> > pull through systematically (in the sense of "at least in principle, we can
> > formally prove this correct") is to define the "visible behavior" of the source
> > program, the "visible behavior" of the generated assembly, and promise that they
> > are the same.
>
> That's literally what I ask for with that "naive" code generation, you
> just stated it much better.
>
> I think some of the C standards problems came from the fact that at
> some point the standards people decided that the only way to specify
> the language was from a high-level language _syntax_ standpoint.
>
> Which is odd, because a lot of the original C semantics came from
> basically a "this is how the result works". It's where a lot of the
> historical C architecture-defined (and undefined) details come from:
> things like how integer division rounding happens, how shifts bigger
> than the word size are undefined, etc.
I'm pretty sure some things were 'undefined' to allow more unusual
cpu to be conformant.
So ones with saturating integer arithmetic, no arithmetic right shift,
only word addressing (etc) could still claim to be C.
There is also the NULL pointer not being the 'all zeros' pattern.
I don't think any C compiler has ever done that, but clang has started
complaining that maths with NULL is undefined because that is allowed.
Is it going to complain about memset() of structures containing pointers?
The other problem is that it says 'Undefined Behaviour' not 'undefined
result' or 'may trap'. UB includes 'erasing all the data on your disk'.
David
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 21:21 ` Linus Torvalds
@ 2025-02-26 22:54 ` David Laight
2025-02-27 0:35 ` Paul E. McKenney
0 siblings, 1 reply; 194+ messages in thread
From: David Laight @ 2025-02-26 22:54 UTC (permalink / raw)
To: Linus Torvalds
Cc: Martin Uecker, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng, ej,
gregkh, hch, hpa, ksummit, linux-kernel, miguel.ojeda.sandonis,
rust-for-linux
On Wed, 26 Feb 2025 13:21:41 -0800
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Wed, 26 Feb 2025 at 13:14, Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
> >
> > That "single read done as multiple reads" is sadly still accepted by
> > the C standard, as far as I can tell. Because the standard still
> > considers it "unobservable" unless I've missed some update.
>
> I want to clarify that I'm talking about perfectly normal and entirely
> unannotated variable accesses.
>
> Don't say "programmers should annotate their special accesses with
> volatile if they want to avoid compiler-introduced TOCTOU issues".
>
> Having humans have to work around failures in the language is not the way to go.
>
> Particularly when there isn't even any advantage to it. I'm pretty
> sure neither clang nor gcc actually rematerialize reads from memory,
I thought some of the very early READ_ONCE() were added because there
was an actual problem with the generated code.
But it has got entirely silly.
In many cases gcc will generate an extra register-register transfer
for a volatile read - I've seen it do a byte read, register move and
then and with 0xff.
I think adding a separate memory barrier would stop the read being
rematerialized - but you also need to stop it doing (for example)
two byte accesses for a 16bit variable - arm32 has a limited offset
for 16bit memory accesses, so the compiler might be tempted to do
two byte writes.
David
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 22:28 ` Ralf Jung
@ 2025-02-26 23:08 ` David Laight
2025-02-27 13:55 ` Ralf Jung
2025-02-27 17:33 ` Ventura Jack
1 sibling, 1 reply; 194+ messages in thread
From: David Laight @ 2025-02-26 23:08 UTC (permalink / raw)
To: Ralf Jung
Cc: Ventura Jack, Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds,
airlied, boqun.feng, ej, gregkh, hch, hpa, ksummit, linux-kernel,
rust-for-linux
On Wed, 26 Feb 2025 23:28:20 +0100
Ralf Jung <post@ralfj.de> wrote:
...
> > Unions in C, C++ and Rust (not Rust "enum"/tagged union) are
> > generally sharp. In Rust, it requires unsafe Rust to read from
> > a union.
>
> Definitely sharp. At least in Rust we have a very clear specification though,
> since we do allow arbitrary type punning -- you "just" reinterpret whatever
> bytes are stored in the union, at whatever type you are reading things. There is
> also no "active variant" or anything like that, you can use any variant at any
> time, as long as the bytes are "valid" for the variant you are using. (So for
> instance if you are trying to read a value 0x03 at type `bool`, that is UB.)
That is actually a big f***ing problem.
The language has to define the exact behaviour when 'bool' doesn't contain
0 or 1.
Much the same as the function call interface defines whether it is the caller
or called code is responsible for masking the high bits of a register that
contains a 'char' type.
Now the answer could be that 'and' is (or may be) a bit-wise operation.
But that isn't UB, just an undefined/unexpected result.
I've actually no idea if/when current gcc 'sanitises' bool values.
A very old version used to generate really crap code (and I mean REALLY)
because it repeatedly sanitised the values.
But IMHO bool just shouldn't exist, it isn't a hardware type and is actually
expensive to get right.
If you use 'int' with zero meaning false there is pretty much no ambiguity.
David
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 22:27 ` Kent Overstreet
@ 2025-02-26 23:16 ` Linus Torvalds
2025-02-27 0:17 ` Kent Overstreet
` (3 more replies)
0 siblings, 4 replies; 194+ messages in thread
From: Linus Torvalds @ 2025-02-26 23:16 UTC (permalink / raw)
To: Kent Overstreet
Cc: Martin Uecker, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Gary Guo, airlied, boqun.feng, david.laight.linux,
ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 at 14:27, Kent Overstreet <kent.overstreet@linux.dev> wrote:
>
> This is another one that's entirely eliminated due to W^X references.
Are you saying rust cannot have global flags?
That seems unlikely. And broken if so.
> IOW: if you're writing code where rematerializing reads is even a
> _concern_ in Rust, then you had to drop to unsafe {} to do it - and your
> code is broken, and yes it will have UB.
If you need to drop to unsafe mode just to read a global flag that may
be set concurrently, you're doing something wrong as a language
designer.
And if your language then rematerializes reads, the language is shit.
Really.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 22:35 ` Steven Rostedt
@ 2025-02-26 23:18 ` Linus Torvalds
2025-02-26 23:28 ` Steven Rostedt
2025-02-27 20:47 ` David Laight
1 sibling, 1 reply; 194+ messages in thread
From: Linus Torvalds @ 2025-02-26 23:18 UTC (permalink / raw)
To: Steven Rostedt
Cc: Martin Uecker, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 at 14:34, Steven Rostedt <rostedt@goodmis.org> wrote:
>
> Correct, but if the variable had some other protection, like a lock held
> when this function was called, it is fine to do and the compiler may
> optimize it or not and still have the same result.
Sure.
But locking isn't always there. And shouldn't always be there. Lots of
lockless algorithms exist, and some of them are very simple indeed ("I
set a flag, you read a flag, you get one or the other value")
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 23:18 ` Linus Torvalds
@ 2025-02-26 23:28 ` Steven Rostedt
2025-02-27 0:04 ` Linus Torvalds
0 siblings, 1 reply; 194+ messages in thread
From: Steven Rostedt @ 2025-02-26 23:28 UTC (permalink / raw)
To: Linus Torvalds
Cc: Martin Uecker, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 15:18:48 -0800
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Wed, 26 Feb 2025 at 14:34, Steven Rostedt <rostedt@goodmis.org> wrote:
> >
> > Correct, but if the variable had some other protection, like a lock held
> > when this function was called, it is fine to do and the compiler may
> > optimize it or not and still have the same result.
>
> Sure.
>
> But locking isn't always there. And shouldn't always be there. Lots of
> lockless algorithms exist, and some of them are very simple indeed ("I
> set a flag, you read a flag, you get one or the other value")
Yes, for the case of:
r = READ_ONCE(global);
if (r > 1000)
goto out;
x = r;
As I've done that in my code without locks, as I just need a consistent
value not necessarily the "current" value.
I was talking for the case the code has (not the compiler creating):
if (global > 1000)
goto out;
x = global;
Because without a lock or some other protection, that's likely a bug.
My point is that the compiler is free to turn that into:
r = READ_ONCE(global);
if (r > 1000)
goto out;
x = r;
and not change the expected result.
-- Steve
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 23:28 ` Steven Rostedt
@ 2025-02-27 0:04 ` Linus Torvalds
0 siblings, 0 replies; 194+ messages in thread
From: Linus Torvalds @ 2025-02-27 0:04 UTC (permalink / raw)
To: Steven Rostedt
Cc: Martin Uecker, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 at 15:27, Steven Rostedt <rostedt@goodmis.org> wrote:
>
> My point is that the compiler is free to turn that into:
>
> r = READ_ONCE(global);
> if (r > 1000)
> goto out;
> x = r;
>
> and not change the expected result.
Yes.
It is safe to *combine* reads - it's what the CPU will effectively do
anyway (modulo MMIO, which as mentioned is why volatile is so special
and so different).
It's just not safe to split them.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 23:16 ` Linus Torvalds
@ 2025-02-27 0:17 ` Kent Overstreet
2025-02-27 0:26 ` comex
` (2 subsequent siblings)
3 siblings, 0 replies; 194+ messages in thread
From: Kent Overstreet @ 2025-02-27 0:17 UTC (permalink / raw)
To: Linus Torvalds
Cc: Martin Uecker, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Gary Guo, airlied, boqun.feng, david.laight.linux,
ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, Feb 26, 2025 at 03:16:54PM -0800, Linus Torvalds wrote:
> On Wed, 26 Feb 2025 at 14:27, Kent Overstreet <kent.overstreet@linux.dev> wrote:
> >
> > This is another one that's entirely eliminated due to W^X references.
>
> Are you saying rust cannot have global flags?
>
> That seems unlikely. And broken if so.
No, certainly not - but you _do_ have to denote the access rules, and
because of that they'll also need accessor functions.
e.g. in bcachefs, I've got a 'filesystem options' object. It's read
unsynchronized all over the place, and I don't care because the various
options don't have interdependencies - I don't care about ordering - and
they're all naturally aligned integers.
If/when that gets converted to Rust, it won't be a bare object anymore,
it'll be something that requires a .get() - and it has to be, because
this is something with interior mutability.
I couldn't tell you yet what container object we'd use for telling the
compiler "yes this is just bare unstructured integers, just wrap it for
me (and probably assert that we're not using anything to store more
complicated)" - but I can say that it'll be something with a getter that
uses UnsafeCell underneath.
I'd also have to dig around in the nomicon to say whether the compiler
barriers come from the UnsafeCell directly or whether it's the wrapper
object that does the unsafe {} bits that specifies them - or perhaps
someone in the thread will say, but somewhere underneath the getter will
be the compiler barrier you want.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 23:16 ` Linus Torvalds
2025-02-27 0:17 ` Kent Overstreet
@ 2025-02-27 0:26 ` comex
2025-02-27 18:33 ` Ralf Jung
2025-03-06 19:16 ` Ventura Jack
3 siblings, 0 replies; 194+ messages in thread
From: comex @ 2025-02-27 0:26 UTC (permalink / raw)
To: Linus Torvalds
Cc: Kent Overstreet, Martin Uecker, Ralf Jung, Paul E. McKenney,
Alice Ryhl, Ventura Jack, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
> On Feb 26, 2025, at 3:16 PM, Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> On Wed, 26 Feb 2025 at 14:27, Kent Overstreet <kent.overstreet@linux.dev> wrote:
>>
>> This is another one that's entirely eliminated due to W^X references.
>
> Are you saying rust cannot have global flags?
Believe it or not, no, it cannot.
All global variables must be either immutable, atomic, or protected with some sort of lock.
You can bypass this with unsafe code (UnsafeCell), but then you need to ensure no concurrent mutations for yourself, or else you get UB.
For a simple flag, you would probably use an atomic type with relaxed loads/stores. So you get the same load/store instructions as non-atomic accesses, but zero optimizations. And uglier syntax.
Personally I wish Rust had a weaker atomic ordering that did allow some optimizations, along with more syntax sugar for atomics. But in practice it’s really not a big deal, since use of mutable globals is discouraged in the first place.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 22:54 ` David Laight
@ 2025-02-27 0:35 ` Paul E. McKenney
0 siblings, 0 replies; 194+ messages in thread
From: Paul E. McKenney @ 2025-02-27 0:35 UTC (permalink / raw)
To: David Laight
Cc: Linus Torvalds, Martin Uecker, Ralf Jung, Alice Ryhl,
Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng, ej,
gregkh, hch, hpa, ksummit, linux-kernel, miguel.ojeda.sandonis,
rust-for-linux
On Wed, Feb 26, 2025 at 10:54:12PM +0000, David Laight wrote:
> On Wed, 26 Feb 2025 13:21:41 -0800
> Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> > On Wed, 26 Feb 2025 at 13:14, Linus Torvalds
> > <torvalds@linux-foundation.org> wrote:
> > >
> > > That "single read done as multiple reads" is sadly still accepted by
> > > the C standard, as far as I can tell. Because the standard still
> > > considers it "unobservable" unless I've missed some update.
> >
> > I want to clarify that I'm talking about perfectly normal and entirely
> > unannotated variable accesses.
> >
> > Don't say "programmers should annotate their special accesses with
> > volatile if they want to avoid compiler-introduced TOCTOU issues".
> >
> > Having humans have to work around failures in the language is not the way to go.
> >
> > Particularly when there isn't even any advantage to it. I'm pretty
> > sure neither clang nor gcc actually rematerialize reads from memory,
>
> I thought some of the very early READ_ONCE() were added because there
> was an actual problem with the generated code.
> But it has got entirely silly.
> In many cases gcc will generate an extra register-register transfer
> for a volatile read - I've seen it do a byte read, register move and
> then and with 0xff.
> I think adding a separate memory barrier would stop the read being
> rematerialized - but you also need to stop it doing (for example)
> two byte accesses for a 16bit variable - arm32 has a limited offset
> for 16bit memory accesses, so the compiler might be tempted to do
> two byte writes.
Perhaps some day GCC __atomic_load_n(__ATOMIC_RELAXED) will do what we
want for READ_ONCE(). Not holding my breath, though. ;-)
Thanx, Paul
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 21:14 ` Linus Torvalds
` (2 preceding siblings ...)
2025-02-26 22:27 ` Kent Overstreet
@ 2025-02-27 4:18 ` Martin Uecker
2025-02-27 5:52 ` Linus Torvalds
3 siblings, 1 reply; 194+ messages in thread
From: Martin Uecker @ 2025-02-27 4:18 UTC (permalink / raw)
To: Linus Torvalds
Cc: Ralf Jung, Paul E. McKenney, Alice Ryhl, Ventura Jack,
Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
Am Mittwoch, dem 26.02.2025 um 13:14 -0800 schrieb Linus Torvalds:
> On Wed, 26 Feb 2025 at 12:00, Martin Uecker <uecker@tugraz.at> wrote:
[...]
>
> That "single read done as multiple reads" is sadly still accepted by
> the C standard, as far as I can tell. Because the standard still
> considers it "unobservable" unless I've missed some update.
>
> Please do better than that.
This is not really related to "observable" but to visibility
of stores to other threads.
It sounds you want to see the semantics strengthened in case
of a data race from there being UB to having either the old
or new value being visible to another thread, where at some
point this could change but needs to be consistent for a
single access as expressed in the source code.
This does sound entirely resonable to me and if compilers
already do behave this way (though Paul's comment
seems to imply otherwise), then I think the standard
could easily be changed to ensure this. I do some work to
remove UB and I was already thinking about what could
be done here.
But somebody would have to do the work and propose this. *)
Such a change would need to come with a precise enough
explanation what needs to change and a clear rationale.
My guess is that if one could convince compiler people
- especially those from the clang side that are the most
critical in my experience - then such a proposal would
actually have a very good chance to be accepted.
There would certainly be opposition if this fundamentally
diverges from C++ because no compiler framework will seriously
consider implementing a completely different memory model
for C (or for Rust) than for C++.
I could also imagine that the problem here is that it is
actually very difficult for compilers to give the guarantess
you want, because they evolved from compilers
doing optimization for single threads and and one would
have to fix a lot of issues in the optimizers. So the
actually problem here might be that nobody wants to pay
for fixing the compilers.
Martin
*): https://www.open-std.org/jtc1/sc22/wg14/www/contributing.html
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 4:18 ` Martin Uecker
@ 2025-02-27 5:52 ` Linus Torvalds
2025-02-27 6:56 ` Martin Uecker
` (2 more replies)
0 siblings, 3 replies; 194+ messages in thread
From: Linus Torvalds @ 2025-02-27 5:52 UTC (permalink / raw)
To: Martin Uecker
Cc: Ralf Jung, Paul E. McKenney, Alice Ryhl, Ventura Jack,
Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 at 20:18, Martin Uecker <uecker@tugraz.at> wrote:
>
> This is not really related to "observable" but to visibility
> of stores to other threads.
Yes? What's the difference?
Threading is a fundamental thing. It didn't *use* to be fundamental,
and yes, languages and CPU architectures were designed without taking
it into account.
But a language that was designed this century, wouldn't you agree that
threading is not something unusual or odd or should be an
after-thought, and something as basic as "observable" should take it
into account?
Also note that "visibility of stores to other threads" also does mean
that the loads in those other threads matter.
That's why rematerializing loads is wrong - the value in memory may
simply not be the same value any more, so a load that is
rematerialized is a bug.
> It sounds you want to see the semantics strengthened in case
> of a data race from there being UB to having either the old
> or new value being visible to another thread, where at some
> point this could change but needs to be consistent for a
> single access as expressed in the source code.
Absolutely.
And notice that in the non-UB case - ie when you can rely on locking
or other uniqueness guarantees - you can generate better code.
So "safe rust" should generally not be impacted, and you can make the
very true argument that safe rust can be optimized more aggressively
and migth be faster than unsafe rust.
And I think that should be seen as a feature, and as a basic tenet of
safe vs unsafe. A compiler *should* be able to do better when it
understands the code fully.
> There would certainly be opposition if this fundamentally
> diverges from C++ because no compiler framework will seriously
> consider implementing a completely different memory model
> for C (or for Rust) than for C++.
Well, if the C++ peoiple end up working on some "safe C" model, I bet
they'll face the same issues.
> I could also imagine that the problem here is that it is
> actually very difficult for compilers to give the guarantess
> you want, because they evolved from compilers
> doing optimization for single threads and and one would
> have to fix a lot of issues in the optimizers. So the
> actually problem here might be that nobody wants to pay
> for fixing the compilers.
I actually suspect that most of the work has already been done in practice.
As mentioned, some time ago I checked the whole issue of
rematerializing loads, and at least gcc doesn't rematerialize loads
(and I just double-checked: bad_for_rematerialization_p() returns true
for mem-ops)
I have this memory that people told me that clang similarly
And the C standards committee already made widening stores invalid due
to threading issues.
Are there other issues? Sure. But remat of memory loads is at least
one issue, and it's one that has been painful for the kernel - not
because compilers do it, but because we *fear* compilers doing it so
much.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 5:52 ` Linus Torvalds
@ 2025-02-27 6:56 ` Martin Uecker
2025-02-27 14:29 ` Steven Rostedt
2025-02-27 18:00 ` Ventura Jack
2025-02-27 18:44 ` Ralf Jung
2 siblings, 1 reply; 194+ messages in thread
From: Martin Uecker @ 2025-02-27 6:56 UTC (permalink / raw)
To: Linus Torvalds
Cc: Ralf Jung, Paul E. McKenney, Alice Ryhl, Ventura Jack,
Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
Am Mittwoch, dem 26.02.2025 um 21:52 -0800 schrieb Linus Torvalds:
> On Wed, 26 Feb 2025 at 20:18, Martin Uecker <uecker@tugraz.at> wrote:
> >
> > This is not really related to "observable" but to visibility
> > of stores to other threads.
>
> Yes? What's the difference?
Observable is I/O and volatile accesses. These are things considered
observable from the outside of a process and the only things an
optimizer has to preserve.
Visibility is related to when stores are visible to other threads of
the same process. But this is just an internal concept to give
evaluation of expressions semantics in a multi-threaded
program when objects are accessed from different threads. But
the compiler is free to change any aspect of it, as long as the
observable behavior stays the same.
In practice the difference is not so big for a traditional
optimizer that only has a limited local view and where
"another thread" is basically part of the "outside world".
I personally would have tried to unify this more, but this
was a long time before I got involved in this.
>
> Threading is a fundamental thing. It didn't *use* to be fundamental,
> and yes, languages and CPU architectures were designed without taking
> it into account.
>
> But a language that was designed this century, wouldn't you agree that
> threading is not something unusual or odd or should be an
> after-thought, and something as basic as "observable" should take it
> into account?
>
> Also note that "visibility of stores to other threads" also does mean
> that the loads in those other threads matter.
I agree that this could have been done better. This was bolted
on retrospectively and in a non-optimal way.
>
> That's why rematerializing loads is wrong - the value in memory may
> simply not be the same value any more, so a load that is
> rematerialized is a bug.
I assume that compromises were made very deliberately
to require only limited changes to compilers designed for
optimizing single-threaded code. This could certainly be
reconsidered.
>
> > It sounds you want to see the semantics strengthened in case
> > of a data race from there being UB to having either the old
> > or new value being visible to another thread, where at some
> > point this could change but needs to be consistent for a
> > single access as expressed in the source code.
>
> Absolutely.
>
> And notice that in the non-UB case - ie when you can rely on locking
> or other uniqueness guarantees - you can generate better code.
A compiler would need to understand that certain objects are
only accessed when protected somehow. Currently this is
assumed for everything. If you want to strengthen semantics
for all regular memory accesses, but still allow more optimization
for certain objects, one would need to express this somehow,
e.g. that certain memory is protected by specific locks.
>
> So "safe rust" should generally not be impacted, and you can make the
> very true argument that safe rust can be optimized more aggressively
> and migth be faster than unsafe rust.
>
> And I think that should be seen as a feature, and as a basic tenet of
> safe vs unsafe. A compiler *should* be able to do better when it
> understands the code fully.
>
> > There would certainly be opposition if this fundamentally
> > diverges from C++ because no compiler framework will seriously
> > consider implementing a completely different memory model
> > for C (or for Rust) than for C++.
>
> Well, if the C++ peoiple end up working on some "safe C" model, I bet
> they'll face the same issues.
I assume they will enforce the use of safe high-level
interfaces and this will not affect the memory model.
>
> > I could also imagine that the problem here is that it is
> > actually very difficult for compilers to give the guarantess
> > you want, because they evolved from compilers
> > doing optimization for single threads and and one would
> > have to fix a lot of issues in the optimizers. So the
> > actually problem here might be that nobody wants to pay
> > for fixing the compilers.
>
> I actually suspect that most of the work has already been done in practice.
>
> As mentioned, some time ago I checked the whole issue of
> rematerializing loads, and at least gcc doesn't rematerialize loads
> (and I just double-checked: bad_for_rematerialization_p() returns true
> for mem-ops)
>
> I have this memory that people told me that clang similarly
>
> And the C standards committee already made widening stores invalid due
> to threading issues.
That widening stores are not allowed is a consequence
of the memory model when only using local optimization.
There are not explicitely forbidden, and an optimizer that
could see that it does not affect global observable behavior
could theoretically then widen a store where this is safe,
but in practice no compiler can do such things.
>
> Are there other issues? Sure. But remat of memory loads is at least
> one issue, and it's one that has been painful for the kernel - not
> because compilers do it, but because we *fear* compilers doing it so
> much.
I will talk to some compiler people.
Martin
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 23:08 ` David Laight
@ 2025-02-27 13:55 ` Ralf Jung
0 siblings, 0 replies; 194+ messages in thread
From: Ralf Jung @ 2025-02-27 13:55 UTC (permalink / raw)
To: David Laight
Cc: Ventura Jack, Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds,
airlied, boqun.feng, ej, gregkh, hch, hpa, ksummit, linux-kernel,
rust-for-linux
Hi all,
> ...
>>> Unions in C, C++ and Rust (not Rust "enum"/tagged union) are
>>> generally sharp. In Rust, it requires unsafe Rust to read from
>>> a union.
>>
>> Definitely sharp. At least in Rust we have a very clear specification though,
>> since we do allow arbitrary type punning -- you "just" reinterpret whatever
>> bytes are stored in the union, at whatever type you are reading things. There is
>> also no "active variant" or anything like that, you can use any variant at any
>> time, as long as the bytes are "valid" for the variant you are using. (So for
>> instance if you are trying to read a value 0x03 at type `bool`, that is UB.)
>
> That is actually a big f***ing problem.
> The language has to define the exact behaviour when 'bool' doesn't contain
> 0 or 1.
No, it really does not. If you want a variable that can hold all values in
0..256, use `u8`. The entire point of the `bool` type is to represent values
that can only ever be `true` or `false`. So the language requires that when you
do type-unsafe manipulation of raw bytes, and when you then make the choice of
the `bool` type for that code (which you are not forced to!), then you must
indeed uphold the guarantees of `bool`: the data must be `0x00` or `0x01`.
> Much the same as the function call interface defines whether it is the caller
> or called code is responsible for masking the high bits of a register that
> contains a 'char' type.
>
> Now the answer could be that 'and' is (or may be) a bit-wise operation.
> But that isn't UB, just an undefined/unexpected result.
>
> I've actually no idea if/when current gcc 'sanitises' bool values.
> A very old version used to generate really crap code (and I mean REALLY)
> because it repeatedly sanitised the values.
> But IMHO bool just shouldn't exist, it isn't a hardware type and is actually
> expensive to get right.
> If you use 'int' with zero meaning false there is pretty much no ambiguity.
We have many types in Rust that are not hardware types. Users can even define
them themselves:
enum MyBool { MyFalse, MyTrue }
This is, in fact, one of the entire points of higher-level languages like Rust:
to let users define types that represent concepts that are more abstract than
what exists in hardware. Hardware would also tell us that `&i32` and `*const
i32` are basically the same thing, and yet of course there's a world of a
difference between those types in Rust.
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 20:00 ` Martin Uecker
2025-02-26 21:14 ` Linus Torvalds
@ 2025-02-27 14:21 ` Ventura Jack
2025-02-27 15:27 ` H. Peter Anvin
2025-02-28 8:08 ` Ralf Jung
2 siblings, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-27 14:21 UTC (permalink / raw)
To: Martin Uecker
Cc: Linus Torvalds, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, Feb 26, 2025 at 1:00 PM Martin Uecker <uecker@tugraz.at> wrote:
>
> I think C++ messed up a lot (including time-travel UB, uninitialized
> variables, aliasing ules and much more), but I do not see
> the problem here.
C++26 actually changes the rules of reading uninitialized
variables from being undefined behavior to being
"erroneous behavior", for the purpose of decreasing instances
that can cause UB. Though programmers can still opt-into
the old behavior with UB, on a case by case basis, for the
sake of performance.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 6:56 ` Martin Uecker
@ 2025-02-27 14:29 ` Steven Rostedt
2025-02-27 17:35 ` Paul E. McKenney
0 siblings, 1 reply; 194+ messages in thread
From: Steven Rostedt @ 2025-02-27 14:29 UTC (permalink / raw)
To: Martin Uecker
Cc: Linus Torvalds, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Thu, 27 Feb 2025 07:56:47 +0100
Martin Uecker <uecker@tugraz.at> wrote:
> Observable is I/O and volatile accesses. These are things considered
> observable from the outside of a process and the only things an
> optimizer has to preserve.
>
> Visibility is related to when stores are visible to other threads of
> the same process. But this is just an internal concept to give
> evaluation of expressions semantics in a multi-threaded
> program when objects are accessed from different threads. But
> the compiler is free to change any aspect of it, as long as the
> observable behavior stays the same.
>
> In practice the difference is not so big for a traditional
> optimizer that only has a limited local view and where
> "another thread" is basically part of the "outside world".
So basically you are saying that if the compiler has access to the entire
program (sees the use cases for variables in all threads) that it can
determine what is visible to other threads and what is not, and optimize
accordingly?
Like LTO in the kernel?
-- Steve
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 11:57 ` Gary Guo
@ 2025-02-27 14:43 ` Ventura Jack
0 siblings, 0 replies; 194+ messages in thread
From: Ventura Jack @ 2025-02-27 14:43 UTC (permalink / raw)
To: Gary Guo
Cc: Kent Overstreet, Linus Torvalds, Alice Ryhl, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, Feb 26, 2025 at 1:33 PM Gary Guo <gary@garyguo.net> wrote:
>
>
> If you translate some random C code to all-unsafe Rust I think there's
> a good chance that it's (pedantically) undefined C code but well
> defined Rust code!
I do not believe that this holds all that often. If you look at the bug
reports for one C to Rust transpiler,
https://github.com/immunant/c2rust/issues
some of them have basic C code. A major issue is that C, especially
when "strict aliasing" is turned off through a compiler option,
often in code have aliasing, while unsafe Rust does not protect
against all aliasing and have stricter requirements in some
ways. So it can often be the case that the original C code has
no UB, but the transpiled unsafe Rust version has UB.
The blog posts.
https://lucumr.pocoo.org/2022/1/30/unsafe-rust/
https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/
also touch on this.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 21:39 ` Ralf Jung
@ 2025-02-27 15:11 ` Ventura Jack
2025-02-27 15:32 ` Ralf Jung
0 siblings, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-27 15:11 UTC (permalink / raw)
To: Ralf Jung
Cc: Alice Ryhl, Linus Torvalds, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
On Wed, Feb 26, 2025 at 2:39 PM Ralf Jung <post@ralfj.de> wrote:
> > On the other hand, RefinedRust reuses code from Miri.
>
> No, it does not use code from Miri, it is based on RustBelt -- my PhD thesis
> where I formalized a (rather abstract) version of the borrow checker in Coq/Rocq
> (i.e., in a tool for machine-checked proofs) and manually proved some pieces of
> small but tricky unsafe code to be sound.
I see, the reason why I claimed it was because
https://gitlab.mpi-sws.org/lgaeher/refinedrust-dev
"We currently re-use code from the following projects:
miri: https://github.com/rust-lang/miri (under the MIT license)"
but that code might be from RustBelt as you say, or maybe some
less relevant code, I am guessing.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 14:21 ` Ventura Jack
@ 2025-02-27 15:27 ` H. Peter Anvin
0 siblings, 0 replies; 194+ messages in thread
From: H. Peter Anvin @ 2025-02-27 15:27 UTC (permalink / raw)
To: Ventura Jack, Martin Uecker
Cc: Linus Torvalds, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On February 27, 2025 6:21:24 AM PST, Ventura Jack <venturajack85@gmail.com> wrote:
>On Wed, Feb 26, 2025 at 1:00 PM Martin Uecker <uecker@tugraz.at> wrote:
>>
>> I think C++ messed up a lot (including time-travel UB, uninitialized
>> variables, aliasing ules and much more), but I do not see
>> the problem here.
>
>C++26 actually changes the rules of reading uninitialized
>variables from being undefined behavior to being
>"erroneous behavior", for the purpose of decreasing instances
>that can cause UB. Though programmers can still opt-into
>the old behavior with UB, on a case by case basis, for the
>sake of performance.
>
>Best, VJ.
>
>
Of course, that is effectively what one gets if one treats the compiler warning as binding.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 15:11 ` Ventura Jack
@ 2025-02-27 15:32 ` Ralf Jung
0 siblings, 0 replies; 194+ messages in thread
From: Ralf Jung @ 2025-02-27 15:32 UTC (permalink / raw)
To: Ventura Jack
Cc: Alice Ryhl, Linus Torvalds, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
Hi VJ,
>> No, it does not use code from Miri, it is based on RustBelt -- my PhD thesis
>> where I formalized a (rather abstract) version of the borrow checker in Coq/Rocq
>> (i.e., in a tool for machine-checked proofs) and manually proved some pieces of
>> small but tricky unsafe code to be sound.
>
> I see, the reason why I claimed it was because
>
> https://gitlab.mpi-sws.org/lgaeher/refinedrust-dev
> "We currently re-use code from the following projects:
> miri: https://github.com/rust-lang/miri (under the MIT license)"
>
> but that code might be from RustBelt as you say, or maybe some
> less relevant code, I am guessing.
Ah, there might be some of the logic for getting the MIR out of rustc, or some
test cases. But the "core parts" of Miri (the actual UB checking and Abstract
Machine implementation) don't have anything to do with RefinedRust.
; Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 22:28 ` Ralf Jung
2025-02-26 23:08 ` David Laight
@ 2025-02-27 17:33 ` Ventura Jack
2025-02-27 17:58 ` Ralf Jung
2025-02-27 17:58 ` Miguel Ojeda
1 sibling, 2 replies; 194+ messages in thread
From: Ventura Jack @ 2025-02-27 17:33 UTC (permalink / raw)
To: Ralf Jung
Cc: Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
On Wed, Feb 26, 2025 at 3:28 PM Ralf Jung <post@ralfj.de> wrote:
>
> Hi all,
>
> On 26.02.25 19:09, Ventura Jack wrote:
> > Is Miri the only one of its kind in the programming world?
> > There are not many system languages in mass use, and
> > those are the languages that first and foremost deal
> > with undefined behavior. That would make Miri extra impressive.
>
> I am not aware of a comparable tool that would be in wide-spread use, or that is
> carefully aligned with the semantics of an actual compiler.
> For C, there is Cerberus (https://www.cl.cam.ac.uk/~pes20/cerberus/) as an
> executable version of the C specification, but it can only run tiny examples.
> The verified CompCert compiler comes with a semantics one could interpret, but
> that only checks code for compatibility with CompCert C, which has a lot less
> (and a bit more) UB than real C.
> There are also two efforts that turned into commercial tools that I have not
> tried, and for which there is hardly any documentation of how they interpret the
> C standard so it's not clear what a green light from them means when compiling
> with gcc or clang. I also don't know how much real-world code they can actually run.
> - TrustInSoft/tis-interpreter, mostly gone from the web but still available in
> the wayback machine
> (https://web.archive.org/web/20200804061411/https://github.com/TrustInSoft/tis-interpreter/);
> I assume this got integrated into their "TrustInSoft Analyzer" product.
> - kcc, a K-framework based formalization of C that is executable. The public
> repo is dead (https://github.com/kframework/c-semantics) and when I tried to
> build their tool that didn't work. The people behind this have a company that
> offers "RV-Match" as a commercial product claiming to find bugs in C based on "a
> complete formal ISO C11 semantics" so I guess that is where their efforts go now.
>
> For C++ and Zig, I am not aware of anything comparable.
>
> Part of the problem is that in C, 2 people will have 3 ideas for what the
> standard means. Compiler writers and programmers regularly have wildly
> conflicting ideas of what is and is not allowed. There are many different places
> in the standard that have to be scanned to answer "is this well-defined" even
> for very simple programs. (https://godbolt.org/z/rjaWc6EzG is one of my favorite
> examples.) A tool can check a single well-defined semantics, but who gets to
> decide what exactly those semantics are?
> Formalizing the C standard requires extensive interpretation, so I am skeptical
> of everyone who claims that they "formalized the C standard" and built a tool on
> that without extensive evaluation of how their formalization compares to what
> compilers do and what programmers rely on. The Cerberus people have done that
> evaluation (see e.g. https://dl.acm.org/doi/10.1145/2980983.2908081), but none
> of the other efforts have (to my knowledge). Ideally such a formalization effort
> would be done in close collaboration with compiler authors and the committee so
> that the ambiguities in the standard can be resolved and the formalization
> becomes the one canonical interpretation. The Cerberus people are the ones that
> pushed the C provenance formalization through, so they made great progress here.
> However, many issues remain, some quite long-standing (e.g.
> https://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_260.htm and
> https://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_451.htm, which in my eyes
> never got properly resolved by clarifying the standard). Martin and a few others
> are slowly pushing things in the right direction, but it takes a long time.
> Rust, by having a single project in charge of the one canonical implementation
> and the specification, and having an open process that is well-suited for
> incorporating user concerns, can move a lot quicker here. C has a huge
> head-start, Rust has nothing like the C standard, but we are catching up -- and
> our goal is more ambitious than that; we are doing our best to learn from C and
> C++ and concluded that that style of specification is too prone to ambiguity, so
> we are trying to achieve a formally precise unambiguous specification. Wasm
> shows that this can be done, at industry scale, albeit for a small language --
> time we do it for a large one. :)
>
> So, yes I think Miri is fairly unique. But please let me know if I missed something!
>
> (As an aside, the above hopefully also explains why some people in Rust are
> concerned about alternative implementations. We do *not* want the current
> de-factor behavior to ossify and become the specification. We do *not* want the
> specification to just be a description of what the existing implementations at
> the time happen to do, and declare all behavior differences to be UB or
> unspecified or so just because no implementation is willing to adjust their
> behavior to match the rest. We want the specification to be prescriptive, not
> descriptive, and we will adjust the implementation as we see fit to achieve that
> -- within the scope of Rust's stability guarantees. That's also why we are so
> cagey about spelling out the aliasing rules until we are sure we have a good
> enough model.)
Very interesting, thank you for the exhaustive answer.
Might it be accurate to categorize Miri as a
"formal-semantics-based undefined-behavior-detecting interpreter"?
>https://godbolt.org/z/rjaWc6EzG
That example uses a compiler-specific attribute AFAIK, namely
__attribute__((noinline))
When using compiler-specific attributes and options, the
original language is arguably no longer being used, depending
on the attribute. Though a language being inexpressive and
possibly requiring compiler extensions to achieve some goals,
possibly like in this C example, can be a disadvantage in itself.
> [On formalization]
I agree that Rust has some advantages in regards to formalization,
but some of them that I think of, are different from what you
mention here. And I also see some disadvantages.
C is an ancient language, and parsing and handling C is made
more complex by the preprocessor. Rust is a much younger
language that avoided all that pain, and is easier to parse
and handle. C++ is way worse, though might become closer
to Rust with C++ modules.
Rust is more willing to break existing code in projects, causing
previously compiling projects to no longer compile. rustc does this
rarely, but it has happened, also long after Rust 1.0.
From last year, 2024.
https://internals.rust-lang.org/t/type-inference-breakage-in-1-80-has-not-been-handled-well/21374
"Rust 1.80 broke builds of almost all versions of the
very popular time crate (edit: please don't shoot the
messenger in that GitHub thread!!!)
Rust has left only a 4-month old version working.
That was not enough time for the entire Rust
ecosystem to update, especially that older
versions of time were not yanked, and users
had no advance warning that it will stop working.
A crater run found a regression in over 5000 crates,
and that has just been accepted as okay without
any further action! This is below the level of stability
and reliability that Rust should have."
If C was willing to break code as much as Rust, it would be easier to
clean up C.
There is the Rust feature "editions", which is interesting,
but in my opinion also very experimental from a
programming language theory perspective. It does
help avoid breakage while letting the languages developers
clean up the language and improve it, but has some other
consequences, such as source code having different
semantics in different editions. Automated upgrade
tools help with this, but does not handle all consequences.
If C was made from scratch today, by experts at type theory,
then C would likely have a much simpler type system and type
checking than Rust, and would likely be much easier to formalize.
Programs in C would likely still often be more complex than
in C++ or Rust, however.
>[Omitted] We do *not* want the
> specification to just be a description of what the existing implementations at
> the time happen to do, and declare all behavior differences to be UB or
> unspecified or so just because no implementation is willing to adjust their
> behavior to match the rest. [Omitted]
I have seen some Rust proponents literally say that there is
a specification for Rust, and that it is called rustc/LLVM.
Though those specific individuals may not have been the
most credible individuals.
A fear I have is that there may be hidden reliance in
multiple different ways on LLVM, as well as on rustc.
Maybe even very deeply so. The complexity of Rust's
type system and rustc's type system checking makes
me more worried about this point. If there are hidden
elements, they may turn out to be very difficult to fix,
especially if they are discovered to be fundamental.
While having one compiler can be an advantage in
some ways, it can arguably be a disadvantage
in some other ways, as you acknowledge as well
if I understand you correctly.
You mention ossifying, but the more popular Rust becomes,
the more painful breakage will be, and the less suited
Rust will be as a research language.
Using Crater to test existing Rust projects with, as you
mention later in your email, is an interesting and
possibly very valuable approach, but I do not know
its limitations and disadvantages. Some projects
will be closed source, and thus will presumably
not be checked, as I understand it.
Does Crater run Rust for Linux and relevant Rust
kernel code?
I hope that any new language at least has its
language developers ensure that they have a type
system that is formalized and proven correct
before that langauge's 1.0 release.
Since fixing a type system later can be difficult or
practically impossible. A complex type system
and complex type checking can be a larger risk in this
regard relative to a simple type system and simple
type checking, especially the more time passes and
the more the language is used and have code
written in it, making it more difficult to fix the language
due to code breakage costing more.
Some languages that broke backwards compatibility
arguably suffered or died because of it, like Perl 6
or Scala 3. Python 2 to 3 was arguably successful but painful.
Scala 3 even had automated conversion tools AFAIK.
> > There are some issues in Rust that I am curious as to
> > your views on. rustc or the Rust language has some type
> > system holes, which still causes problems for rustc and
> > their developers.
> >
> > https://github.com/lcnr/solver-woes/issues/1
> > https://github.com/rust-lang/rust/issues/75992
> >
> > Those kinds of issues seem difficult to solve.
> >
> > In your opinion, is it accurate to say that the Rust language
> > developers are working on a new type system for
> > Rust-the-language and a new solver for rustc, and that
> > they are trying to make the new type system and new solver
> > as backwards compatible as possible?
>
> It's not really a new type system. It's a new implementation for the same type
> system. But yes there is work on a new "solver" (that I am not involved in) that
> should finally fix some of the long-standing type system bugs. Specifically,
> this is a "trait solver", i.e. it is the component responsible for dealing with
> trait constraints. Due to some unfortunate corner-case behaviors of the old,
> organically grown solver, it's very hard to do this in a backwards-compatible
> way, but we have infrastructure for extensive ecosystem-wide testing to judge
> the consequences of any given potential breaking change and ensure that almost
> all existing code keeps working. In fact, Rust 1.84 already started using the
> new solver for some things
> (https://blog.rust-lang.org/2025/01/09/Rust-1.84.0.html) -- did you notice?
> Hopefully not. :)
If it is not a new type system, why then do they talk about
backwards compatibility for existing Rust projects?
If the type system is not changed, existing projects would
still type check. And in this repository of one of the main
Rust language developers as I understand it, several
issues are labeled with "S-fear".
https://github.com/lcnr/solver-woes/issues
They have also been working on this new solver for
several years. Reading through the issues, a lot of
the problems seem very hard.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 14:29 ` Steven Rostedt
@ 2025-02-27 17:35 ` Paul E. McKenney
2025-02-27 18:13 ` Kent Overstreet
0 siblings, 1 reply; 194+ messages in thread
From: Paul E. McKenney @ 2025-02-27 17:35 UTC (permalink / raw)
To: Steven Rostedt
Cc: Martin Uecker, Linus Torvalds, Ralf Jung, Alice Ryhl,
Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Thu, Feb 27, 2025 at 09:29:49AM -0500, Steven Rostedt wrote:
> On Thu, 27 Feb 2025 07:56:47 +0100
> Martin Uecker <uecker@tugraz.at> wrote:
>
> > Observable is I/O and volatile accesses. These are things considered
> > observable from the outside of a process and the only things an
> > optimizer has to preserve.
> >
> > Visibility is related to when stores are visible to other threads of
> > the same process. But this is just an internal concept to give
> > evaluation of expressions semantics in a multi-threaded
> > program when objects are accessed from different threads. But
> > the compiler is free to change any aspect of it, as long as the
> > observable behavior stays the same.
> >
> > In practice the difference is not so big for a traditional
> > optimizer that only has a limited local view and where
> > "another thread" is basically part of the "outside world".
>
> So basically you are saying that if the compiler has access to the entire
> program (sees the use cases for variables in all threads) that it can
> determine what is visible to other threads and what is not, and optimize
> accordingly?
>
> Like LTO in the kernel?
LTO is a small step in that direction. In the most extreme case, the
compiler simply takes a quick glance at the code and the input data and
oracularly generates the output.
Which is why my arguments against duplicating atomic loads have been
based on examples where doing so breaks basic arithmetic. :-/
Thanx, Paul
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 17:33 ` Ventura Jack
@ 2025-02-27 17:58 ` Ralf Jung
2025-02-27 19:06 ` Ventura Jack
2025-02-27 17:58 ` Miguel Ojeda
1 sibling, 1 reply; 194+ messages in thread
From: Ralf Jung @ 2025-02-27 17:58 UTC (permalink / raw)
To: Ventura Jack
Cc: Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
Hi VJ,
>> I am not aware of a comparable tool that would be in wide-spread use, or that is
>> carefully aligned with the semantics of an actual compiler.
>> For C, there is Cerberus (https://www.cl.cam.ac.uk/~pes20/cerberus/) as an
>> executable version of the C specification, but it can only run tiny examples.
>> The verified CompCert compiler comes with a semantics one could interpret, but
>> that only checks code for compatibility with CompCert C, which has a lot less
>> (and a bit more) UB than real C.
>> There are also two efforts that turned into commercial tools that I have not
>> tried, and for which there is hardly any documentation of how they interpret the
>> C standard so it's not clear what a green light from them means when compiling
>> with gcc or clang. I also don't know how much real-world code they can actually run.
>> - TrustInSoft/tis-interpreter, mostly gone from the web but still available in
>> the wayback machine
>> (https://web.archive.org/web/20200804061411/https://github.com/TrustInSoft/tis-interpreter/);
>> I assume this got integrated into their "TrustInSoft Analyzer" product.
>> - kcc, a K-framework based formalization of C that is executable. The public
>> repo is dead (https://github.com/kframework/c-semantics) and when I tried to
>> build their tool that didn't work. The people behind this have a company that
>> offers "RV-Match" as a commercial product claiming to find bugs in C based on "a
>> complete formal ISO C11 semantics" so I guess that is where their efforts go now.
>>
>> For C++ and Zig, I am not aware of anything comparable.
>>
>> Part of the problem is that in C, 2 people will have 3 ideas for what the
>> standard means. Compiler writers and programmers regularly have wildly
>> conflicting ideas of what is and is not allowed. There are many different places
>> in the standard that have to be scanned to answer "is this well-defined" even
>> for very simple programs. (https://godbolt.org/z/rjaWc6EzG is one of my favorite
>> examples.) A tool can check a single well-defined semantics, but who gets to
>> decide what exactly those semantics are?
>> Formalizing the C standard requires extensive interpretation, so I am skeptical
>> of everyone who claims that they "formalized the C standard" and built a tool on
>> that without extensive evaluation of how their formalization compares to what
>> compilers do and what programmers rely on. The Cerberus people have done that
>> evaluation (see e.g. https://dl.acm.org/doi/10.1145/2980983.2908081), but none
>> of the other efforts have (to my knowledge). Ideally such a formalization effort
>> would be done in close collaboration with compiler authors and the committee so
>> that the ambiguities in the standard can be resolved and the formalization
>> becomes the one canonical interpretation. The Cerberus people are the ones that
>> pushed the C provenance formalization through, so they made great progress here.
>> However, many issues remain, some quite long-standing (e.g.
>> https://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_260.htm and
>> https://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_451.htm, which in my eyes
>> never got properly resolved by clarifying the standard). Martin and a few others
>> are slowly pushing things in the right direction, but it takes a long time.
>> Rust, by having a single project in charge of the one canonical implementation
>> and the specification, and having an open process that is well-suited for
>> incorporating user concerns, can move a lot quicker here. C has a huge
>> head-start, Rust has nothing like the C standard, but we are catching up -- and
>> our goal is more ambitious than that; we are doing our best to learn from C and
>> C++ and concluded that that style of specification is too prone to ambiguity, so
>> we are trying to achieve a formally precise unambiguous specification. Wasm
>> shows that this can be done, at industry scale, albeit for a small language --
>> time we do it for a large one. :)
>>
>> So, yes I think Miri is fairly unique. But please let me know if I missed something!
>>
>> (As an aside, the above hopefully also explains why some people in Rust are
>> concerned about alternative implementations. We do *not* want the current
>> de-factor behavior to ossify and become the specification. We do *not* want the
>> specification to just be a description of what the existing implementations at
>> the time happen to do, and declare all behavior differences to be UB or
>> unspecified or so just because no implementation is willing to adjust their
>> behavior to match the rest. We want the specification to be prescriptive, not
>> descriptive, and we will adjust the implementation as we see fit to achieve that
>> -- within the scope of Rust's stability guarantees. That's also why we are so
>> cagey about spelling out the aliasing rules until we are sure we have a good
>> enough model.)
>
> Very interesting, thank you for the exhaustive answer.
>
> Might it be accurate to categorize Miri as a
> "formal-semantics-based undefined-behavior-detecting interpreter"?
Sure, why not. :)
>
>> https://godbolt.org/z/rjaWc6EzG
>
> That example uses a compiler-specific attribute AFAIK, namely
>
> __attribute__((noinline))
>
> When using compiler-specific attributes and options, the
> original language is arguably no longer being used, depending
> on the attribute. Though a language being inexpressive and
> possibly requiring compiler extensions to achieve some goals,
> possibly like in this C example, can be a disadvantage in itself.
That attribute just exists to make the example small and fit in a single file.
If you user multiple translation units, you can achieve the same effect without
the attribute. Anyway compilers promise (I hope^^) that that particular
attribute has no bearing on whether the code has UB. So, the question of whether
the program without the attribute has UB is still a very interesting one.
At least clang treats this code as having UB, and one can construct a similar
example for gcc. IMO this is not backed by the standard itself, though it can be
considered backed by some defect reports -- but those were for earlier versions
of the standard so technically, they do not apply to C23.
>> [On formalization]
>
> I agree that Rust has some advantages in regards to formalization,
> but some of them that I think of, are different from what you
> mention here. And I also see some disadvantages.
>
> C is an ancient language, and parsing and handling C is made
> more complex by the preprocessor. Rust is a much younger
> language that avoided all that pain, and is easier to parse
> and handle. C++ is way worse, though might become closer
> to Rust with C++ modules.
>
> Rust is more willing to break existing code in projects, causing
> previously compiling projects to no longer compile. rustc does this
> rarely, but it has happened, also long after Rust 1.0.
>
> From last year, 2024.
>
> https://internals.rust-lang.org/t/type-inference-breakage-in-1-80-has-not-been-handled-well/21374
> "Rust 1.80 broke builds of almost all versions of the
> very popular time crate (edit: please don't shoot the
> messenger in that GitHub thread!!!)
>
> Rust has left only a 4-month old version working.
> That was not enough time for the entire Rust
> ecosystem to update, especially that older
> versions of time were not yanked, and users
> had no advance warning that it will stop working.
>
> A crater run found a regression in over 5000 crates,
> and that has just been accepted as okay without
> any further action! This is below the level of stability
> and reliability that Rust should have."
>
> If C was willing to break code as much as Rust, it would be easier to
> clean up C.
Is that true? Gcc updates do break code.
>> [Omitted] We do *not* want the
>> specification to just be a description of what the existing implementations at
>> the time happen to do, and declare all behavior differences to be UB or
>> unspecified or so just because no implementation is willing to adjust their
>> behavior to match the rest. [Omitted]
>
> I have seen some Rust proponents literally say that there is
> a specification for Rust, and that it is called rustc/LLVM.
> Though those specific individuals may not have been the
> most credible individuals.
Maybe don't take the word of random Rust proponents on the internet as anything
more than that. :) I can't speak for the entire Rust project, but I can speak
as lead of the operational semantics team of the Rust project -- no, we do not
consider rustc/LLVM to be a satisfying spec. Producing a proper spec is on the
project agenda.
> A fear I have is that there may be hidden reliance in
> multiple different ways on LLVM, as well as on rustc.
> Maybe even very deeply so. The complexity of Rust's
> type system and rustc's type system checking makes
> me more worried about this point. If there are hidden
> elements, they may turn out to be very difficult to fix,
> especially if they are discovered to be fundamental.
> While having one compiler can be an advantage in
> some ways, it can arguably be a disadvantage
> in some other ways, as you acknowledge as well
> if I understand you correctly.
The Rust type system has absolutely nothing to do with LLVM. Those are
completely separate parts of the compiler. So I don't see any way that LLVM
could possibly influence our type system.
We already discussed previously that indeed, the Rust operational semantics has
a risk of overfitting to LLVM. I acknowledge that.
> You mention ossifying, but the more popular Rust becomes,
> the more painful breakage will be, and the less suited
> Rust will be as a research language.
I do not consider Rust a research language. :)
> Does Crater run Rust for Linux and relevant Rust
> kernel code?
Even better: every single change that lands in Rust checks Rust-for-Linux as
part of our CI.
> I hope that any new language at least has its
> language developers ensure that they have a type
> system that is formalized and proven correct
> before that langauge's 1.0 release.
> Since fixing a type system later can be difficult or
> practically impossible. A complex type system
> and complex type checking can be a larger risk in this
> regard relative to a simple type system and simple
> type checking, especially the more time passes and
> the more the language is used and have code
> written in it, making it more difficult to fix the language
> due to code breakage costing more.
Uff, that's a very high bar to pass.^^ I think there's maybe two languages ever
that meet this bar? SML and wasm.
>>> There are some issues in Rust that I am curious as to
>>> your views on. rustc or the Rust language has some type
>>> system holes, which still causes problems for rustc and
>>> their developers.
>>>
>>> https://github.com/lcnr/solver-woes/issues/1
>>> https://github.com/rust-lang/rust/issues/75992
>>>
>>> Those kinds of issues seem difficult to solve.
>>>
>>> In your opinion, is it accurate to say that the Rust language
>>> developers are working on a new type system for
>>> Rust-the-language and a new solver for rustc, and that
>>> they are trying to make the new type system and new solver
>>> as backwards compatible as possible?
>>
>> It's not really a new type system. It's a new implementation for the same type
>> system. But yes there is work on a new "solver" (that I am not involved in) that
>> should finally fix some of the long-standing type system bugs. Specifically,
>> this is a "trait solver", i.e. it is the component responsible for dealing with
>> trait constraints. Due to some unfortunate corner-case behaviors of the old,
>> organically grown solver, it's very hard to do this in a backwards-compatible
>> way, but we have infrastructure for extensive ecosystem-wide testing to judge
>> the consequences of any given potential breaking change and ensure that almost
>> all existing code keeps working. In fact, Rust 1.84 already started using the
>> new solver for some things
>> (https://blog.rust-lang.org/2025/01/09/Rust-1.84.0.html) -- did you notice?
>> Hopefully not. :)
>
> If it is not a new type system, why then do they talk about
> backwards compatibility for existing Rust projects?
If you make a tiny change to a type system, is it a "new type system"? "new type
system" sounds like "from-scratch redesign". That's not what happens.
> If the type system is not changed, existing projects would
> still type check. And in this repository of one of the main
> Rust language developers as I understand it, several
> issues are labeled with "S-fear".
>
> https://github.com/lcnr/solver-woes/issues
>
> They have also been working on this new solver for
> several years. Reading through the issues, a lot of
> the problems seem very hard.
It is hard, indeed. But last I knew, the types team is confident that they can
pull it off, and I have confidence in them.
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 17:33 ` Ventura Jack
2025-02-27 17:58 ` Ralf Jung
@ 2025-02-27 17:58 ` Miguel Ojeda
2025-02-27 19:25 ` Ventura Jack
1 sibling, 1 reply; 194+ messages in thread
From: Miguel Ojeda @ 2025-02-27 17:58 UTC (permalink / raw)
To: Ventura Jack
Cc: Ralf Jung, Kent Overstreet, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
On Thu, Feb 27, 2025 at 6:34 PM Ventura Jack <venturajack85@gmail.com> wrote:
>
> I have seen some Rust proponents literally say that there is
> a specification for Rust, and that it is called rustc/LLVM.
> Though those specific individuals may not have been the
> most credible individuals.
These "Some say..." arguments are not really useful, to be honest.
> A fear I have is that there may be hidden reliance in
> multiple different ways on LLVM, as well as on rustc.
> Maybe even very deeply so. The complexity of Rust's
> type system and rustc's type system checking makes
> me more worried about this point. If there are hidden
> elements, they may turn out to be very difficult to fix,
> especially if they are discovered to be fundamental.
If you have concrete concerns (apart from the ones you already raised
so far which are not really applicable), please explain them.
Otherwise, this sounds a bit like an appeal to fear, sorry.
> You mention ossifying, but the more popular Rust becomes,
> the more painful breakage will be, and the less suited
> Rust will be as a research language.
Rust is not a research language -- I guess you may be including
features that are not promised to be stable, but that means even C
would a research language... :)
> Using Crater to test existing Rust projects with, as you
> mention later in your email, is an interesting and
> possibly very valuable approach, but I do not know
> its limitations and disadvantages. Some projects
> will be closed source, and thus will presumably
> not be checked, as I understand it.
Well, one advantage for open source ;)
> Does Crater run Rust for Linux and relevant Rust
> kernel code?
We do something better: every PR is required to build part of the Rust
kernel code in one config.
That does not even happen with either Clang or GCC (though the Clang
maintainer was open to a proposal when I talked to him about it).
Cheers,
Miguel
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 5:52 ` Linus Torvalds
2025-02-27 6:56 ` Martin Uecker
@ 2025-02-27 18:00 ` Ventura Jack
2025-02-27 18:44 ` Ralf Jung
2 siblings, 0 replies; 194+ messages in thread
From: Ventura Jack @ 2025-02-27 18:00 UTC (permalink / raw)
To: Linus Torvalds
Cc: Martin Uecker, Ralf Jung, Paul E. McKenney, Alice Ryhl,
Kent Overstreet, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, Feb 26, 2025 at 10:52 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> So "safe rust" should generally not be impacted, and you can make the
> very true argument that safe rust can be optimized more aggressively
> and migth be faster than unsafe rust.
>
> And I think that should be seen as a feature, and as a basic tenet of
> safe vs unsafe. A compiler *should* be able to do better when it
> understands the code fully.
For safe Rust and unsafe Rust, practice is in some cases the reverse.
Like how some safe Rust code uses runtime bounds checking,
and unsafe Rust code enables using unsafe-but-faster alternatives.
https://doc.rust-lang.org/std/primitive.slice.html#method.get_unchecked
https://users.rust-lang.org/t/if-a-project-is-built-in-release-mode-are-there-any-runtime-checks-enabled-by-default/51349
Safe Rust can sometimes have automated optimizations done
by the compiler. This sometimes is done, for instance to do
autovectorization as I understand it. Some Rust libraries
for decoding images have achieved comparable performance
to Wuffs that way. But, some Rust developers have complained
that in their projects, that sometimes, in one rustc compiler
version they get autovectorization and good performance,
but after they upgraded compiler version, the optimization
was no longer done by the compiler, and performance suffered
from it.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 17:35 ` Paul E. McKenney
@ 2025-02-27 18:13 ` Kent Overstreet
2025-02-27 19:10 ` Paul E. McKenney
0 siblings, 1 reply; 194+ messages in thread
From: Kent Overstreet @ 2025-02-27 18:13 UTC (permalink / raw)
To: Paul E. McKenney
Cc: Steven Rostedt, Martin Uecker, Linus Torvalds, Ralf Jung,
Alice Ryhl, Ventura Jack, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Thu, Feb 27, 2025 at 09:35:10AM -0800, Paul E. McKenney wrote:
> On Thu, Feb 27, 2025 at 09:29:49AM -0500, Steven Rostedt wrote:
> > On Thu, 27 Feb 2025 07:56:47 +0100
> > Martin Uecker <uecker@tugraz.at> wrote:
> >
> > > Observable is I/O and volatile accesses. These are things considered
> > > observable from the outside of a process and the only things an
> > > optimizer has to preserve.
> > >
> > > Visibility is related to when stores are visible to other threads of
> > > the same process. But this is just an internal concept to give
> > > evaluation of expressions semantics in a multi-threaded
> > > program when objects are accessed from different threads. But
> > > the compiler is free to change any aspect of it, as long as the
> > > observable behavior stays the same.
> > >
> > > In practice the difference is not so big for a traditional
> > > optimizer that only has a limited local view and where
> > > "another thread" is basically part of the "outside world".
> >
> > So basically you are saying that if the compiler has access to the entire
> > program (sees the use cases for variables in all threads) that it can
> > determine what is visible to other threads and what is not, and optimize
> > accordingly?
> >
> > Like LTO in the kernel?
>
> LTO is a small step in that direction. In the most extreme case, the
> compiler simply takes a quick glance at the code and the input data and
> oracularly generates the output.
>
> Which is why my arguments against duplicating atomic loads have been
> based on examples where doing so breaks basic arithmetic. :-/
Please tell me that wasn't something that seriously needed to be said...
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 23:16 ` Linus Torvalds
2025-02-27 0:17 ` Kent Overstreet
2025-02-27 0:26 ` comex
@ 2025-02-27 18:33 ` Ralf Jung
2025-02-27 19:15 ` Linus Torvalds
2025-03-06 19:16 ` Ventura Jack
3 siblings, 1 reply; 194+ messages in thread
From: Ralf Jung @ 2025-02-27 18:33 UTC (permalink / raw)
To: Linus Torvalds, Kent Overstreet
Cc: Martin Uecker, Paul E. McKenney, Alice Ryhl, Ventura Jack,
Gary Guo, airlied, boqun.feng, david.laight.linux, ej, gregkh,
hch, hpa, ksummit, linux-kernel, miguel.ojeda.sandonis,
rust-for-linux
Hi Linus,
On 27.02.25 00:16, Linus Torvalds wrote:
> On Wed, 26 Feb 2025 at 14:27, Kent Overstreet <kent.overstreet@linux.dev> wrote:
>>
>> This is another one that's entirely eliminated due to W^X references.
>
> Are you saying rust cannot have global flags?
The way you do global flags in Rust is like this:
static FLAG: AtomicBool = AtomicBool::new(false);
// Thread A
FLAG.store(true, Ordering::SeqCst); // or release/acquire/relaxed
// Thread B
let val = FLAG.load(Ordering::SeqCst);
if val { // or release/acquire/relaxed
// ...
}
println!("{}", val);
If you do this, the TOCTOU issues you mention all disappear. The compiler is
indeed *not* allowed to re-load `FLAG` a second time for the `println`.
If you try do to do this without atomics, the program has a data race, and that
is considered UB in Rust just like in C and C++. So, you cannot do concurrency
with "*ptr = val;" or "ptr2.copy_from(ptr1)" or anything like that. You can only
do concurrency with atomics. That's how compilers reconcile "optimize sequential
code where there's no concurrency concerns" with "give programmers the ability
to reliably program concurrent systems": the programmer has to tell the compiler
whenever concurrency concerns are in play. This may sound terribly hard, but the
Rust type system is pretty good at tracking this, so in practice it is generally
not a big problem to keep track of which data can be accessed concurrently and
which cannot.
Just to be clear, since I know you don't like "atomic objects": Rust does not
have atomic objects. The AtomicBool type is primarily a convenience so that you
don't accidentally cause a data race by doing concurrent non-atomic accesses.
But ultimately, the underlying model is based on the properties of individual
memory accesses (non-atomic, atomic-seqcst, atomic-relaxed, ...).
By using the C++ memory model (in an access-based way, which is possible -- the
"object-based" view is not fundamental to the model), we can have reliable
concurrent programming (no TOCTOU introduced by the compiler) while also still
considering (non-volatile) memory accesses to be entirely "not observable" as
far as compiler guarantees go. The load and store in the example above are not
"observable" in that sense. After all, it's not the loads and stores that
matter, it's what the program does with the values it loads. However, the
abstract description of the possible behaviors of the source program above
*does* guarantee that `val` has the same value everywhere it is used, and
therefore everything you do with `val` that you can actually see (like printing,
or using it to cause MMIO accesses, or whatever) has to behave in a consistent
way. That may sound round-about, but it does square the circle successfully, if
one is willing to accept "the programmer has to tell the compiler whenever
concurrency concerns are in play". As far as I understand, the kernel already
effectively does this with a suite of macros, so this should not be a
fundamentally new constraint.
Kind regards,
Ralf
>
> That seems unlikely. And broken if so.
>
>> IOW: if you're writing code where rematerializing reads is even a
>> _concern_ in Rust, then you had to drop to unsafe {} to do it - and your
>> code is broken, and yes it will have UB.
>
> If you need to drop to unsafe mode just to read a global flag that may
> be set concurrently, you're doing something wrong as a language
> designer.
>
> And if your language then rematerializes reads, the language is shit.
>
> Really.
>
> Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 5:52 ` Linus Torvalds
2025-02-27 6:56 ` Martin Uecker
2025-02-27 18:00 ` Ventura Jack
@ 2025-02-27 18:44 ` Ralf Jung
2 siblings, 0 replies; 194+ messages in thread
From: Ralf Jung @ 2025-02-27 18:44 UTC (permalink / raw)
To: Linus Torvalds, Martin Uecker
Cc: Paul E. McKenney, Alice Ryhl, Ventura Jack, Kent Overstreet,
Gary Guo, airlied, boqun.feng, david.laight.linux, ej, gregkh,
hch, hpa, ksummit, linux-kernel, miguel.ojeda.sandonis,
rust-for-linux
Hi,
> So "safe rust" should generally not be impacted, and you can make the
> very true argument that safe rust can be optimized more aggressively
> and migth be faster than unsafe rust.
>
> And I think that should be seen as a feature, and as a basic tenet of
> safe vs unsafe. A compiler *should* be able to do better when it
> understands the code fully.
That's not quite how it works in Rust. One basic tenet of unsafe is that unsafe
does not impact program semantics at all. It would be very surprising to most
Rust folks if adding or removing or changing the scope of an unsafe block could
change what my program does (assuming the program still builds and passes the
usual safety checks).
Now, is there an interesting design space for a language where the programmer
somehow marks blocks of code where the semantics should be "more careful"?
Absolutely, I think that is quite interesting. However, it's also not at all
clear to me how that should actually be done, if you try to get down to it and
write out the proper precise, ideally even formal, spec. Rust is not exploring
that design space, at least not thus far. In fact, it is common in Rust to use
`unsafe` to get better performance (e.g., by using a not-bounds-checked array
access), and so it would be counter to the goals of those people if we then
optimized their code less because it uses `unsafe`.
There's also the problem that quite a few optimizations rely on "universal
properties" -- properties that are true everywhere in the program. If you allow
even the smallest exception, that reasoning breaks down. Aliasing rules are an
example of that: there's no point in saying "references are subject to strict
aliasing requirements in safe code, but in unsafe blocks you are allowed to
break that". That would be useless, then we might as well remove the aliasing
requirements entirely (for the optimizer; we'd keep the borrow checker of
course). The entire point of aliasing requirements is that when I optimize safe
code with no unsafe code in sight, I can make assumptions about the code in the
rest of the program. If I cannot make those assumptions any more, because some
unsafe code somewhere might actually legally break the aliasing rules, then I
cannot even optimize safe code any more. (I can still do the always-correct
purely local aliasing analysis you mentioned, of course. But I can no longer use
the Rust type system to provide any guidance, not even in entirely safe code.)
Kind regards,
Ralf
>
>> There would certainly be opposition if this fundamentally
>> diverges from C++ because no compiler framework will seriously
>> consider implementing a completely different memory model
>> for C (or for Rust) than for C++.
>
> Well, if the C++ peoiple end up working on some "safe C" model, I bet
> they'll face the same issues.
>
>> I could also imagine that the problem here is that it is
>> actually very difficult for compilers to give the guarantess
>> you want, because they evolved from compilers
>> doing optimization for single threads and and one would
>> have to fix a lot of issues in the optimizers. So the
>> actually problem here might be that nobody wants to pay
>> for fixing the compilers.
>
> I actually suspect that most of the work has already been done in practice.
>
> As mentioned, some time ago I checked the whole issue of
> rematerializing loads, and at least gcc doesn't rematerialize loads
> (and I just double-checked: bad_for_rematerialization_p() returns true
> for mem-ops)
>
> I have this memory that people told me that clang similarly
>
> And the C standards committee already made widening stores invalid due
> to threading issues.
>
> Are there other issues? Sure. But remat of memory loads is at least
> one issue, and it's one that has been painful for the kernel - not
> because compilers do it, but because we *fear* compilers doing it so
> much.
>
> Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 17:58 ` Ralf Jung
@ 2025-02-27 19:06 ` Ventura Jack
2025-02-27 19:45 ` Ralf Jung
0 siblings, 1 reply; 194+ messages in thread
From: Ventura Jack @ 2025-02-27 19:06 UTC (permalink / raw)
To: Ralf Jung
Cc: Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
On Thu, Feb 27, 2025 at 10:58 AM Ralf Jung <post@ralfj.de> wrote:
> >> https://godbolt.org/z/rjaWc6EzG
> >
> > That example uses a compiler-specific attribute AFAIK, namely
> >
> > __attribute__((noinline))
> >
> > When using compiler-specific attributes and options, the
> > original language is arguably no longer being used, depending
> > on the attribute. Though a language being inexpressive and
> > possibly requiring compiler extensions to achieve some goals,
> > possibly like in this C example, can be a disadvantage in itself.
>
> That attribute just exists to make the example small and fit in a single file.
> If you user multiple translation units, you can achieve the same effect without
> the attribute. Anyway compilers promise (I hope^^) that that particular
> attribute has no bearing on whether the code has UB. So, the question of whether
> the program without the attribute has UB is still a very interesting one.
>
> At least clang treats this code as having UB, and one can construct a similar
> example for gcc. IMO this is not backed by the standard itself, though it can be
> considered backed by some defect reports -- but those were for earlier versions
> of the standard so technically, they do not apply to C23.
That is fair. For C++26, I suspect that the behavior will actually
be officially defined as "erroneous behavior". For C, it is very
unfortunate if the compilers are more strict than the standard
in this case.
I wonder why if that is the case here. C and C++ (also before C++26)
differ on that subject. The differences between C and C++ have
likely caused bugs and issues for both compilers and users.
Though the cause could also be something else.
I am surprised that the C standard is lax on this point in some
cases. It is related to values that are or are not trap representations/
non-value representations, and variables that could or could
not be registers, as I understand one explanation I found.
> > Rust is more willing to break existing code in projects, causing
> > previously compiling projects to no longer compile. rustc does this
> > rarely, but it has happened, also long after Rust 1.0.
> >
> > From last year, 2024.
> >
> > https://internals.rust-lang.org/t/type-inference-breakage-in-1-80-has-not-been-handled-well/21374
> > "Rust 1.80 broke builds of almost all versions of the
> > very popular time crate (edit: please don't shoot the
> > messenger in that GitHub thread!!!)
> >
> > Rust has left only a 4-month old version working.
> > That was not enough time for the entire Rust
> > ecosystem to update, especially that older
> > versions of time were not yanked, and users
> > had no advance warning that it will stop working.
> >
> > A crater run found a regression in over 5000 crates,
> > and that has just been accepted as okay without
> > any further action! This is below the level of stability
> > and reliability that Rust should have."
> >
> > If C was willing to break code as much as Rust, it would be easier to
> > clean up C.
>
> Is that true? Gcc updates do break code.
Surely not as much as Rust, right? From what I hear from users
of Rust and of C, some Rust developers complain about
Rust breaking a lot and being unstable, while I instead
hear complaints about C and C++ being unwilling to break
compatibility.
Rust does admittedly a lot of the time have tools to
mitigate it, but Rust sometimes go beyond that.
C code from 20 years ago can often be compiled
without modification on a new compiler, that is a common
experience I hear about. While I do not know if that
would hold true for Rust code. Though Rust has editions.
The time crate breaking example above does not
seem nice.
> > A fear I have is that there may be hidden reliance in
> > multiple different ways on LLVM, as well as on rustc.
> > Maybe even very deeply so. The complexity of Rust's
> > type system and rustc's type system checking makes
> > me more worried about this point. If there are hidden
> > elements, they may turn out to be very difficult to fix,
> > especially if they are discovered to be fundamental.
> > While having one compiler can be an advantage in
> > some ways, it can arguably be a disadvantage
> > in some other ways, as you acknowledge as well
> > if I understand you correctly.
>
> The Rust type system has absolutely nothing to do with LLVM. Those are
> completely separate parts of the compiler. So I don't see any way that LLVM
> could possibly influence our type system.
Sorry for the ambiguity, I packed too much different
information into the same block.
> > You mention ossifying, but the more popular Rust becomes,
> > the more painful breakage will be, and the less suited
> > Rust will be as a research language.
>
> I do not consider Rust a research language. :)
It reminds me of Scala, in some ways, and some complained
about Scala having too much of a research and experimental
focus. I have heard similar complaints about Rust being
too experimental, and that was part of why they did not
wish to adopt it in some organizations. On the other hand,
Amazon Web Services and other companies already
use Rust extensively. AWS might have more than 300
Rust developer employed. The more usage and code,
the more painful breaking changes might be.
> > I hope that any new language at least has its
> > language developers ensure that they have a type
> > system that is formalized and proven correct
> > before that langauge's 1.0 release.
> > Since fixing a type system later can be difficult or
> > practically impossible. A complex type system
> > and complex type checking can be a larger risk in this
> > regard relative to a simple type system and simple
> > type checking, especially the more time passes and
> > the more the language is used and have code
> > written in it, making it more difficult to fix the language
> > due to code breakage costing more.
>
> Uff, that's a very high bar to pass.^^ I think there's maybe two languages ever
> that meet this bar? SML and wasm.
You may be right about the bar being too high.
I would have hoped that it would be easier to achieve
with modern programming language research and
advances.
> >>> There are some issues in Rust that I am curious as to
> >>> your views on. rustc or the Rust language has some type
> >>> system holes, which still causes problems for rustc and
> >>> their developers.
> >>>
> >>> https://github.com/lcnr/solver-woes/issues/1
> >>> https://github.com/rust-lang/rust/issues/75992
> >>>
> >>> Those kinds of issues seem difficult to solve.
> >>>
> >>> In your opinion, is it accurate to say that the Rust language
> >>> developers are working on a new type system for
> >>> Rust-the-language and a new solver for rustc, and that
> >>> they are trying to make the new type system and new solver
> >>> as backwards compatible as possible?
> >>
> >> It's not really a new type system. It's a new implementation for the same type
> >> system. But yes there is work on a new "solver" (that I am not involved in) that
> >> should finally fix some of the long-standing type system bugs. Specifically,
> >> this is a "trait solver", i.e. it is the component responsible for dealing with
> >> trait constraints. Due to some unfortunate corner-case behaviors of the old,
> >> organically grown solver, it's very hard to do this in a backwards-compatible
> >> way, but we have infrastructure for extensive ecosystem-wide testing to judge
> >> the consequences of any given potential breaking change and ensure that almost
> >> all existing code keeps working. In fact, Rust 1.84 already started using the
> >> new solver for some things
> >> (https://blog.rust-lang.org/2025/01/09/Rust-1.84.0.html) -- did you notice?
> >> Hopefully not. :)
> >
> > If it is not a new type system, why then do they talk about
> > backwards compatibility for existing Rust projects?
>
> If you make a tiny change to a type system, is it a "new type system"? "new type
> system" sounds like "from-scratch redesign". That's not what happens.
I can see your point, but a different type system would be
different. It may be a matter of definition. In practice, the
significance and consequences would arguably depend on
how much backwards compatibility it has, and how many and
how much existing projects are broken.
So far, it appears to require a lot of work and effort for
some of the Rust language developers, and my impression
at a glance is that they have significant expertise, yet have
worked on it for years.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 18:13 ` Kent Overstreet
@ 2025-02-27 19:10 ` Paul E. McKenney
0 siblings, 0 replies; 194+ messages in thread
From: Paul E. McKenney @ 2025-02-27 19:10 UTC (permalink / raw)
To: Kent Overstreet
Cc: Steven Rostedt, Martin Uecker, Linus Torvalds, Ralf Jung,
Alice Ryhl, Ventura Jack, Gary Guo, airlied, boqun.feng,
david.laight.linux, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Thu, Feb 27, 2025 at 01:13:40PM -0500, Kent Overstreet wrote:
> On Thu, Feb 27, 2025 at 09:35:10AM -0800, Paul E. McKenney wrote:
> > On Thu, Feb 27, 2025 at 09:29:49AM -0500, Steven Rostedt wrote:
> > > On Thu, 27 Feb 2025 07:56:47 +0100
> > > Martin Uecker <uecker@tugraz.at> wrote:
> > >
> > > > Observable is I/O and volatile accesses. These are things considered
> > > > observable from the outside of a process and the only things an
> > > > optimizer has to preserve.
> > > >
> > > > Visibility is related to when stores are visible to other threads of
> > > > the same process. But this is just an internal concept to give
> > > > evaluation of expressions semantics in a multi-threaded
> > > > program when objects are accessed from different threads. But
> > > > the compiler is free to change any aspect of it, as long as the
> > > > observable behavior stays the same.
> > > >
> > > > In practice the difference is not so big for a traditional
> > > > optimizer that only has a limited local view and where
> > > > "another thread" is basically part of the "outside world".
> > >
> > > So basically you are saying that if the compiler has access to the entire
> > > program (sees the use cases for variables in all threads) that it can
> > > determine what is visible to other threads and what is not, and optimize
> > > accordingly?
> > >
> > > Like LTO in the kernel?
> >
> > LTO is a small step in that direction. In the most extreme case, the
> > compiler simply takes a quick glance at the code and the input data and
> > oracularly generates the output.
> >
> > Which is why my arguments against duplicating atomic loads have been
> > based on examples where doing so breaks basic arithmetic. :-/
>
> Please tell me that wasn't something that seriously needed to be said...
You are really asking me to lie to you? ;-)
Thanx, Paul
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 18:33 ` Ralf Jung
@ 2025-02-27 19:15 ` Linus Torvalds
2025-02-27 19:55 ` Kent Overstreet
2025-02-28 7:53 ` Ralf Jung
0 siblings, 2 replies; 194+ messages in thread
From: Linus Torvalds @ 2025-02-27 19:15 UTC (permalink / raw)
To: Ralf Jung
Cc: Kent Overstreet, Martin Uecker, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Gary Guo, airlied, boqun.feng, david.laight.linux,
ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Thu, 27 Feb 2025 at 10:33, Ralf Jung <post@ralfj.de> wrote:
>
> The way you do global flags in Rust is like this:
Note that I was really talking mainly about the unsafe cases, an din
particular when interfacing with C code.
Also, honestly:
> FLAG.store(true, Ordering::SeqCst); // or release/acquire/relaxed
I suspect in reality it would be hidden as accessor functions, or
people just continue to write things in C.
Yes, I know all about the C++ memory ordering. It's not only a
standards mess, it's all very illegible code too.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 17:58 ` Miguel Ojeda
@ 2025-02-27 19:25 ` Ventura Jack
0 siblings, 0 replies; 194+ messages in thread
From: Ventura Jack @ 2025-02-27 19:25 UTC (permalink / raw)
To: Miguel Ojeda
Cc: Ralf Jung, Kent Overstreet, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
On Thu, Feb 27, 2025 at 10:59 AM Miguel Ojeda
<miguel.ojeda.sandonis@gmail.com> wrote:
>
> On Thu, Feb 27, 2025 at 6:34 PM Ventura Jack <venturajack85@gmail.com> wrote:
> >
> > I have seen some Rust proponents literally say that there is
> > a specification for Rust, and that it is called rustc/LLVM.
> > Though those specific individuals may not have been the
> > most credible individuals.
>
> These "Some say..." arguments are not really useful, to be honest.
I disagree, I think they are fine to mention, especially
if I add any necessary and relevant caveats.
> > A fear I have is that there may be hidden reliance in
> > multiple different ways on LLVM, as well as on rustc.
> > Maybe even very deeply so. The complexity of Rust's
> > type system and rustc's type system checking makes
> > me more worried about this point. If there are hidden
> > elements, they may turn out to be very difficult to fix,
> > especially if they are discovered to be fundamental.
>
> If you have concrete concerns (apart from the ones you already raised
> so far which are not really applicable), please explain them.
>
> Otherwise, this sounds a bit like an appeal to fear, sorry.
But the concrete concerns I raised are applicable, I am
very sorry, but you are wrong on this point as far as I can tell.
And others also have fears in some related topics. Like the
example I mentioned later in the email.
>>[Omitted] several
>> issues are labeled with "S-fear".
>>
>> https://github.com/lcnr/solver-woes/issues
Do you have any thoughts on those issues labeled
with "S-fear"?
And the argument makes logical sense. And Ralf Jung
did discuss the issues of osssification and risk of
overfitting.
I am convinced that succeeding in having at least
two major Rust compilers, gccrs being the most
promising second one AFAIK, will be helpful directly, and
also indirectly allay some concerns that some people have.
> > You mention ossifying, but the more popular Rust becomes,
> > the more painful breakage will be, and the less suited
> > Rust will be as a research language.
>
> Rust is not a research language -- I guess you may be including
> features that are not promised to be stable, but that means even C
> would a research language... :)
I have heard others describe Rust as experimental,
and used that as one justification for not adopting
Rust. On the other hand, companies like Amazon
Web Services have lots of employed Rust developers,
AWS more than 300, and Rust is probably among the
20 most used programming languages. Comparable
in usage to Scala AFAIK, if for instance Redmonk's
rankings are used.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 19:06 ` Ventura Jack
@ 2025-02-27 19:45 ` Ralf Jung
2025-02-27 20:22 ` Kent Overstreet
2025-02-28 20:41 ` Ventura Jack
0 siblings, 2 replies; 194+ messages in thread
From: Ralf Jung @ 2025-02-27 19:45 UTC (permalink / raw)
To: Ventura Jack
Cc: Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
Hi,
>>> If C was willing to break code as much as Rust, it would be easier to
>>> clean up C.
>>
>> Is that true? Gcc updates do break code.
>
> Surely not as much as Rust, right? From what I hear from users
> of Rust and of C, some Rust developers complain about
> Rust breaking a lot and being unstable, while I instead
> hear complaints about C and C++ being unwilling to break
> compatibility.
Stable Rust code hardly ever breaks on a compiler update. I don't know which
users you are talking about here, and it's hard to reply anything concrete to
such a vague claim that you are making here. I also "hear" lots of things, but
we shouldn't treat hear-say as facts.
*Nightly* Rust features do break regularly, but nobody has any right to complain
about that -- nightly Rust is the playground for experimenting with features
that we know are no ready yet.
> Rust does admittedly a lot of the time have tools to
> mitigate it, but Rust sometimes go beyond that.
> C code from 20 years ago can often be compiled
> without modification on a new compiler, that is a common
> experience I hear about. While I do not know if that
> would hold true for Rust code. Though Rust has editions.
Well, it is true that Rust code from 20 years ago cannot be compiled on today's
compiler any more. ;) But please do not spread FUD, and instead stick to
verifiable claims or cite some reasonable sources.
> The time crate breaking example above does not
> seem nice.
The time issue is like the biggest such issue we had ever, and indeed that did
not go well. We should have given the ecosystem more time to update to newer
versions of the time crate, which would have largely mitigated the impact of
this. A mistake was made, and a *lot* of internal discussion followed to
minimize the chance of this happening again. I hope you don't take that accident
as being representative of regular Rust development.
Kind regards,
Ralf
>
>>> A fear I have is that there may be hidden reliance in
>>> multiple different ways on LLVM, as well as on rustc.
>>> Maybe even very deeply so. The complexity of Rust's
>>> type system and rustc's type system checking makes
>>> me more worried about this point. If there are hidden
>>> elements, they may turn out to be very difficult to fix,
>>> especially if they are discovered to be fundamental.
>>> While having one compiler can be an advantage in
>>> some ways, it can arguably be a disadvantage
>>> in some other ways, as you acknowledge as well
>>> if I understand you correctly.
>>
>> The Rust type system has absolutely nothing to do with LLVM. Those are
>> completely separate parts of the compiler. So I don't see any way that LLVM
>> could possibly influence our type system.
>
> Sorry for the ambiguity, I packed too much different
> information into the same block.
>
>>> You mention ossifying, but the more popular Rust becomes,
>>> the more painful breakage will be, and the less suited
>>> Rust will be as a research language.
>>
>> I do not consider Rust a research language. :)
>
> It reminds me of Scala, in some ways, and some complained
> about Scala having too much of a research and experimental
> focus. I have heard similar complaints about Rust being
> too experimental, and that was part of why they did not
> wish to adopt it in some organizations. On the other hand,
> Amazon Web Services and other companies already
> use Rust extensively. AWS might have more than 300
> Rust developer employed. The more usage and code,
> the more painful breaking changes might be.
>
>>> I hope that any new language at least has its
>>> language developers ensure that they have a type
>>> system that is formalized and proven correct
>>> before that langauge's 1.0 release.
>>> Since fixing a type system later can be difficult or
>>> practically impossible. A complex type system
>>> and complex type checking can be a larger risk in this
>>> regard relative to a simple type system and simple
>>> type checking, especially the more time passes and
>>> the more the language is used and have code
>>> written in it, making it more difficult to fix the language
>>> due to code breakage costing more.
>>
>> Uff, that's a very high bar to pass.^^ I think there's maybe two languages ever
>> that meet this bar? SML and wasm.
>
> You may be right about the bar being too high.
> I would have hoped that it would be easier to achieve
> with modern programming language research and
> advances.
>
>>>>> There are some issues in Rust that I am curious as to
>>>>> your views on. rustc or the Rust language has some type
>>>>> system holes, which still causes problems for rustc and
>>>>> their developers.
>>>>>
>>>>> https://github.com/lcnr/solver-woes/issues/1
>>>>> https://github.com/rust-lang/rust/issues/75992
>>>>>
>>>>> Those kinds of issues seem difficult to solve.
>>>>>
>>>>> In your opinion, is it accurate to say that the Rust language
>>>>> developers are working on a new type system for
>>>>> Rust-the-language and a new solver for rustc, and that
>>>>> they are trying to make the new type system and new solver
>>>>> as backwards compatible as possible?
>>>>
>>>> It's not really a new type system. It's a new implementation for the same type
>>>> system. But yes there is work on a new "solver" (that I am not involved in) that
>>>> should finally fix some of the long-standing type system bugs. Specifically,
>>>> this is a "trait solver", i.e. it is the component responsible for dealing with
>>>> trait constraints. Due to some unfortunate corner-case behaviors of the old,
>>>> organically grown solver, it's very hard to do this in a backwards-compatible
>>>> way, but we have infrastructure for extensive ecosystem-wide testing to judge
>>>> the consequences of any given potential breaking change and ensure that almost
>>>> all existing code keeps working. In fact, Rust 1.84 already started using the
>>>> new solver for some things
>>>> (https://blog.rust-lang.org/2025/01/09/Rust-1.84.0.html) -- did you notice?
>>>> Hopefully not. :)
>>>
>>> If it is not a new type system, why then do they talk about
>>> backwards compatibility for existing Rust projects?
>>
>> If you make a tiny change to a type system, is it a "new type system"? "new type
>> system" sounds like "from-scratch redesign". That's not what happens.
>
> I can see your point, but a different type system would be
> different. It may be a matter of definition. In practice, the
> significance and consequences would arguably depend on
> how much backwards compatibility it has, and how many and
> how much existing projects are broken.
>
> So far, it appears to require a lot of work and effort for
> some of the Rust language developers, and my impression
> at a glance is that they have significant expertise, yet have
> worked on it for years.
>
> Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 19:15 ` Linus Torvalds
@ 2025-02-27 19:55 ` Kent Overstreet
2025-02-27 20:28 ` Linus Torvalds
2025-02-28 7:53 ` Ralf Jung
1 sibling, 1 reply; 194+ messages in thread
From: Kent Overstreet @ 2025-02-27 19:55 UTC (permalink / raw)
To: Linus Torvalds
Cc: Ralf Jung, Martin Uecker, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Gary Guo, airlied, boqun.feng, david.laight.linux,
ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Thu, Feb 27, 2025 at 11:15:54AM -0800, Linus Torvalds wrote:
> On Thu, 27 Feb 2025 at 10:33, Ralf Jung <post@ralfj.de> wrote:
> >
> > The way you do global flags in Rust is like this:
>
> Note that I was really talking mainly about the unsafe cases, an din
> particular when interfacing with C code.
For simple bitflags (i.e. code where we use test_bit()/set_bit() we'd
probably just export it as a standard Rust atomic, no new unsafe {}
required.
>
> Also, honestly:
>
> > FLAG.store(true, Ordering::SeqCst); // or release/acquire/relaxed
>
> I suspect in reality it would be hidden as accessor functions, or
> people just continue to write things in C.
>
> Yes, I know all about the C++ memory ordering. It's not only a
> standards mess, it's all very illegible code too.
It's more explicit, and that's probably not a bad thing - compare it to
our smp_mb__after_atomic(), it's not uncommon to find code where the
barriers are missing because the person who wrote the code was assuming
x86.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 19:45 ` Ralf Jung
@ 2025-02-27 20:22 ` Kent Overstreet
2025-02-27 22:18 ` David Laight
2025-02-28 20:41 ` Ventura Jack
1 sibling, 1 reply; 194+ messages in thread
From: Kent Overstreet @ 2025-02-27 20:22 UTC (permalink / raw)
To: Ralf Jung
Cc: Ventura Jack, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
On Thu, Feb 27, 2025 at 08:45:09PM +0100, Ralf Jung wrote:
> Hi,
>
> > > > If C was willing to break code as much as Rust, it would be easier to
> > > > clean up C.
> > >
> > > Is that true? Gcc updates do break code.
> >
> > Surely not as much as Rust, right? From what I hear from users
> > of Rust and of C, some Rust developers complain about
> > Rust breaking a lot and being unstable, while I instead
> > hear complaints about C and C++ being unwilling to break
> > compatibility.
>
> Stable Rust code hardly ever breaks on a compiler update. I don't know which
> users you are talking about here, and it's hard to reply anything concrete
> to such a vague claim that you are making here. I also "hear" lots of
> things, but we shouldn't treat hear-say as facts.
> *Nightly* Rust features do break regularly, but nobody has any right to
> complain about that -- nightly Rust is the playground for experimenting with
> features that we know are no ready yet.
It's also less important to avoid ever breaking working code than it was
20 years ago: more of the code we care about is open source, everyone is
using source control, and with so much code on crates.io it's now
possible to check what the potential impact would be.
This is a good thing as long as it's done judiciously, to evolve the
language towards stronger semantics and fix safety issues in the
cleanest way when found.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 19:55 ` Kent Overstreet
@ 2025-02-27 20:28 ` Linus Torvalds
0 siblings, 0 replies; 194+ messages in thread
From: Linus Torvalds @ 2025-02-27 20:28 UTC (permalink / raw)
To: Kent Overstreet
Cc: Ralf Jung, Martin Uecker, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Gary Guo, airlied, boqun.feng, david.laight.linux,
ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Thu, 27 Feb 2025 at 11:55, Kent Overstreet <kent.overstreet@linux.dev> wrote:
>
> It's more explicit, and that's probably not a bad thing - compare it to
> our smp_mb__after_atomic(), it's not uncommon to find code where the
> barriers are missing because the person who wrote the code was assuming
> x86.
Sadly, judging by the memory ordering discussions I saw, I will almost
guarantee you that the compiler support for memory ordering will be
buggy.
When we miss details in our wrappers or our users, we can fix them.
And when the compilers mess up, we'll use the wrappers anyway.
Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 22:35 ` Steven Rostedt
2025-02-26 23:18 ` Linus Torvalds
@ 2025-02-27 20:47 ` David Laight
2025-02-27 21:33 ` Steven Rostedt
` (2 more replies)
1 sibling, 3 replies; 194+ messages in thread
From: David Laight @ 2025-02-27 20:47 UTC (permalink / raw)
To: Steven Rostedt
Cc: Linus Torvalds, Martin Uecker, Ralf Jung, Paul E. McKenney,
Alice Ryhl, Ventura Jack, Kent Overstreet, Gary Guo, airlied,
boqun.feng, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Wed, 26 Feb 2025 17:35:34 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:
> On Wed, 26 Feb 2025 14:22:26 -0800
> Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> > > But if I used:
> > >
> > > if (global > 1000)
> > > goto out;
> > > x = global;
> >
> > which can have the TUCTOU issue because 'global' is read twice.
>
> Correct, but if the variable had some other protection, like a lock held
> when this function was called, it is fine to do and the compiler may
> optimize it or not and still have the same result.
>
> I guess you can sum this up to:
>
> The compiler should never assume it's safe to read a global more than the
> code specifies, but if the code reads a global more than once, it's fine
> to cache the multiple reads.
>
> Same for writes, but I find WRITE_ONCE() used less often than READ_ONCE().
> And when I do use it, it is more to prevent write tearing as you mentioned.
Except that (IIRC) it is actually valid for the compiler to write something
entirely unrelated to a memory location before writing the expected value.
(eg use it instead of stack for a register spill+reload.)
Not gcc doesn't do that - but the standard lets it do it.
David
>
> -- Steve
>
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 20:47 ` David Laight
@ 2025-02-27 21:33 ` Steven Rostedt
2025-02-28 21:29 ` Paul E. McKenney
2025-02-27 21:41 ` Paul E. McKenney
2025-02-28 7:44 ` Ralf Jung
2 siblings, 1 reply; 194+ messages in thread
From: Steven Rostedt @ 2025-02-27 21:33 UTC (permalink / raw)
To: David Laight
Cc: Linus Torvalds, Martin Uecker, Ralf Jung, Paul E. McKenney,
Alice Ryhl, Ventura Jack, Kent Overstreet, Gary Guo, airlied,
boqun.feng, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Thu, 27 Feb 2025 20:47:22 +0000
David Laight <david.laight.linux@gmail.com> wrote:
> Except that (IIRC) it is actually valid for the compiler to write something
> entirely unrelated to a memory location before writing the expected value.
> (eg use it instead of stack for a register spill+reload.)
> Not gcc doesn't do that - but the standard lets it do it.
I call that a bug in the specification ;-)
-- Steve
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 20:47 ` David Laight
2025-02-27 21:33 ` Steven Rostedt
@ 2025-02-27 21:41 ` Paul E. McKenney
2025-02-27 22:20 ` David Laight
2025-02-28 7:44 ` Ralf Jung
2 siblings, 1 reply; 194+ messages in thread
From: Paul E. McKenney @ 2025-02-27 21:41 UTC (permalink / raw)
To: David Laight
Cc: Steven Rostedt, Linus Torvalds, Martin Uecker, Ralf Jung,
Alice Ryhl, Ventura Jack, Kent Overstreet, Gary Guo, airlied,
boqun.feng, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Thu, Feb 27, 2025 at 08:47:22PM +0000, David Laight wrote:
> On Wed, 26 Feb 2025 17:35:34 -0500
> Steven Rostedt <rostedt@goodmis.org> wrote:
>
> > On Wed, 26 Feb 2025 14:22:26 -0800
> > Linus Torvalds <torvalds@linux-foundation.org> wrote:
> >
> > > > But if I used:
> > > >
> > > > if (global > 1000)
> > > > goto out;
> > > > x = global;
> > >
> > > which can have the TUCTOU issue because 'global' is read twice.
> >
> > Correct, but if the variable had some other protection, like a lock held
> > when this function was called, it is fine to do and the compiler may
> > optimize it or not and still have the same result.
> >
> > I guess you can sum this up to:
> >
> > The compiler should never assume it's safe to read a global more than the
> > code specifies, but if the code reads a global more than once, it's fine
> > to cache the multiple reads.
> >
> > Same for writes, but I find WRITE_ONCE() used less often than READ_ONCE().
> > And when I do use it, it is more to prevent write tearing as you mentioned.
>
> Except that (IIRC) it is actually valid for the compiler to write something
> entirely unrelated to a memory location before writing the expected value.
> (eg use it instead of stack for a register spill+reload.)
> Not gcc doesn't do that - but the standard lets it do it.
Or replace a write with a read, a check, and a write only if the read
returns some other value than the one to be written. Also not something
I have seen, but something that the standard permits.
Thanx, Paul
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 20:22 ` Kent Overstreet
@ 2025-02-27 22:18 ` David Laight
2025-02-27 23:18 ` Kent Overstreet
0 siblings, 1 reply; 194+ messages in thread
From: David Laight @ 2025-02-27 22:18 UTC (permalink / raw)
To: Kent Overstreet
Cc: Ralf Jung, Ventura Jack, Miguel Ojeda, Gary Guo, torvalds,
airlied, boqun.feng, ej, gregkh, hch, hpa, ksummit, linux-kernel,
rust-for-linux
On Thu, 27 Feb 2025 15:22:20 -0500
Kent Overstreet <kent.overstreet@linux.dev> wrote:
> On Thu, Feb 27, 2025 at 08:45:09PM +0100, Ralf Jung wrote:
> > Hi,
> >
> > > > > If C was willing to break code as much as Rust, it would be easier to
> > > > > clean up C.
> > > >
> > > > Is that true? Gcc updates do break code.
> > >
> > > Surely not as much as Rust, right? From what I hear from users
> > > of Rust and of C, some Rust developers complain about
> > > Rust breaking a lot and being unstable, while I instead
> > > hear complaints about C and C++ being unwilling to break
> > > compatibility.
> >
> > Stable Rust code hardly ever breaks on a compiler update. I don't know which
> > users you are talking about here, and it's hard to reply anything concrete
> > to such a vague claim that you are making here. I also "hear" lots of
> > things, but we shouldn't treat hear-say as facts.
> > *Nightly* Rust features do break regularly, but nobody has any right to
> > complain about that -- nightly Rust is the playground for experimenting with
> > features that we know are no ready yet.
>
> It's also less important to avoid ever breaking working code than it was
> 20 years ago: more of the code we care about is open source, everyone is
> using source control, and with so much code on crates.io it's now
> possible to check what the potential impact would be.
Do you really want to change something that would break the linux kernel?
Even a compile-time breakage would be a PITA.
And the kernel is small by comparison with some other projects.
Look at all the problems because python-3 was incompatible with python-2.
You have to maintain compatibility.
Now there are some things in C (like functions 'falling of the bottom
without returning a value') that could sensibly be changed from warnings
to errors, but you can't decide to fix the priority of the bitwise &.
David
>
> This is a good thing as long as it's done judiciously, to evolve the
> language towards stronger semantics and fix safety issues in the
> cleanest way when found.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 21:41 ` Paul E. McKenney
@ 2025-02-27 22:20 ` David Laight
2025-02-27 22:40 ` Paul E. McKenney
0 siblings, 1 reply; 194+ messages in thread
From: David Laight @ 2025-02-27 22:20 UTC (permalink / raw)
To: Paul E. McKenney
Cc: Steven Rostedt, Linus Torvalds, Martin Uecker, Ralf Jung,
Alice Ryhl, Ventura Jack, Kent Overstreet, Gary Guo, airlied,
boqun.feng, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Thu, 27 Feb 2025 13:41:15 -0800
"Paul E. McKenney" <paulmck@kernel.org> wrote:
> On Thu, Feb 27, 2025 at 08:47:22PM +0000, David Laight wrote:
> > On Wed, 26 Feb 2025 17:35:34 -0500
> > Steven Rostedt <rostedt@goodmis.org> wrote:
> >
> > > On Wed, 26 Feb 2025 14:22:26 -0800
> > > Linus Torvalds <torvalds@linux-foundation.org> wrote:
> > >
> > > > > But if I used:
> > > > >
> > > > > if (global > 1000)
> > > > > goto out;
> > > > > x = global;
> > > >
> > > > which can have the TUCTOU issue because 'global' is read twice.
> > >
> > > Correct, but if the variable had some other protection, like a lock held
> > > when this function was called, it is fine to do and the compiler may
> > > optimize it or not and still have the same result.
> > >
> > > I guess you can sum this up to:
> > >
> > > The compiler should never assume it's safe to read a global more than the
> > > code specifies, but if the code reads a global more than once, it's fine
> > > to cache the multiple reads.
> > >
> > > Same for writes, but I find WRITE_ONCE() used less often than READ_ONCE().
> > > And when I do use it, it is more to prevent write tearing as you mentioned.
> >
> > Except that (IIRC) it is actually valid for the compiler to write something
> > entirely unrelated to a memory location before writing the expected value.
> > (eg use it instead of stack for a register spill+reload.)
> > Not gcc doesn't do that - but the standard lets it do it.
>
> Or replace a write with a read, a check, and a write only if the read
> returns some other value than the one to be written. Also not something
> I have seen, but something that the standard permits.
Or if you write code that does that, assume it can just to the write.
So dirtying a cache line.
David
>
> Thanx, Paul
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 22:20 ` David Laight
@ 2025-02-27 22:40 ` Paul E. McKenney
0 siblings, 0 replies; 194+ messages in thread
From: Paul E. McKenney @ 2025-02-27 22:40 UTC (permalink / raw)
To: David Laight
Cc: Steven Rostedt, Linus Torvalds, Martin Uecker, Ralf Jung,
Alice Ryhl, Ventura Jack, Kent Overstreet, Gary Guo, airlied,
boqun.feng, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Thu, Feb 27, 2025 at 10:20:30PM +0000, David Laight wrote:
> On Thu, 27 Feb 2025 13:41:15 -0800
> "Paul E. McKenney" <paulmck@kernel.org> wrote:
>
> > On Thu, Feb 27, 2025 at 08:47:22PM +0000, David Laight wrote:
> > > On Wed, 26 Feb 2025 17:35:34 -0500
> > > Steven Rostedt <rostedt@goodmis.org> wrote:
> > >
> > > > On Wed, 26 Feb 2025 14:22:26 -0800
> > > > Linus Torvalds <torvalds@linux-foundation.org> wrote:
> > > >
> > > > > > But if I used:
> > > > > >
> > > > > > if (global > 1000)
> > > > > > goto out;
> > > > > > x = global;
> > > > >
> > > > > which can have the TUCTOU issue because 'global' is read twice.
> > > >
> > > > Correct, but if the variable had some other protection, like a lock held
> > > > when this function was called, it is fine to do and the compiler may
> > > > optimize it or not and still have the same result.
> > > >
> > > > I guess you can sum this up to:
> > > >
> > > > The compiler should never assume it's safe to read a global more than the
> > > > code specifies, but if the code reads a global more than once, it's fine
> > > > to cache the multiple reads.
> > > >
> > > > Same for writes, but I find WRITE_ONCE() used less often than READ_ONCE().
> > > > And when I do use it, it is more to prevent write tearing as you mentioned.
> > >
> > > Except that (IIRC) it is actually valid for the compiler to write something
> > > entirely unrelated to a memory location before writing the expected value.
> > > (eg use it instead of stack for a register spill+reload.)
> > > Not gcc doesn't do that - but the standard lets it do it.
> >
> > Or replace a write with a read, a check, and a write only if the read
> > returns some other value than the one to be written. Also not something
> > I have seen, but something that the standard permits.
>
> Or if you write code that does that, assume it can just to the write.
> So dirtying a cache line.
You lost me on this one. I am talking about a case where this code:
x = 1;
gets optimized into something like this:
if (x != 1)
x = 1;
Which means that the "x != 1" could be re-ordered prior to an earlier
smp_wmb(), which might come as a surprise to code relying on that
ordering. :-(
Again, not something I have seen in the wild.
Thanx, Paul
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 22:18 ` David Laight
@ 2025-02-27 23:18 ` Kent Overstreet
2025-02-28 7:38 ` Ralf Jung
2025-02-28 20:48 ` Ventura Jack
0 siblings, 2 replies; 194+ messages in thread
From: Kent Overstreet @ 2025-02-27 23:18 UTC (permalink / raw)
To: David Laight
Cc: Ralf Jung, Ventura Jack, Miguel Ojeda, Gary Guo, torvalds,
airlied, boqun.feng, ej, gregkh, hch, hpa, ksummit, linux-kernel,
rust-for-linux
On Thu, Feb 27, 2025 at 10:18:01PM +0000, David Laight wrote:
> On Thu, 27 Feb 2025 15:22:20 -0500
> Kent Overstreet <kent.overstreet@linux.dev> wrote:
>
> > On Thu, Feb 27, 2025 at 08:45:09PM +0100, Ralf Jung wrote:
> > > Hi,
> > >
> > > > > > If C was willing to break code as much as Rust, it would be easier to
> > > > > > clean up C.
> > > > >
> > > > > Is that true? Gcc updates do break code.
> > > >
> > > > Surely not as much as Rust, right? From what I hear from users
> > > > of Rust and of C, some Rust developers complain about
> > > > Rust breaking a lot and being unstable, while I instead
> > > > hear complaints about C and C++ being unwilling to break
> > > > compatibility.
> > >
> > > Stable Rust code hardly ever breaks on a compiler update. I don't know which
> > > users you are talking about here, and it's hard to reply anything concrete
> > > to such a vague claim that you are making here. I also "hear" lots of
> > > things, but we shouldn't treat hear-say as facts.
> > > *Nightly* Rust features do break regularly, but nobody has any right to
> > > complain about that -- nightly Rust is the playground for experimenting with
> > > features that we know are no ready yet.
> >
> > It's also less important to avoid ever breaking working code than it was
> > 20 years ago: more of the code we care about is open source, everyone is
> > using source control, and with so much code on crates.io it's now
> > possible to check what the potential impact would be.
>
> Do you really want to change something that would break the linux kernel?
> Even a compile-time breakage would be a PITA.
> And the kernel is small by comparison with some other projects.
>
> Look at all the problems because python-3 was incompatible with python-2.
> You have to maintain compatibility.
Those were big breaks.
In rust there's only ever little, teeny tiny breaks to address soundness
issues, and they've been pretty small and localized.
If it did ever came up the kernel would be patched to fix in advance
whatever behaviour the compiler is being changed to fix (and that'd get
backported to stable trees as well, if necessary).
It's not likely to ever come up since we're not using stdlib, and they
won't want to break behaviour for us if at all possible.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 23:18 ` Kent Overstreet
@ 2025-02-28 7:38 ` Ralf Jung
2025-02-28 20:48 ` Ventura Jack
1 sibling, 0 replies; 194+ messages in thread
From: Ralf Jung @ 2025-02-28 7:38 UTC (permalink / raw)
To: Kent Overstreet, David Laight
Cc: Ventura Jack, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, ej, gregkh, hch, hpa, ksummit, linux-kernel,
rust-for-linux
Hi,
>>> It's also less important to avoid ever breaking working code than it was
>>> 20 years ago: more of the code we care about is open source, everyone is
>>> using source control, and with so much code on crates.io it's now
>>> possible to check what the potential impact would be.
>>
>> Do you really want to change something that would break the linux kernel?
>> Even a compile-time breakage would be a PITA.
>> And the kernel is small by comparison with some other projects.
>>
>> Look at all the problems because python-3 was incompatible with python-2.
>> You have to maintain compatibility.
>
> Those were big breaks.
>
> In rust there's only ever little, teeny tiny breaks to address soundness
> issues, and they've been pretty small and localized.
>
> If it did ever came up the kernel would be patched to fix in advance
> whatever behaviour the compiler is being changed to fix (and that'd get
> backported to stable trees as well, if necessary).
We actually had just such a case this month: the way the kernel disabled FP
support on aarch64 turned out to be a possible source of soundness issues, so
rustc started warning about that. Before this warning even hit stable Rust,
there's already a patch in the kernel to disable FP support in a less
problematic way (thus avoiding the warning), and this has been backported.
<https://lore.kernel.org/lkml/20250210163732.281786-1-ojeda@kernel.org/>
We'll wait at least a few more months before we turn this warning into a hard error.
> It's not likely to ever come up since we're not using stdlib, and they
> won't want to break behaviour for us if at all possible.
Note however that the kernel does use some unstable features, so the risk of
breakage is higher than for typical stable Rust code. That said, you all get
special treatment in our CI, and the Rust for Linux maintainers are in good
contact with the Rust project, so we'll know about the breakage in advance and
can prepare the kernel sources for whatever changes in rustc are coming.
Hopefully the number of nightly features used in the kernel can slowly be
reduced to 0 and then this will be much less of a concern. :)
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 20:47 ` David Laight
2025-02-27 21:33 ` Steven Rostedt
2025-02-27 21:41 ` Paul E. McKenney
@ 2025-02-28 7:44 ` Ralf Jung
2025-02-28 15:41 ` Kent Overstreet
2 siblings, 1 reply; 194+ messages in thread
From: Ralf Jung @ 2025-02-28 7:44 UTC (permalink / raw)
To: David Laight, Steven Rostedt
Cc: Linus Torvalds, Martin Uecker, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Kent Overstreet, Gary Guo, airlied, boqun.feng, ej,
gregkh, hch, hpa, ksummit, linux-kernel, miguel.ojeda.sandonis,
rust-for-linux
Hi,
>> I guess you can sum this up to:
>>
>> The compiler should never assume it's safe to read a global more than the
>> code specifies, but if the code reads a global more than once, it's fine
>> to cache the multiple reads.
>>
>> Same for writes, but I find WRITE_ONCE() used less often than READ_ONCE().
>> And when I do use it, it is more to prevent write tearing as you mentioned.
>
> Except that (IIRC) it is actually valid for the compiler to write something
> entirely unrelated to a memory location before writing the expected value.
> (eg use it instead of stack for a register spill+reload.)
> Not gcc doesn't do that - but the standard lets it do it.
Whether the compiler is permitted to do that depends heavily on what exactly the
code looks like, so it's hard to discuss this in the abstract.
If inside some function, *all* writes to a given location are atomic (I think
that's what you call WRITE_ONCE?), then the compiler is *not* allowed to invent
any new writes to that memory. The compiler has to assume that there might be
concurrent reads from other threads, whose behavior could change from the extra
compiler-introduced writes. The spec (in C, C++, and Rust) already works like that.
OTOH, the moment you do a single non-atomic write (i.e., a regular "*ptr = val;"
or memcpy or so), that is a signal to the compiler that there cannot be any
concurrent accesses happening at the moment, and therefore it can (and likely
will) introduce extra writes to that memory.
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 19:15 ` Linus Torvalds
2025-02-27 19:55 ` Kent Overstreet
@ 2025-02-28 7:53 ` Ralf Jung
1 sibling, 0 replies; 194+ messages in thread
From: Ralf Jung @ 2025-02-28 7:53 UTC (permalink / raw)
To: Linus Torvalds
Cc: Kent Overstreet, Martin Uecker, Paul E. McKenney, Alice Ryhl,
Ventura Jack, Gary Guo, airlied, boqun.feng, david.laight.linux,
ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
Hi,
On 27.02.25 20:15, Linus Torvalds wrote:
> On Thu, 27 Feb 2025 at 10:33, Ralf Jung <post@ralfj.de> wrote:
>>
>> The way you do global flags in Rust is like this:
>
> Note that I was really talking mainly about the unsafe cases, an din
> particular when interfacing with C code.
When Rust code and C code share memory that is concurrently accessed, all
accesses to that from the Rust side must be explicitly marked as atomic. A
pointer to such a memory should look like `&AtomicBool` in Rust, not `*mut
bool`. To my knowledge, the kernel already has appropriate APIs for that. That
will then ensure things behave like the AtomicBool example.
Kind regards,
Ralf
>
> Also, honestly:
>
>> FLAG.store(true, Ordering::SeqCst); // or release/acquire/relaxed
>
> I suspect in reality it would be hidden as accessor functions, or
> people just continue to write things in C.
>
> Yes, I know all about the C++ memory ordering. It's not only a
> standards mess, it's all very illegible code too.
>
> Linus
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 20:00 ` Martin Uecker
2025-02-26 21:14 ` Linus Torvalds
2025-02-27 14:21 ` Ventura Jack
@ 2025-02-28 8:08 ` Ralf Jung
2025-02-28 8:32 ` Martin Uecker
2 siblings, 1 reply; 194+ messages in thread
From: Ralf Jung @ 2025-02-28 8:08 UTC (permalink / raw)
To: Martin Uecker, Linus Torvalds, Paul E. McKenney
Cc: Alice Ryhl, Ventura Jack, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
Hi,
>> The reason? The standards people wanted to describe the memory model
>> not at a "this is what the program does" level, but at the "this is
>> the type system and the syntactic rules" level. So the RCU accesses
>> had to be defined in terms of the type system, but the actual language
>> rules for the RCU accesses are about how the data is then used after
>> the load.
>
> If your point is that this should be phrased in terms of atomic
> accesses instead of accesses to atomic objects, then I absolutely
> agree with you. This is something I tried to get fixed, but it
> is difficult. The concurrency work mostly happens in WG21
> and not WG14.
>
> But still, the fundamental definition of the model is in terms
> of accesses and when those become visible to other threads, and
> not in terms of syntax and types.
The underlying C++ memory model is already fully defined in terms of "this is
what the program does", and it works in terms of atomic accesses, not atomic
objects. The atomic objects are a thin layer that the C++ type system puts on
top, and it can be ignored -- that's how we do it in Rust.
(From a different email)
> It sounds you want to see the semantics strengthened in case
> of a data race from there being UB to having either the old
> or new value being visible to another thread, where at some
> point this could change but needs to be consistent for a
> single access as expressed in the source code.
This would definitely impact optimizations of purely sequential code. Maybe that
is a price worth paying, but one of the goals of the C++ model was that if you
don't use threads, you shouldn't pay for them. Disallowing rematerialization in
entirely sequential code (just one of the likely many consequences of making
data races not UB) contradicts that goal. Given that even in highly concurrent
programs, most accesses are entirely sequential, it doesn't seem unreasonable to
say that the exceptional case needs to be marked in the program (especially if
you have a type system which helps ensure that you don't forget to do so).
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-28 8:08 ` Ralf Jung
@ 2025-02-28 8:32 ` Martin Uecker
0 siblings, 0 replies; 194+ messages in thread
From: Martin Uecker @ 2025-02-28 8:32 UTC (permalink / raw)
To: Ralf Jung, Linus Torvalds, Paul E. McKenney
Cc: Alice Ryhl, Ventura Jack, Kent Overstreet, Gary Guo, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, miguel.ojeda.sandonis, rust-for-linux
Am Freitag, dem 28.02.2025 um 09:08 +0100 schrieb Ralf Jung:
>
> (From a different email)
> > It sounds you want to see the semantics strengthened in case
> > of a data race from there being UB to having either the old
> > or new value being visible to another thread, where at some
> > point this could change but needs to be consistent for a
> > single access as expressed in the source code.
>
> This would definitely impact optimizations of purely sequential code. Maybe that
> is a price worth paying, but one of the goals of the C++ model was that if you
> don't use threads, you shouldn't pay for them. Disallowing rematerialization in
> entirely sequential code (just one of the likely many consequences of making
> data races not UB) contradicts that goal.
This is the feedback I now also got from GCC, i.e. there are cases where
register allocator would indeed rematerialize a load and they think this is
reasonable.
> Given that even in highly concurrent
> programs, most accesses are entirely sequential, it doesn't seem unreasonable to
> say that the exceptional case needs to be marked in the program (especially if
> you have a type system which helps ensure that you don't forget to do so).
Martin
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-28 7:44 ` Ralf Jung
@ 2025-02-28 15:41 ` Kent Overstreet
2025-02-28 15:46 ` Boqun Feng
2025-03-04 18:12 ` Ralf Jung
0 siblings, 2 replies; 194+ messages in thread
From: Kent Overstreet @ 2025-02-28 15:41 UTC (permalink / raw)
To: Ralf Jung
Cc: David Laight, Steven Rostedt, Linus Torvalds, Martin Uecker,
Paul E. McKenney, Alice Ryhl, Ventura Jack, Gary Guo, airlied,
boqun.feng, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Fri, Feb 28, 2025 at 08:44:58AM +0100, Ralf Jung wrote:
> Hi,
>
> > > I guess you can sum this up to:
> > >
> > > The compiler should never assume it's safe to read a global more than the
> > > code specifies, but if the code reads a global more than once, it's fine
> > > to cache the multiple reads.
> > >
> > > Same for writes, but I find WRITE_ONCE() used less often than READ_ONCE().
> > > And when I do use it, it is more to prevent write tearing as you mentioned.
> >
> > Except that (IIRC) it is actually valid for the compiler to write something
> > entirely unrelated to a memory location before writing the expected value.
> > (eg use it instead of stack for a register spill+reload.)
> > Not gcc doesn't do that - but the standard lets it do it.
>
> Whether the compiler is permitted to do that depends heavily on what exactly
> the code looks like, so it's hard to discuss this in the abstract.
> If inside some function, *all* writes to a given location are atomic (I
> think that's what you call WRITE_ONCE?), then the compiler is *not* allowed
> to invent any new writes to that memory. The compiler has to assume that
> there might be concurrent reads from other threads, whose behavior could
> change from the extra compiler-introduced writes. The spec (in C, C++, and
> Rust) already works like that.
>
> OTOH, the moment you do a single non-atomic write (i.e., a regular "*ptr =
> val;" or memcpy or so), that is a signal to the compiler that there cannot
> be any concurrent accesses happening at the moment, and therefore it can
> (and likely will) introduce extra writes to that memory.
Is that how it really works?
I'd expect the atomic writes to have what we call "compiler barriers"
before and after; IOW, the compiler can do whatever it wants with non
atomic writes, provided it doesn't cross those barriers.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-28 15:41 ` Kent Overstreet
@ 2025-02-28 15:46 ` Boqun Feng
2025-02-28 16:04 ` Kent Overstreet
2025-03-04 18:12 ` Ralf Jung
1 sibling, 1 reply; 194+ messages in thread
From: Boqun Feng @ 2025-02-28 15:46 UTC (permalink / raw)
To: Kent Overstreet
Cc: Ralf Jung, David Laight, Steven Rostedt, Linus Torvalds,
Martin Uecker, Paul E. McKenney, Alice Ryhl, Ventura Jack,
Gary Guo, airlied, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Fri, Feb 28, 2025 at 10:41:12AM -0500, Kent Overstreet wrote:
> On Fri, Feb 28, 2025 at 08:44:58AM +0100, Ralf Jung wrote:
> > Hi,
> >
> > > > I guess you can sum this up to:
> > > >
> > > > The compiler should never assume it's safe to read a global more than the
> > > > code specifies, but if the code reads a global more than once, it's fine
> > > > to cache the multiple reads.
> > > >
> > > > Same for writes, but I find WRITE_ONCE() used less often than READ_ONCE().
> > > > And when I do use it, it is more to prevent write tearing as you mentioned.
> > >
> > > Except that (IIRC) it is actually valid for the compiler to write something
> > > entirely unrelated to a memory location before writing the expected value.
> > > (eg use it instead of stack for a register spill+reload.)
> > > Not gcc doesn't do that - but the standard lets it do it.
> >
> > Whether the compiler is permitted to do that depends heavily on what exactly
> > the code looks like, so it's hard to discuss this in the abstract.
> > If inside some function, *all* writes to a given location are atomic (I
> > think that's what you call WRITE_ONCE?), then the compiler is *not* allowed
> > to invent any new writes to that memory. The compiler has to assume that
> > there might be concurrent reads from other threads, whose behavior could
> > change from the extra compiler-introduced writes. The spec (in C, C++, and
> > Rust) already works like that.
> >
> > OTOH, the moment you do a single non-atomic write (i.e., a regular "*ptr =
> > val;" or memcpy or so), that is a signal to the compiler that there cannot
> > be any concurrent accesses happening at the moment, and therefore it can
> > (and likely will) introduce extra writes to that memory.
>
> Is that how it really works?
>
> I'd expect the atomic writes to have what we call "compiler barriers"
> before and after; IOW, the compiler can do whatever it wants with non
If the atomic writes are relaxed, they shouldn't have "compiler
barriers" before or after, e.g. our kernel atomics don't have such
compiler barriers. And WRITE_ONCE() is basically relaxed atomic writes.
Regards,
Boqun
> atomic writes, provided it doesn't cross those barriers.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-28 15:46 ` Boqun Feng
@ 2025-02-28 16:04 ` Kent Overstreet
2025-02-28 16:13 ` Boqun Feng
0 siblings, 1 reply; 194+ messages in thread
From: Kent Overstreet @ 2025-02-28 16:04 UTC (permalink / raw)
To: Boqun Feng
Cc: Ralf Jung, David Laight, Steven Rostedt, Linus Torvalds,
Martin Uecker, Paul E. McKenney, Alice Ryhl, Ventura Jack,
Gary Guo, airlied, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Fri, Feb 28, 2025 at 07:46:23AM -0800, Boqun Feng wrote:
> On Fri, Feb 28, 2025 at 10:41:12AM -0500, Kent Overstreet wrote:
> > On Fri, Feb 28, 2025 at 08:44:58AM +0100, Ralf Jung wrote:
> > > Hi,
> > >
> > > > > I guess you can sum this up to:
> > > > >
> > > > > The compiler should never assume it's safe to read a global more than the
> > > > > code specifies, but if the code reads a global more than once, it's fine
> > > > > to cache the multiple reads.
> > > > >
> > > > > Same for writes, but I find WRITE_ONCE() used less often than READ_ONCE().
> > > > > And when I do use it, it is more to prevent write tearing as you mentioned.
> > > >
> > > > Except that (IIRC) it is actually valid for the compiler to write something
> > > > entirely unrelated to a memory location before writing the expected value.
> > > > (eg use it instead of stack for a register spill+reload.)
> > > > Not gcc doesn't do that - but the standard lets it do it.
> > >
> > > Whether the compiler is permitted to do that depends heavily on what exactly
> > > the code looks like, so it's hard to discuss this in the abstract.
> > > If inside some function, *all* writes to a given location are atomic (I
> > > think that's what you call WRITE_ONCE?), then the compiler is *not* allowed
> > > to invent any new writes to that memory. The compiler has to assume that
> > > there might be concurrent reads from other threads, whose behavior could
> > > change from the extra compiler-introduced writes. The spec (in C, C++, and
> > > Rust) already works like that.
> > >
> > > OTOH, the moment you do a single non-atomic write (i.e., a regular "*ptr =
> > > val;" or memcpy or so), that is a signal to the compiler that there cannot
> > > be any concurrent accesses happening at the moment, and therefore it can
> > > (and likely will) introduce extra writes to that memory.
> >
> > Is that how it really works?
> >
> > I'd expect the atomic writes to have what we call "compiler barriers"
> > before and after; IOW, the compiler can do whatever it wants with non
>
> If the atomic writes are relaxed, they shouldn't have "compiler
> barriers" before or after, e.g. our kernel atomics don't have such
> compiler barriers. And WRITE_ONCE() is basically relaxed atomic writes.
Then perhaps we need a better definition of ATOMIC_RELAXED?
I've always taken ATOMIC_RELAXED to mean "may be reordered with accesses
to other memory locations". What you're describing seems likely to cause
problems.
e.g. if you allocate a struct, memset() it to zero it out, then publish
it, then do a WRITE_ONCE()...
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-28 16:04 ` Kent Overstreet
@ 2025-02-28 16:13 ` Boqun Feng
2025-02-28 16:21 ` Kent Overstreet
0 siblings, 1 reply; 194+ messages in thread
From: Boqun Feng @ 2025-02-28 16:13 UTC (permalink / raw)
To: Kent Overstreet
Cc: Ralf Jung, David Laight, Steven Rostedt, Linus Torvalds,
Martin Uecker, Paul E. McKenney, Alice Ryhl, Ventura Jack,
Gary Guo, airlied, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Fri, Feb 28, 2025 at 11:04:28AM -0500, Kent Overstreet wrote:
> On Fri, Feb 28, 2025 at 07:46:23AM -0800, Boqun Feng wrote:
> > On Fri, Feb 28, 2025 at 10:41:12AM -0500, Kent Overstreet wrote:
> > > On Fri, Feb 28, 2025 at 08:44:58AM +0100, Ralf Jung wrote:
> > > > Hi,
> > > >
> > > > > > I guess you can sum this up to:
> > > > > >
> > > > > > The compiler should never assume it's safe to read a global more than the
> > > > > > code specifies, but if the code reads a global more than once, it's fine
> > > > > > to cache the multiple reads.
> > > > > >
> > > > > > Same for writes, but I find WRITE_ONCE() used less often than READ_ONCE().
> > > > > > And when I do use it, it is more to prevent write tearing as you mentioned.
> > > > >
> > > > > Except that (IIRC) it is actually valid for the compiler to write something
> > > > > entirely unrelated to a memory location before writing the expected value.
> > > > > (eg use it instead of stack for a register spill+reload.)
> > > > > Not gcc doesn't do that - but the standard lets it do it.
> > > >
> > > > Whether the compiler is permitted to do that depends heavily on what exactly
> > > > the code looks like, so it's hard to discuss this in the abstract.
> > > > If inside some function, *all* writes to a given location are atomic (I
> > > > think that's what you call WRITE_ONCE?), then the compiler is *not* allowed
> > > > to invent any new writes to that memory. The compiler has to assume that
> > > > there might be concurrent reads from other threads, whose behavior could
> > > > change from the extra compiler-introduced writes. The spec (in C, C++, and
> > > > Rust) already works like that.
> > > >
> > > > OTOH, the moment you do a single non-atomic write (i.e., a regular "*ptr =
> > > > val;" or memcpy or so), that is a signal to the compiler that there cannot
> > > > be any concurrent accesses happening at the moment, and therefore it can
> > > > (and likely will) introduce extra writes to that memory.
> > >
> > > Is that how it really works?
> > >
> > > I'd expect the atomic writes to have what we call "compiler barriers"
> > > before and after; IOW, the compiler can do whatever it wants with non
> >
> > If the atomic writes are relaxed, they shouldn't have "compiler
> > barriers" before or after, e.g. our kernel atomics don't have such
> > compiler barriers. And WRITE_ONCE() is basically relaxed atomic writes.
>
> Then perhaps we need a better definition of ATOMIC_RELAXED?
>
> I've always taken ATOMIC_RELAXED to mean "may be reordered with accesses
> to other memory locations". What you're describing seems likely to cause
You lost me on this one. if RELAXED means "reordering are allowed", then
why the compiler barriers implied from it?
> problems.
>
> e.g. if you allocate a struct, memset() it to zero it out, then publish
> it, then do a WRITE_ONCE()...
How do you publish it? If you mean:
// assume gp == NULL initially.
*x = 0;
smp_store_release(gp, x);
WRITE_ONCE(*x, 1);
and the other thread does
x = smp_load_acquire(gp);
if (p) {
r1 = READ_ONCE(*x);
}
r1 can be either 0 or 1.
What's the problem?
Regards,
Boqun
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-28 16:13 ` Boqun Feng
@ 2025-02-28 16:21 ` Kent Overstreet
2025-02-28 16:40 ` Boqun Feng
0 siblings, 1 reply; 194+ messages in thread
From: Kent Overstreet @ 2025-02-28 16:21 UTC (permalink / raw)
To: Boqun Feng
Cc: Ralf Jung, David Laight, Steven Rostedt, Linus Torvalds,
Martin Uecker, Paul E. McKenney, Alice Ryhl, Ventura Jack,
Gary Guo, airlied, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Fri, Feb 28, 2025 at 08:13:09AM -0800, Boqun Feng wrote:
> On Fri, Feb 28, 2025 at 11:04:28AM -0500, Kent Overstreet wrote:
> > On Fri, Feb 28, 2025 at 07:46:23AM -0800, Boqun Feng wrote:
> > > On Fri, Feb 28, 2025 at 10:41:12AM -0500, Kent Overstreet wrote:
> > > > On Fri, Feb 28, 2025 at 08:44:58AM +0100, Ralf Jung wrote:
> > > > > Hi,
> > > > >
> > > > > > > I guess you can sum this up to:
> > > > > > >
> > > > > > > The compiler should never assume it's safe to read a global more than the
> > > > > > > code specifies, but if the code reads a global more than once, it's fine
> > > > > > > to cache the multiple reads.
> > > > > > >
> > > > > > > Same for writes, but I find WRITE_ONCE() used less often than READ_ONCE().
> > > > > > > And when I do use it, it is more to prevent write tearing as you mentioned.
> > > > > >
> > > > > > Except that (IIRC) it is actually valid for the compiler to write something
> > > > > > entirely unrelated to a memory location before writing the expected value.
> > > > > > (eg use it instead of stack for a register spill+reload.)
> > > > > > Not gcc doesn't do that - but the standard lets it do it.
> > > > >
> > > > > Whether the compiler is permitted to do that depends heavily on what exactly
> > > > > the code looks like, so it's hard to discuss this in the abstract.
> > > > > If inside some function, *all* writes to a given location are atomic (I
> > > > > think that's what you call WRITE_ONCE?), then the compiler is *not* allowed
> > > > > to invent any new writes to that memory. The compiler has to assume that
> > > > > there might be concurrent reads from other threads, whose behavior could
> > > > > change from the extra compiler-introduced writes. The spec (in C, C++, and
> > > > > Rust) already works like that.
> > > > >
> > > > > OTOH, the moment you do a single non-atomic write (i.e., a regular "*ptr =
> > > > > val;" or memcpy or so), that is a signal to the compiler that there cannot
> > > > > be any concurrent accesses happening at the moment, and therefore it can
> > > > > (and likely will) introduce extra writes to that memory.
> > > >
> > > > Is that how it really works?
> > > >
> > > > I'd expect the atomic writes to have what we call "compiler barriers"
> > > > before and after; IOW, the compiler can do whatever it wants with non
> > >
> > > If the atomic writes are relaxed, they shouldn't have "compiler
> > > barriers" before or after, e.g. our kernel atomics don't have such
> > > compiler barriers. And WRITE_ONCE() is basically relaxed atomic writes.
> >
> > Then perhaps we need a better definition of ATOMIC_RELAXED?
> >
> > I've always taken ATOMIC_RELAXED to mean "may be reordered with accesses
> > to other memory locations". What you're describing seems likely to cause
>
> You lost me on this one. if RELAXED means "reordering are allowed", then
> why the compiler barriers implied from it?
yes, compiler barrier is the wrong language here
> > e.g. if you allocate a struct, memset() it to zero it out, then publish
> > it, then do a WRITE_ONCE()...
>
> How do you publish it? If you mean:
>
> // assume gp == NULL initially.
>
> *x = 0;
> smp_store_release(gp, x);
>
> WRITE_ONCE(*x, 1);
>
> and the other thread does
>
> x = smp_load_acquire(gp);
> if (p) {
> r1 = READ_ONCE(*x);
> }
>
> r1 can be either 0 or 1.
So if the compiler does obey the store_release barrier, then we're ok.
IOW, that has to override the "compiler sees the non-atomic store as a
hint..." - but the thing is, since we're moving more to type system
described concurrency than helpers, I wonder if that will actually be
the case.
Also, what's the situation with reads? Can we end up in a situation
where a non-atomic read causes the compiler do erronious things with an
atomic_load(..., relaxed)?
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-28 16:21 ` Kent Overstreet
@ 2025-02-28 16:40 ` Boqun Feng
0 siblings, 0 replies; 194+ messages in thread
From: Boqun Feng @ 2025-02-28 16:40 UTC (permalink / raw)
To: Kent Overstreet
Cc: Ralf Jung, David Laight, Steven Rostedt, Linus Torvalds,
Martin Uecker, Paul E. McKenney, Alice Ryhl, Ventura Jack,
Gary Guo, airlied, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Fri, Feb 28, 2025 at 11:21:47AM -0500, Kent Overstreet wrote:
> On Fri, Feb 28, 2025 at 08:13:09AM -0800, Boqun Feng wrote:
> > On Fri, Feb 28, 2025 at 11:04:28AM -0500, Kent Overstreet wrote:
> > > On Fri, Feb 28, 2025 at 07:46:23AM -0800, Boqun Feng wrote:
> > > > On Fri, Feb 28, 2025 at 10:41:12AM -0500, Kent Overstreet wrote:
> > > > > On Fri, Feb 28, 2025 at 08:44:58AM +0100, Ralf Jung wrote:
> > > > > > Hi,
> > > > > >
> > > > > > > > I guess you can sum this up to:
> > > > > > > >
> > > > > > > > The compiler should never assume it's safe to read a global more than the
> > > > > > > > code specifies, but if the code reads a global more than once, it's fine
> > > > > > > > to cache the multiple reads.
> > > > > > > >
> > > > > > > > Same for writes, but I find WRITE_ONCE() used less often than READ_ONCE().
> > > > > > > > And when I do use it, it is more to prevent write tearing as you mentioned.
> > > > > > >
> > > > > > > Except that (IIRC) it is actually valid for the compiler to write something
> > > > > > > entirely unrelated to a memory location before writing the expected value.
> > > > > > > (eg use it instead of stack for a register spill+reload.)
> > > > > > > Not gcc doesn't do that - but the standard lets it do it.
> > > > > >
> > > > > > Whether the compiler is permitted to do that depends heavily on what exactly
> > > > > > the code looks like, so it's hard to discuss this in the abstract.
> > > > > > If inside some function, *all* writes to a given location are atomic (I
> > > > > > think that's what you call WRITE_ONCE?), then the compiler is *not* allowed
> > > > > > to invent any new writes to that memory. The compiler has to assume that
> > > > > > there might be concurrent reads from other threads, whose behavior could
> > > > > > change from the extra compiler-introduced writes. The spec (in C, C++, and
> > > > > > Rust) already works like that.
> > > > > >
> > > > > > OTOH, the moment you do a single non-atomic write (i.e., a regular "*ptr =
> > > > > > val;" or memcpy or so), that is a signal to the compiler that there cannot
> > > > > > be any concurrent accesses happening at the moment, and therefore it can
> > > > > > (and likely will) introduce extra writes to that memory.
> > > > >
> > > > > Is that how it really works?
> > > > >
> > > > > I'd expect the atomic writes to have what we call "compiler barriers"
> > > > > before and after; IOW, the compiler can do whatever it wants with non
> > > >
> > > > If the atomic writes are relaxed, they shouldn't have "compiler
> > > > barriers" before or after, e.g. our kernel atomics don't have such
> > > > compiler barriers. And WRITE_ONCE() is basically relaxed atomic writes.
> > >
> > > Then perhaps we need a better definition of ATOMIC_RELAXED?
> > >
> > > I've always taken ATOMIC_RELAXED to mean "may be reordered with accesses
> > > to other memory locations". What you're describing seems likely to cause
> >
> > You lost me on this one. if RELAXED means "reordering are allowed", then
> > why the compiler barriers implied from it?
>
> yes, compiler barrier is the wrong language here
>
> > > e.g. if you allocate a struct, memset() it to zero it out, then publish
> > > it, then do a WRITE_ONCE()...
> >
> > How do you publish it? If you mean:
> >
> > // assume gp == NULL initially.
> >
> > *x = 0;
> > smp_store_release(gp, x);
> >
> > WRITE_ONCE(*x, 1);
> >
> > and the other thread does
> >
> > x = smp_load_acquire(gp);
> > if (p) {
> > r1 = READ_ONCE(*x);
> > }
> >
> > r1 can be either 0 or 1.
>
> So if the compiler does obey the store_release barrier, then we're ok.
>
> IOW, that has to override the "compiler sees the non-atomic store as a
> hint..." - but the thing is, since we're moving more to type system
This might be a bad example, but I think that means if you add another
*x = 2 after WRITE_ONCE(*x, 1):
*x = 0;
smp_store_release(gp, x);
WRITE_ONCE(*x, 1);
*x = 2;
then compilers in-theory can do anything they seems fit. I.e. r1 can be
anything. Because it's a data race.
> described concurrency than helpers, I wonder if that will actually be
> the case.
>
> Also, what's the situation with reads? Can we end up in a situation
> where a non-atomic read causes the compiler do erronious things with an
> atomic_load(..., relaxed)?
For LKMM, no, because our data races requires at least one access
being write[1], this applies to both C and Rust. For Rust native memory
model, no, because Ralf fixed it:
https://github.com/rust-lang/rust/pull/128778
[1]: "PLAIN ACCESSES AND DATA RACES" in tools/memory-model/Documentation/explanation.txt
Regards,
Boqun
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 19:45 ` Ralf Jung
2025-02-27 20:22 ` Kent Overstreet
@ 2025-02-28 20:41 ` Ventura Jack
2025-02-28 22:13 ` Geoffrey Thomas
2025-03-04 18:24 ` Ralf Jung
1 sibling, 2 replies; 194+ messages in thread
From: Ventura Jack @ 2025-02-28 20:41 UTC (permalink / raw)
To: Ralf Jung
Cc: Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
On Thu, Feb 27, 2025 at 12:45 PM Ralf Jung <post@ralfj.de> wrote:
>
> Hi,
>
> >>> If C was willing to break code as much as Rust, it would be easier to
> >>> clean up C.
> >>
> >> Is that true? Gcc updates do break code.
> >
> > Surely not as much as Rust, right? From what I hear from users
> > of Rust and of C, some Rust developers complain about
> > Rust breaking a lot and being unstable, while I instead
> > hear complaints about C and C++ being unwilling to break
> > compatibility.
>
> Stable Rust code hardly ever breaks on a compiler update. I don't know which
> users you are talking about here, and it's hard to reply anything concrete to
> such a vague claim that you are making here. I also "hear" lots of things, but
> we shouldn't treat hear-say as facts.
> *Nightly* Rust features do break regularly, but nobody has any right to complain
> about that -- nightly Rust is the playground for experimenting with features
> that we know are no ready yet.
I did give the example of the time crate. Do you not consider
that a very significant example of breakage? Surely, with
as public and large an example of breakage as the time crate,
there clearly is something.
I will acknowledge that Rust editions specifically do not
count as breaking code, though the editions feature,
while interesting, does have some drawbacks.
The time crate breakage is large from what I can tell. When I
skim through GitHub issues in different projects,
it apparently cost some people significant time and pain.
https://github.com/NixOS/nixpkgs/issues/332957#issue-2453023525
"Sorry for the inconvenience. I've lost a lot of the last
week to coordinating the update, collecting broken
packages, etc., but hopefully by spreading out the
work from here it won't take too much of anybody
else's time."
https://github.com/NixOS/nixpkgs/issues/332957#issuecomment-2274824965
"On principle, rust 1.80 is a new language due
to the incompatible change (however inadvertent),
and should be treated as such. So I think we need
to leave 1.79 in nixpkgs, a little while longer. We can,
however, disable its hydra builds, such that
downstream will learn about the issue through
increased build times and have a chance to step up,
before their toys break."
Maybe NixOS was hit harder than others.
If you look at.
https://github.com/rust-lang/rust/issues/127343#issuecomment-2218261296
It has 56 thumbs down.
Some Reddit threads about the time crate breakage.
https://www.reddit.com/r/programming/comments/1ets4n2/type_inference_breakage_in_rust_180_has_not_been/
"That response reeks of "rules for thee, but
not for me" ... a bad look for project that wants
to be taken seriously."
https://www.reddit.com/r/rust/comments/1f88s0h/has_rust_180_broken_anyone_elses_builds/
"I'm fine with the Rust project making the call that
breakage is fine in this case, but I wish they would
then stop using guaranteed backwards compatibility
as such a prominent selling point. One of the most
advertised features of Rust is that code that builds
on any version will build on any future version
(modulo bugfixes). Which is simply not true (and
this is not the only case of things being deemed
acceptable breakage)."
Some of the users there do complain about Rust breaking.
Though others claim that since Rust 1.0, Rust breaks very
rarely. One comment points out that Rust is allowed to
break backwards compatibility in a few cases,
according to its pledge, such as type inference changes.
This does not refer to Rust editions, since those are
clearly defined to have language changes, and have automated
tools for conversion, and Rust projects compile against
the Rust edition specified by the project independent
of compiler version.
rustc/Rust does have change logs.
https://releases.rs/
and each of the releases have a "Compatibility Notes"
section, and in many of the GitHub issues, crater is
run on a lot of projects to see how many Rust libraries,
if any, are broken by the changes. Though, for bug fixes
and fixing holes in the type system, such breakage
I agree with is necessary even if unfortunate.
> > Rust does admittedly a lot of the time have tools to
> > mitigate it, but Rust sometimes go beyond that.
> > C code from 20 years ago can often be compiled
> > without modification on a new compiler, that is a common
> > experience I hear about. While I do not know if that
> > would hold true for Rust code. Though Rust has editions.
>
> Well, it is true that Rust code from 20 years ago cannot be compiled on today's
> compiler any more. ;) But please do not spread FUD, and instead stick to
> verifiable claims or cite some reasonable sources.
Sorry, but I did not spread FUD, please do not accuse
me of doing so when I did not do that. I did give an
example with the time crate, and I did give a source
regarding the time crate. And you yourself acknowledge
my example with the time crate as being a very significant
one.
> > The time crate breaking example above does not
> > seem nice.
>
> The time issue is like the biggest such issue we had ever, and indeed that did
> not go well. We should have given the ecosystem more time to update to newer
> versions of the time crate, which would have largely mitigated the impact of
> this. A mistake was made, and a *lot* of internal discussion followed to
> minimize the chance of this happening again. I hope you don't take that accident
> as being representative of regular Rust development.
Was it an accident? I thought the breakage was intentional,
and in line with Rust's guarantees on backwards
compatibility, since it was related to type inference,
and Rust is allowed to do breaking changes for that
according to its guarantees as I understand it.
Or do you mean that it was an accident that better
mitigation was not done in advance, like you describe
with giving the ecosystem more time to update?
>
Another concern I have is with Rust editions. It is
a well defined way of having language "versions",
and it does have automated conversion tools,
and Rust libraries choose themselves which
edition of Rust that they are using, independent
of the version of the compiler.
However, there are still some significant changes
to the language between editions, and that means
that to determine the correctness of Rust code, you
must know which edition it is written for.
For instance, does this code have a deadlock?
fn f(value: &RwLock<Option<bool>>) {
if let Some(x) = *value.read().unwrap() {
println!("value is {x}");
} else {
let mut v = value.write().unwrap();
if v.is_none() {
*v = Some(true);
}
}
}
The answer is that it depends on whether it is
interpreted as being in Rust edition 2021 or
Rust edition 2024. This is not as such an
issue for upgrading, since there are automated
conversion tools. But having semantic
changes like this means that programmers must
be aware of the edition that code is written in, and
when applicable, know the different semantics of
multiple editions. Rust editions are published every 3
years, containing new semantic changes typically.
There are editions Rust 2015, Rust 2018, Rust 2021,
Rust 2024.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 23:18 ` Kent Overstreet
2025-02-28 7:38 ` Ralf Jung
@ 2025-02-28 20:48 ` Ventura Jack
1 sibling, 0 replies; 194+ messages in thread
From: Ventura Jack @ 2025-02-28 20:48 UTC (permalink / raw)
To: Kent Overstreet
Cc: David Laight, Ralf Jung, Miguel Ojeda, Gary Guo, torvalds,
airlied, boqun.feng, ej, gregkh, hch, hpa, ksummit, linux-kernel,
rust-for-linux
On Thu, Feb 27, 2025 at 4:18 PM Kent Overstreet
<kent.overstreet@linux.dev> wrote:
>
>
> Those were big breaks.
>
> In rust there's only ever little, teeny tiny breaks to address soundness
> issues, and they've been pretty small and localized.
>
> If it did ever came up the kernel would be patched to fix in advance
> whatever behaviour the compiler is being changed to fix (and that'd get
> backported to stable trees as well, if necessary).
>
> It's not likely to ever come up since we're not using stdlib, and they
> won't want to break behaviour for us if at all possible.
A minor correction as I understand it; Rust is also allowed
to break for type inference changes, as was the case with the
time crate breakage, according to its backwards compatibility
guarantees. Though that hopefully rarely causes as big
problems as it did with the time crate breakage.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-27 21:33 ` Steven Rostedt
@ 2025-02-28 21:29 ` Paul E. McKenney
0 siblings, 0 replies; 194+ messages in thread
From: Paul E. McKenney @ 2025-02-28 21:29 UTC (permalink / raw)
To: Steven Rostedt
Cc: David Laight, Linus Torvalds, Martin Uecker, Ralf Jung,
Alice Ryhl, Ventura Jack, Kent Overstreet, Gary Guo, airlied,
boqun.feng, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
On Thu, Feb 27, 2025 at 04:33:19PM -0500, Steven Rostedt wrote:
> On Thu, 27 Feb 2025 20:47:22 +0000
> David Laight <david.laight.linux@gmail.com> wrote:
>
> > Except that (IIRC) it is actually valid for the compiler to write something
> > entirely unrelated to a memory location before writing the expected value.
> > (eg use it instead of stack for a register spill+reload.)
> > Not gcc doesn't do that - but the standard lets it do it.
>
> I call that a bug in the specification ;-)
Please feel free to write a working paper to get it changed. ;-)
Thanx, Paul
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-28 20:41 ` Ventura Jack
@ 2025-02-28 22:13 ` Geoffrey Thomas
2025-03-01 14:19 ` Ventura Jack
2025-03-04 18:24 ` Ralf Jung
1 sibling, 1 reply; 194+ messages in thread
From: Geoffrey Thomas @ 2025-02-28 22:13 UTC (permalink / raw)
To: Ventura Jack
Cc: Ralf Jung, Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds,
airlied, boqun.feng, david.laight.linux, ej, gregkh, hch, hpa,
ksummit, linux-kernel, rust-for-linux
On Fri, Feb 28, 2025, at 3:41 PM, Ventura Jack wrote:
>
> I did give the example of the time crate. Do you not consider
> that a very significant example of breakage? Surely, with
> as public and large an example of breakage as the time crate,
> there clearly is something.
>
> I will acknowledge that Rust editions specifically do not
> count as breaking code, though the editions feature,
> while interesting, does have some drawbacks.
>
> The time crate breakage is large from what I can tell. When I
> skim through GitHub issues in different projects,
> it apparently cost some people significant time and pain.
>
> https://github.com/NixOS/nixpkgs/issues/332957#issue-2453023525
> "Sorry for the inconvenience. I've lost a lot of the last
> week to coordinating the update, collecting broken
> packages, etc., but hopefully by spreading out the
> work from here it won't take too much of anybody
> else's time."
>
> https://github.com/NixOS/nixpkgs/issues/332957#issuecomment-2274824965
> "On principle, rust 1.80 is a new language due
> to the incompatible change (however inadvertent),
> and should be treated as such. So I think we need
> to leave 1.79 in nixpkgs, a little while longer. We can,
> however, disable its hydra builds, such that
> downstream will learn about the issue through
> increased build times and have a chance to step up,
> before their toys break."
There's two things about this specific change that I think are relevant
to a discussion about Rust in the Linux kernel that I don't think got
mentioned (apologies if they did and I missed it in this long thread).
First, the actual change was not in the Rust language; it was in the
standard library, in the alloc crate, which implemented an additional
conversion for standard library types (which is why existing code became
ambiguous). Before v6.10, the kernel had an in-tree copy/fork of the
alloc crate, and would have been entirely immune from this change. If
someone synced the in-tree copy of alloc and noticed the problem, they
could have commented out the new conversions, and the actual newer rustc
binary would have continued to compile the old kernel code.
To be clear, I do think it's good that the kernel no longer has a copy
of the Rust standard library code, and I'm not advocating going back to
the copy. But if we're comparing the willingness of languages to break
backwards compatibility in a new version, this is much more analogous to
C or C++ shipping a new function in the standard library whose name
conflicts with something the kernel is already using, not to a change in
the language semantics. My understanding is that this happened several
times when C and C++ were younger (and as a result there are now rules
about things like leading underscores, which language users seem not to
be universally aware of, and other changes are now relegated to standard
version changes).
Of course, we don't use the userspace C standard library in the kernel.
But a good part of the goal in using Rust is to work with a more
expressive language than C and in turn to reuse things that have already
been well expressed in its standard library, whereas there's much less
in the C standard library that would be prohibitive to reimplement
inside the kernel (and there's often interest in doing it differently
anyway, e.g., strscpy). I imagine that if we were to use, say, C++,
there will be similar considerations about adopting smart pointer
implementations from a good userspace libstdc++. If we were to use
Objective-C we probably wouldn't write our own -lobjc runtime from
scratch, and so forth. So, by using a more expressive language than C,
we're asking that language to supply code that otherwise would have been
covered by the kernel-internal no-stable-API rule, and we're making an
expectation of API stability for it, which is a stronger demand than we
currently make of C.
Which brings me to the second point: the reason this was painful for,
e.g., NixOS is that they own approximately none of the code that was
affected. They're a redistributor of code that other people have written
and packaged, with Cargo.toml and Cargo.lock files specifying specific
versions of crates that recursively eventually list some specific
version of the time crate. If there's something that needs to be fixed
in the time crate, every single Cargo.toml file that has a version bound
that excludes the fixed version of the time crate needs to be fixed.
Ideally, NixOS wouldn't carry this patch locally, which means they're
waiting on an upstream release of the crates that depend on the time
crate. This, then, recursively brings the problem to the crates that
depend on the crates that depend on the time crate, until you have
recursively either upgraded your versions of everything in the ecosystem
or applied distribution-specific patches. That recursive dependency walk
with volunteer FOSS maintainers in the loop at each step is painful.
There is nothing analogous in the kernel. Because of the no-stable-API
rule, nobody will find themselves needing to make a release of one
subsystem, then upgrading another subsystem to depend on that release,
then upgrading yet another subsystem in turn. They won't even need
downstream subsystem maintainers to approve any patch. They'll just make
the change in the file that needs the change and commit it. So, while a
repeat of this situation would still be visible to the kernel as a break
in backwards compatibility, the actual response to the situation would
be thousands of times less painful: apply the one-line fix to the spot
in the kernel that needs it, and then say, "If you're using Rust 1.xxx
or newer, you need kernel 6.yyy or newer or you need to cherry-pick this
patch." (You'd probably just cc -stable on the commit.) And then you're
done; there's nothing else you need to do.
There are analogously painful experiences with C/C++ compiler upgrades
if you are in the position of redistributing other people's code, as
anyone who has tried to upgrade GCC in a corporate environment with
vendored third-party libraries knows. A well-documented public example
of this is what happened when GCC dropped support for things like
implicit int: old ./configure scripts would silently fail feature
detection for features that did exist, and distributions like Fedora
would need to double-check the ./configure results and decide whether to
upgrade the library (potentially triggering downstream upgrades) or
carry a local patch. See the _multi-year_ effort around
https://fedoraproject.org/wiki/Changes/PortingToModernC
https://news.ycombinator.com/item?id=39429627
Within the Linux kernel, this class of pain doesn't arise: we aren't
using other people's packaging or other people's ./configure scripts.
We're using our own code (or we've decided we're okay acting as if we
authored any third-party code we vendor), and we have one build system
and one version of what's in the kernel tree.
So - without denying that this was a compatibility break in a way that
didn't live up to a natural reading of Rust's compatibility promise, and
without denying that for many communities other than the kernel it was a
huge pain, I think the implications for Rust in the kernel are limited.
> Another concern I have is with Rust editions. It is
> a well defined way of having language "versions",
> and it does have automated conversion tools,
> and Rust libraries choose themselves which
> edition of Rust that they are using, independent
> of the version of the compiler.
>
> However, there are still some significant changes
> to the language between editions, and that means
> that to determine the correctness of Rust code, you
> must know which edition it is written for.
>
> For instance, does this code have a deadlock?
>
> fn f(value: &RwLock<Option<bool>>) {
> if let Some(x) = *value.read().unwrap() {
> println!("value is {x}");
> } else {
> let mut v = value.write().unwrap();
> if v.is_none() {
> *v = Some(true);
> }
> }
> }
>
> The answer is that it depends on whether it is
> interpreted as being in Rust edition 2021 or
> Rust edition 2024. This is not as such an
> issue for upgrading, since there are automated
> conversion tools. But having semantic
> changes like this means that programmers must
> be aware of the edition that code is written in, and
> when applicable, know the different semantics of
> multiple editions. Rust editions are published every 3
> years, containing new semantic changes typically.
This doesn't seem particularly different from C (or C++) language
standard versions. The following code compiles successfully yet behaves
differently under --std=c23 and --std=c17 or older:
int x(void) {
auto n = 1.5;
return n * 2;
}
(inspired by https://stackoverflow.com/a/77383671/23392774)
--
Geoffrey Thomas
geofft@ldpreload.com
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-28 22:13 ` Geoffrey Thomas
@ 2025-03-01 14:19 ` Ventura Jack
0 siblings, 0 replies; 194+ messages in thread
From: Ventura Jack @ 2025-03-01 14:19 UTC (permalink / raw)
To: Geoffrey Thomas
Cc: Ralf Jung, Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds,
airlied, boqun.feng, david.laight.linux, ej, gregkh, hch, hpa,
ksummit, linux-kernel, rust-for-linux
On Fri, Feb 28, 2025 at 3:14 PM Geoffrey Thomas <geofft@ldpreload.com> wrote:
>
> On Fri, Feb 28, 2025, at 3:41 PM, Ventura Jack wrote:
> >
> > I did give the example of the time crate. Do you not consider
> > that a very significant example of breakage? Surely, with
> > as public and large an example of breakage as the time crate,
> > there clearly is something.
> >
> > I will acknowledge that Rust editions specifically do not
> > count as breaking code, though the editions feature,
> > while interesting, does have some drawbacks.
> >
> > The time crate breakage is large from what I can tell. When I
> > skim through GitHub issues in different projects,
> > it apparently cost some people significant time and pain.
> >
> > https://github.com/NixOS/nixpkgs/issues/332957#issue-2453023525
> > "Sorry for the inconvenience. I've lost a lot of the last
> > week to coordinating the update, collecting broken
> > packages, etc., but hopefully by spreading out the
> > work from here it won't take too much of anybody
> > else's time."
> >
> > https://github.com/NixOS/nixpkgs/issues/332957#issuecomment-2274824965
> > "On principle, rust 1.80 is a new language due
> > to the incompatible change (however inadvertent),
> > and should be treated as such. So I think we need
> > to leave 1.79 in nixpkgs, a little while longer. We can,
> > however, disable its hydra builds, such that
> > downstream will learn about the issue through
> > increased build times and have a chance to step up,
> > before their toys break."
>
> There's two things about this specific change that I think are relevant
> to a discussion about Rust in the Linux kernel that I don't think got
> mentioned (apologies if they did and I missed it in this long thread).
>
> First, the actual change was not in the Rust language; it was in the
> standard library, in the alloc crate, which implemented an additional
> conversion for standard library types (which is why existing code became
> ambiguous). Before v6.10, the kernel had an in-tree copy/fork of the
> alloc crate, and would have been entirely immune from this change. If
> someone synced the in-tree copy of alloc and noticed the problem, they
> could have commented out the new conversions, and the actual newer rustc
> binary would have continued to compile the old kernel code.
>
> To be clear, I do think it's good that the kernel no longer has a copy
> of the Rust standard library code, and I'm not advocating going back to
> the copy. But if we're comparing the willingness of languages to break
> backwards compatibility in a new version, this is much more analogous to
> C or C++ shipping a new function in the standard library whose name
> conflicts with something the kernel is already using, not to a change in
> the language semantics. My understanding is that this happened several
> times when C and C++ were younger (and as a result there are now rules
> about things like leading underscores, which language users seem not to
> be universally aware of, and other changes are now relegated to standard
> version changes).
>[Omitted] But if we're comparing the willingness of languages to break
> backwards compatibility in a new version, this is much more analogous to
> C or C++ shipping a new function in the standard library whose name
> conflicts with something the kernel is already using, not to a change in
> the language semantics. [Omitted]
I am not sure that this would make sense for C++, since C++
has namespaces, and thus shipping a new function should
not be an issue, I believe. For C++, I suspect it would be more
analogous to for instance adding an extra implicit conversion
of some kind, since that would fit more with changed type
inference. Has C++ done such a thing?
However, for both C and C++, the languages and standard
libraries release much less often, at least officially. And the
languages and standard libraries do not normally change
with a compiler update, or are not normally meant to. For
Rust, I suppose the lines are currently more blurred
between the sole major Rust compiler rustc, the Rust
language, and the Rust standard library, when rustc has a new
release. Some users complained that this kind of change
that affected the Rust time crate and others, should have
been put in a new Rust edition. The 1.80 was a relatively
minor rustc compiler release, not a Rust language edition
release.
Different for Rust in that it was a minor compiler release that
broke a lot, not even a new Rust edition. And also different in that
it broke what did and did not compile from what I can tell.
And Rust has long ago reached 1.0.
I wonder if this situation would still have been able to happen
if gccrs was production ready Would projects just have been able
to swith to gccrs instead? Or more easily stay on an older
release/version of rustc? I am not sure how it would all pan out.
I do dislike it a lot if C has added functions that could cause
name collisions, especially after C matured. Though I
assume that these name collisions these days at
most happen in new releases/standard versions of
the C language and library, not in compiler versions. C could
have avoided all that with features like C++ namespaces or
Rust modules/crates, but C is intentionally kept simple.
C's simplicity has various trade-offs.
> Which brings me to the second point: the reason this was painful for,
> e.g., NixOS is that they own approximately none of the code that was
> affected. They're a redistributor of code that other people have written
> and packaged, with Cargo.toml and Cargo.lock files specifying specific
> versions of crates that recursively eventually list some specific
> version of the time crate. If there's something that needs to be fixed
> in the time crate, every single Cargo.toml file that has a version bound
> that excludes the fixed version of the time crate needs to be fixed.
> Ideally, NixOS wouldn't carry this patch locally, which means they're
> waiting on an upstream release of the crates that depend on the time
> crate. This, then, recursively brings the problem to the crates that
> depend on the crates that depend on the time crate, until you have
> recursively either upgraded your versions of everything in the ecosystem
> or applied distribution-specific patches. That recursive dependency walk
> with volunteer FOSS maintainers in the loop at each step is painful.
>
> There is nothing analogous in the kernel. Because of the no-stable-API
> rule, nobody will find themselves needing to make a release of one
> subsystem, then upgrading another subsystem to depend on that release,
> then upgrading yet another subsystem in turn. They won't even need
> downstream subsystem maintainers to approve any patch. They'll just make
> the change in the file that needs the change and commit it. So, while a
> repeat of this situation would still be visible to the kernel as a break
> in backwards compatibility, the actual response to the situation would
> be thousands of times less painful: apply the one-line fix to the spot
> in the kernel that needs it, and then say, "If you're using Rust 1.xxx
> or newer, you need kernel 6.yyy or newer or you need to cherry-pick this
> patch." (You'd probably just cc -stable on the commit.) And then you're
> done; there's nothing else you need to do.
My pondering in
>> Maybe NixOS was hit harder than others.
must have been accurate then. Though some others were
hit as well, presumably typically significantly less hard than NixOS.
> There are analogously painful experiences with C/C++ compiler upgrades
> if you are in the position of redistributing other people's code, as
> anyone who has tried to upgrade GCC in a corporate environment with
> vendored third-party libraries knows. A well-documented public example
> of this is what happened when GCC dropped support for things like
> implicit int: old ./configure scripts would silently fail feature
> detection for features that did exist, and distributions like Fedora
> would need to double-check the ./configure results and decide whether to
> upgrade the library (potentially triggering downstream upgrades) or
> carry a local patch. See the _multi-year_ effort around
> https://fedoraproject.org/wiki/Changes/PortingToModernC
> https://news.ycombinator.com/item?id=39429627
Is this for a compiler version upgrade, or for a new language and
standard library release? The former happens much more often for C
than the latter.
Implicit int was not a nice feature, but its removal was also
not nice for backwards compatibility, I definitely agree about that.
But are you sure that it was entirely silent? When I run it in Godbolt
with different versions of GCC, a warning is given for many
older versions of GCC if implicit int is used. And in newer
versions, in at least some cases, a compile time error is given.
Implicit int was removed in C99, and GCC allowed it with a warning
for many years after 1999, as far as I can see.
If for many years, or multiple decades (maybe 1999 to 2022), a
warning was given, that does mitigate it a bit. But I agree
it is not nice. I suppose this is where Rust editions could help
a lot. But Rust editions are used much more frequently, much
more extensively and for much deeper changes (including
semantic changes) than this as far as I can figure out. A
Rust editions style feature, but with way more careful
and limited usage, might have been nice for the C language,
and other languages. Then again, Rust's experiment with
Rust editions, and also how Rust uses its editions feature, is
interesting, experimental and novel as far as I can figure out.
> Within the Linux kernel, this class of pain doesn't arise: we aren't
> using other people's packaging or other people's ./configure scripts.
> We're using our own code (or we've decided we're okay acting as if we
> authored any third-party code we vendor), and we have one build system
> and one version of what's in the kernel tree.
>
> So - without denying that this was a compatibility break in a way that
> didn't live up to a natural reading of Rust's compatibility promise, and
> without denying that for many communities other than the kernel it was a
> huge pain, I think the implications for Rust in the kernel are limited.
In this specific case. But does the backwards compatibility
guarantees for the Rust language that allows type inference
changes, only apply to the Rust standard library, or also
to the language?
And there are multiple parts of the Rust
standard library, "core", "alloc", "std". Can the changes
happen to the parts of the Rust standard library that
everyone necessarily uses as I understand it? On the
other hand, I would assume that will not happen, "core"
is small and fundamental as I understand it.
And it did happen with a rustc release, not a new Rust
edition.
> > Another concern I have is with Rust editions. It is
> > a well defined way of having language "versions",
> > and it does have automated conversion tools,
> > and Rust libraries choose themselves which
> > edition of Rust that they are using, independent
> > of the version of the compiler.
> >
> > However, there are still some significant changes
> > to the language between editions, and that means
> > that to determine the correctness of Rust code, you
> > must know which edition it is written for.
> >
> > For instance, does this code have a deadlock?
> >
> > fn f(value: &RwLock<Option<bool>>) {
> > if let Some(x) = *value.read().unwrap() {
> > println!("value is {x}");
> > } else {
> > let mut v = value.write().unwrap();
> > if v.is_none() {
> > *v = Some(true);
> > }
> > }
> > }
> >
> > The answer is that it depends on whether it is
> > interpreted as being in Rust edition 2021 or
> > Rust edition 2024. This is not as such an
> > issue for upgrading, since there are automated
> > conversion tools. But having semantic
> > changes like this means that programmers must
> > be aware of the edition that code is written in, and
> > when applicable, know the different semantics of
> > multiple editions. Rust editions are published every 3
> > years, containing new semantic changes typically.
>
> This doesn't seem particularly different from C (or C++) language
> standard versions. The following code compiles successfully yet behaves
> differently under --std=c23 and --std=c17 or older:
>
> int x(void) {
> auto n = 1.5;
> return n * 2;
> }
>
> (inspired by https://stackoverflow.com/a/77383671/23392774)
>
I disagree with you 100% here regarding your example.
First off, your example does not compile like you claim it does
when I try it.
#include "stdio.h"
int x(void) {
auto n = 1.5;
return n * 2;
}
int main() {
printf("%d", x());
return 0;
}
When I run it with GCC 14.2 --std=c17, or Clang 19.1.0 --std=c17,
I get compile-time errors, complaining about implicit int.
Why did you claim that it would compile successfully?
When I run it with GCC 5.1 or Clang 3.5, I get compile-time
warnings instead about implicit int. Only with --std=c23
does it compile and run.
Like, that example must have either given warnings or compile-time
errors for decades.
Second off, this appears to be a combination of two changes,
implicit int and storage-class specifier/type inference dual
meaning of `auto`.
- "Implicit int", removed in C99, compile-time warning in GCC
from perhaps 1999 to 2022, gives a compile-time error
from perhaps 2022.
- `auto` keyword in C, used originally as a storage-class
specifier, like in `auto double x`. Since `auto` is typically the
default storage-class for the cases where it can apply,
as I understand it, it was probably almost never used in
practice. In C23, they decided to reuse it for type inference
as well. C23 keeps it as a storage-class specifier. The reason
for reusing it here is probably due to the desire to avoid
collisions and to keep as much backwards compatibility
as possible, and because there were few keywords to use.
And to be more consistent with C++.
- C++ might never have allowed implicit int, I am not sure.
C++ did use the `auto` keyword as a storage-class specifier,
but removed it for that purpose in C++11, and made its
meaning to be type inference instead. But before C++11,
`auto n = 1.5` was not allowed, since implicit int was
not allowed in C++, possibly never allowed.
Even though there are probably very few programs out there
that use or used `auto` as a storage-class specifier for either
C or C++, I do dislike this change in some ways, since it could
as you say change language semantics. The combination in
your example is rare, however, and there might have been
decades of compile-time warnings or errors between. I do
not know whether it occurred in practice, since using `auto`
as a storage-class specifier must have been very rare, and
when used, the proper usage would have been more akin to
`auto int x` or `auto float x`.
And with decades of compile-time warnings, and removal from
the language for decades, this example you give here honestly
seems like an example against your points, not for your points.
I do dislike this kind of keyword reusage, even when done
very carefully, since it could lead to trouble. For C and C++,
they are heavily constrained in what they can do here,
while Rust has the option of Rust editions. But Rust editions
are used for much less careful and much deeper changes
like above, where the same code in one edition causes a
deadlock, in another does not cause a deadlock and runs.
fn f(value: &RwLock<Option<bool>>) {
if let Some(x) = *value.read().unwrap() {
println!("value is {x}");
} else {
let mut v = value.write().unwrap();
if v.is_none() {
*v = Some(true);
}
}
}
For the specific example.
https://doc.rust-lang.org/edition-guide/rust-2024/temporary-if-let-scope.html
How to handle the issue of keywords, from the perspective of
programming language design? In C and C++,
the approach appears to be, to be very careful. In Rust,
there is Rust editions, which I honestly believe can be a
good approach if used in a minimal way, maybe like rare, tiny
changes that do not change semantics, like every 20 years. Rust
on the other hand uses Rust editions to make more frequent
(every 3 years) and much deeper changes, and to semantics.
The usage that Rust has with its editions feature reminds me
more of an experimental research language, or like Scala.
On the other hand, maybe I am wrong, and it is fine for Rust
to use its editions like this. But I am very wary of it, and it seems
experimental to me. Then there are other programming
language design approaches as well, like giving keywords their
own syntactic namespace, but that can only be done when
designing a new language.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 17:47 ` Steven Rostedt
2025-02-26 22:07 ` Josh Poimboeuf
@ 2025-03-02 12:19 ` David Laight
1 sibling, 0 replies; 194+ messages in thread
From: David Laight @ 2025-03-02 12:19 UTC (permalink / raw)
To: Steven Rostedt
Cc: Kent Overstreet, James Bottomley, Greg KH, Miguel Ojeda,
Ventura Jack, H. Peter Anvin, Alice Ryhl, Linus Torvalds,
Gary Guo, airlied, boqun.feng, hch, ksummit, linux-kernel,
rust-for-linux, Ralf Jung, Josh Poimboeuf
On Wed, 26 Feb 2025 12:47:33 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:
> On Wed, 26 Feb 2025 12:41:30 -0500
> Kent Overstreet <kent.overstreet@linux.dev> wrote:
>
> > It's been awhile since I've looked at one, I've been just automatically
> > switching back to frame pointers for awhile, but - I never saw
> > inaccurate backtraces, just failure to generate a backtrace - if memory
> > serves.
>
> OK, maybe if the bug was bad enough, it couldn't get access to the ORC
> tables for some reason. Not having a backtrace on crash is not as bad as
> incorrect back traces, as the former is happening when the system is dieing
> and live kernel patching doesn't help with that.
I bet to differ.
With no backtrace you have absolutely no idea what happened.
A list of 'code addresses on the stack' (named as such) can be enough
to determine the call sequence.
Although to be really helpful you need a hexdump of the actual stack
and the stack addresses of each 'code address'.
David
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-28 15:41 ` Kent Overstreet
2025-02-28 15:46 ` Boqun Feng
@ 2025-03-04 18:12 ` Ralf Jung
1 sibling, 0 replies; 194+ messages in thread
From: Ralf Jung @ 2025-03-04 18:12 UTC (permalink / raw)
To: Kent Overstreet
Cc: David Laight, Steven Rostedt, Linus Torvalds, Martin Uecker,
Paul E. McKenney, Alice Ryhl, Ventura Jack, Gary Guo, airlied,
boqun.feng, ej, gregkh, hch, hpa, ksummit, linux-kernel,
miguel.ojeda.sandonis, rust-for-linux
Hi all,
>> Whether the compiler is permitted to do that depends heavily on what exactly
>> the code looks like, so it's hard to discuss this in the abstract.
>> If inside some function, *all* writes to a given location are atomic (I
>> think that's what you call WRITE_ONCE?), then the compiler is *not* allowed
>> to invent any new writes to that memory. The compiler has to assume that
>> there might be concurrent reads from other threads, whose behavior could
>> change from the extra compiler-introduced writes. The spec (in C, C++, and
>> Rust) already works like that.
>>
>> OTOH, the moment you do a single non-atomic write (i.e., a regular "*ptr =
>> val;" or memcpy or so), that is a signal to the compiler that there cannot
>> be any concurrent accesses happening at the moment, and therefore it can
>> (and likely will) introduce extra writes to that memory.
>
> Is that how it really works?
>
> I'd expect the atomic writes to have what we call "compiler barriers"
> before and after; IOW, the compiler can do whatever it wants with non
> atomic writes, provided it doesn't cross those barriers.
If you do a non-atomic write, and then an atomic release write, that release
write marks communication with another thread. When I said "concurrent accesses
[...] at the moment" above, the details of what exactly that means matter a lot:
by doing an atomic release write, the "moment" has passed, as now other threads
could be observing what happened.
One can get quite far thinking about these things in terms of "barriers" that
block the compiler from reordering operations, but that is not actually what
happens. The underlying model is based on describing the set of behaviors that a
program can have when using particular atomicity orderings (such as release,
acquire, relaxed); the compiler is responsible for ensuring that the resulting
program only exhibits those behaviors. An approach based on "barriers" is one,
but not the only, approach to achieve that: at least in special cases, compilers
can and do perform more optimizations. The only thing that matters is that the
resulting program still behaves as-if it was executed according to the rules of
the language, i.e., the program execution must be captured by the set of
behaviors that the atomicity memory model permits. This set of behaviors is,
btw, completely portable; this is truly an abstract semantics and not tied to
what any particular hardware does.
Now, that's the case for general C++ or Rust. The Linux kernel is special in
that its concurrency support predates the official model, so it is written in a
different style, commonly referred to as LKMM. I'm not aware of a formal study
of that model to the same level of rigor as the C++ model, so for me as a
theoretician it is much harder to properly understand what happens there,
unfortunately. My understanding is that many LKMM operations can be mapped to
equivalent C++ operations (i.e., WRITE_ONCE and READ_ONCE correspond to atomic
relaxed loads and stores). However, the LKMM also makes use of dependencies
(address and/or data dependencies? I am not sure), and unfortunately those
fundamentally clash with even basic compiler optimizations such as GVN/CSE or
algebraic simplifications, so it's not at all clear how they can even be used in
an optimizing compiler in a formally sound way (i.e., "we could, in principle,
mathematically prove that this is correct"). Finding a rigorous way to equip an
optimized language such as C, C++, or Rust with concurrency primitives that emit
the same efficient assembly code as what the LKMM can produce is, I think, an
open problem. Meanwhile, the LKMM seems to work in practice despite those
concerns, and that should apply to both C (when compiled with clang) and Rust in
the same way -- but when things go wrong, the lack of a rigorous contract will
make it harder to determine whether the bug is in the compiler or the kernel.
But again, Rust should behave exactly like clang here, so this should not be a
new concern. :)
Kind regards,
Ralf
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-28 20:41 ` Ventura Jack
2025-02-28 22:13 ` Geoffrey Thomas
@ 2025-03-04 18:24 ` Ralf Jung
2025-03-06 18:49 ` Ventura Jack
1 sibling, 1 reply; 194+ messages in thread
From: Ralf Jung @ 2025-03-04 18:24 UTC (permalink / raw)
To: Ventura Jack
Cc: Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
Hi all,
>>> The time crate breaking example above does not
>>> seem nice.
>>
>> The time issue is like the biggest such issue we had ever, and indeed that did
>> not go well. We should have given the ecosystem more time to update to newer
>> versions of the time crate, which would have largely mitigated the impact of
>> this. A mistake was made, and a *lot* of internal discussion followed to
>> minimize the chance of this happening again. I hope you don't take that accident
>> as being representative of regular Rust development.
>
> Was it an accident? I thought the breakage was intentional,
> and in line with Rust's guarantees on backwards
> compatibility, since it was related to type inference,
> and Rust is allowed to do breaking changes for that
> according to its guarantees as I understand it.
> Or do you mean that it was an accident that better
> mitigation was not done in advance, like you describe
> with giving the ecosystem more time to update?
It was an accident. We have an established process for making such changes while
keeping the ecosystem impact to a minimum, but mistakes were made and so the
ecosystem impact was beyond what we'd be willing to accept.
The key to understand here that there's a big difference between "we do a
breaking change but hardly anyone notices" and "we do a breaking change and
everyone hears about it". The accident wasn't that some code broke, the accident
was that so much code broke. As you say, we have minor breaking changes fairly
regularly, and yet all the examples you presented of people being upset were
from this one case where we screwed up. I think that shows that generally, the
process works: we can do minor breaking changes without disrupting the
ecosystem, and we can generally predict pretty well whether a change will
disrupt the ecosystem. (In this case, we actually got the prediction and it was
right! It predicted significant ecosystem breakage. But then diffusion of
responsibility happened and nobody acted on that data.)
And yes, *technically* that change was permitted as there's an exception in the
stability RFC for such type ambiguity changes. However, we're not trying to be
"technically right", we're trying to do the right thing for the ecosystem, and
the way this went, we clearly didn't do the right thing. If we had just waited
another 3 or 4 Rust releases before rolling out this change, the impact would
have been a lot smaller, and you likely would never have heard about this.
(I'm saying "we" here since I am, to an extent, representing the Rust project in
this discussion. I can't actually speak for the Rust project, so these opinions
are my own. I also was not involved in any part of the "time" debacle.)
> Another concern I have is with Rust editions. It is
> a well defined way of having language "versions",
> and it does have automated conversion tools,
> and Rust libraries choose themselves which
> edition of Rust that they are using, independent
> of the version of the compiler.
>
> However, there are still some significant changes
> to the language between editions, and that means
> that to determine the correctness of Rust code, you
> must know which edition it is written for.
There exist corner cases where that is true, yes. They are quite rare. Congrats
on finding one! But you hardly ever see such examples in practice. As above,
it's important to think of these things quantitatively, not qualitatively.
Kind regards,
Ralf
>
> For instance, does this code have a deadlock?
>
> fn f(value: &RwLock<Option<bool>>) {
> if let Some(x) = *value.read().unwrap() {
> println!("value is {x}");
> } else {
> let mut v = value.write().unwrap();
> if v.is_none() {
> *v = Some(true);
> }
> }
> }
>
> The answer is that it depends on whether it is
> interpreted as being in Rust edition 2021 or
> Rust edition 2024. This is not as such an
> issue for upgrading, since there are automated
> conversion tools. But having semantic
> changes like this means that programmers must
> be aware of the edition that code is written in, and
> when applicable, know the different semantics of
> multiple editions. Rust editions are published every 3
> years, containing new semantic changes typically.
>
> There are editions Rust 2015, Rust 2018, Rust 2021,
> Rust 2024.
>
> Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-03-04 18:24 ` Ralf Jung
@ 2025-03-06 18:49 ` Ventura Jack
0 siblings, 0 replies; 194+ messages in thread
From: Ventura Jack @ 2025-03-06 18:49 UTC (permalink / raw)
To: Ralf Jung
Cc: Kent Overstreet, Miguel Ojeda, Gary Guo, torvalds, airlied,
boqun.feng, david.laight.linux, ej, gregkh, hch, hpa, ksummit,
linux-kernel, rust-for-linux
On Tue, Mar 4, 2025 at 11:24 AM Ralf Jung <post@ralfj.de> wrote:
>
> Hi all,
>
> >>> The time crate breaking example above does not
> >>> seem nice.
> >>
> >> The time issue is like the biggest such issue we had ever, and indeed that did
> >> not go well. We should have given the ecosystem more time to update to newer
> >> versions of the time crate, which would have largely mitigated the impact of
> >> this. A mistake was made, and a *lot* of internal discussion followed to
> >> minimize the chance of this happening again. I hope you don't take that accident
> >> as being representative of regular Rust development.
> >
> > Was it an accident? I thought the breakage was intentional,
> > and in line with Rust's guarantees on backwards
> > compatibility, since it was related to type inference,
> > and Rust is allowed to do breaking changes for that
> > according to its guarantees as I understand it.
> > Or do you mean that it was an accident that better
> > mitigation was not done in advance, like you describe
> > with giving the ecosystem more time to update?
>
> It was an accident. We have an established process for making such changes while
> keeping the ecosystem impact to a minimum, but mistakes were made and so the
> ecosystem impact was beyond what we'd be willing to accept.
>
> The key to understand here that there's a big difference between "we do a
> breaking change but hardly anyone notices" and "we do a breaking change and
> everyone hears about it". The accident wasn't that some code broke, the accident
> was that so much code broke. As you say, we have minor breaking changes fairly
> regularly, and yet all the examples you presented of people being upset were
> from this one case where we screwed up. I think that shows that generally, the
> process works: we can do minor breaking changes without disrupting the
> ecosystem, and we can generally predict pretty well whether a change will
> disrupt the ecosystem. (In this case, we actually got the prediction and it was
> right! It predicted significant ecosystem breakage. But then diffusion of
> responsibility happened and nobody acted on that data.)
>
> And yes, *technically* that change was permitted as there's an exception in the
> stability RFC for such type ambiguity changes. However, we're not trying to be
> "technically right", we're trying to do the right thing for the ecosystem, and
> the way this went, we clearly didn't do the right thing. If we had just waited
> another 3 or 4 Rust releases before rolling out this change, the impact would
> have been a lot smaller, and you likely would never have heard about this.
>
> (I'm saying "we" here since I am, to an extent, representing the Rust project in
> this discussion. I can't actually speak for the Rust project, so these opinions
> are my own. I also was not involved in any part of the "time" debacle.)
These comments claim that other things went wrong as well as
I understand it.
https://internals.rust-lang.org/t/type-inference-breakage-in-1-80-has-not-been-handled-well/21374
"There has been no public communication about this.
There were no future-incompat warnings. The affected
crates weren't yanked. There wasn't even a blog post
announcing the problem ahead of time and urging users
to update the affected dependency. Even the 1.80 release
announcement didn't say a word about the incompatibility
with one of the most used Rust crates."
https://internals.rust-lang.org/t/type-inference-breakage-in-1-80-has-not-been-handled-well/21374/9
"Why yank?
These crates no longer work on any supported Rust version
(which is 1.80, because the Rust project doesn't support past
versions). They're permanently defunct.
It makes Cargo alert users of the affected versions that
there's a problem with them.
It prevents new users from locking to the broken versions.
and if yanking of them seems like a too drastic measure
or done too soon, then breaking them was also done too
hard too soon."
And the time crate issue happened less than a year ago.
One thing that confuses me is that a previous issue, said to
be similar to the time crate issue, was rejected in 2020, and
then some were considering in 2024 to do that one as well
despite it possibly having similar breakage.
https://internals.rust-lang.org/t/type-inference-breakage-in-1-80-has-not-been-handled-well/21374/19
"On the other hand, @dtolnay, who objected to
impl AsRef for Cow<'_, str> on the grounds of
type inference breakage, announced that the libs
team explictly decided to break time's type inference,
which is inconsistent. But if this was deliberate and
deemed a good outcome, perhaps that AsRef impl
should be reconsidered, after all?"
https://github.com/rust-lang/rust/pull/73390
There have been other issues as well. I searched through.
https://github.com/rust-lang/rust/issues?q=label%3A%22regression-from-stable-to-stable%22%20sort%3Acomments-desc%20
"Stable to stable regression", and a number of issues show up.
Most of these do not seem to be intentional breakage, to be fair.
Some of the issues that are relatively more recent, as in from
2020 and later, include.
https://github.com/rust-lang/rust/issues/89195
"Compilation appears to loop indefinitely"
https://github.com/tokio-rs/axum/issues/200#issuecomment-948888360
"I ran into the same problem of extremely slow
compile times on 1.56, both sondr3/cv-aas and
sondr3/web take forever to compile."
This one started as a nightly regression, but was changed
to "stable to stable regression".
https://github.com/rust-lang/rust/issues/89601
"nightly-2021-09-03: Compiler hang in project with a
lot of axum crate routes"
This one is from 2023, still open, though it may have been
solved or mitigated later for some cases.
https://github.com/rust-lang/rust/issues/115283
"Upgrade from 1.71 to 1.72 has made compilation
time of my async-heavy actix server 350 times
slower (from under 5s to 30 minutes, on a 32GB M1
Max CPU)."
This one is from 2020, still open, though with mitigation
and fixes for some cases as I understand it. 35 thumbs up.
https://github.com/rust-lang/rust/issues/75992
"I upgraded from 1.45 to 1.46 today and a crate
I'm working on seems to hang forever while compiling."
Some of the issues may be related to holes in the
type system, and therefore may be fundamentally
difficult to fix. I can imagine that there might be
some examples that are similar for C++ projects,
but C++ has a less advanced type system than Rust,
with no advanced solver, so I would guess that there
are fewer such examples for C++. And a project
can switch to a different C++ compiler. Hopefully
gccrs will be ready in the near future such that
Rust projects can do similar switching. Though as I
understand it, a lot of the type checking
implementation will be shared between rustc and
gccrs. For C, the language should be so simple that
these kinds of issues are very rare or never occurs.
> > Another concern I have is with Rust editions. It is
> > a well defined way of having language "versions",
> > and it does have automated conversion tools,
> > and Rust libraries choose themselves which
> > edition of Rust that they are using, independent
> > of the version of the compiler.
> >
> > However, there are still some significant changes
> > to the language between editions, and that means
> > that to determine the correctness of Rust code, you
> > must know which edition it is written for.
>
> There exist corner cases where that is true, yes. They are quite rare. Congrats
> on finding one! But you hardly ever see such examples in practice. As above,
> it's important to think of these things quantitatively, not qualitatively.
What do you mean "congrats"?
I think that one should consider both "quantitatively"
and also "qualitatively".
I do not know how rare they are. One can go through the changes
in the Rust editions guide and look at them. A few more I found.
I should stress that these issues have automated upgrading or
lints for them. For some of the Rust editions changes, there is
no automated upgrade tools, only lint tools.
https://doc.rust-lang.org/edition-guide/rust-2021/disjoint-capture-in-closures.html
"Changing the variables captured by a closure
can cause programs to change behavior or to stop
compiling in two cases:
changes to drop order, or when destructors run (details);
changes to which traits a closure implements (details)."
https://doc.rust-lang.org/edition-guide/rust-2024/never-type-fallback.html
"In some cases your code might depend on the
fallback type being (), so this can cause compilation
errors or changes in behavior."
I am not sure whether this has changed behavior
between editions.
https://doc.rust-lang.org/edition-guide/rust-2024/rpit-lifetime-capture.html
"Without this use<> bound, in Rust 2024, the
opaque type would capture the 'a lifetime
parameter. By adding this bound, the migration
lint preserves the existing semantics."
As far as I can tell, there are more changes in the
Rust 2024 edition than in the previous editions.
Will future Rust editions, like Rust edition 2027,
have even more changes, including more with
semantic changes?
One way to avoid some of the issues with having
to understand and keep in mind the semantic
differences between Rust editions, might be
to always upgrade a Rust project to the most
recent Rust edition, before attempting to do
maintenance or development on that project.
But upgrading to the next Rust edition might
be a fair bit of work in some cases, and require
understanding the semantic differences
between editions in some cases. Especially when
macros are involved, as I understand it. The
migration guides often have a number of steps
involved, and the migration may sometimes be
so complex that the migration is done gradually.
This guide said that upgrading from 2021 to
2024 was not a lot of work for a specific project
as I understand it, but it was still done gradually.
https://codeandbitters.com/rust-2024-upgrade/
Learning materials and documentation might also
need to be updated.
I really hope that Rust edition 2027 will have fewer,
not more, semantic changes. Rust edition 2024
seems to me to have had more semantic changes
compared to previous editions.
If the Linux kernel had 1 million LOC of Rust, and
it was desired to upgrade to a new edition, how
might that look like? Or, would the kernel just let
different Rust codebases have different editions?
Rust does enable Rust crates with different
editions to interact, as I understand it, but
at the very least, one would have to be careful
with remembering what edition one is working
in, and what the semantics are for that edition.
Does upgrading to a new edition potentially
require understanding a specific project,
or can it always be done without knowing or
understanding the specific codebase?
There are not always automated tools available
for upgrading, sometimes only lints are
available, as I understand it. Would upgrading
a Linux kernel driver written in Rust to a new
edition require understanding that driver?
If yes, it might be easier to let drivers stay
on older Rust editions in some cases.
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
* Re: C aggregate passing (Rust kernel policy)
2025-02-26 23:16 ` Linus Torvalds
` (2 preceding siblings ...)
2025-02-27 18:33 ` Ralf Jung
@ 2025-03-06 19:16 ` Ventura Jack
3 siblings, 0 replies; 194+ messages in thread
From: Ventura Jack @ 2025-03-06 19:16 UTC (permalink / raw)
To: Linus Torvalds
Cc: Kent Overstreet, Martin Uecker, Ralf Jung, Paul E. McKenney,
Alice Ryhl, Gary Guo, airlied, boqun.feng, david.laight.linux, ej,
gregkh, hch, hpa, ksummit, linux-kernel, miguel.ojeda.sandonis,
rust-for-linux
On Wed, Feb 26, 2025 at 4:17 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> On Wed, 26 Feb 2025 at 14:27, Kent Overstreet <kent.overstreet@linux.dev> wrote:
> >
> > This is another one that's entirely eliminated due to W^X references.
>
> Are you saying rust cannot have global flags?
>
> That seems unlikely. And broken if so.
>
> > IOW: if you're writing code where rematerializing reads is even a
> > _concern_ in Rust, then you had to drop to unsafe {} to do it - and your
> > code is broken, and yes it will have UB.
>
> If you need to drop to unsafe mode just to read a global flag that may
> be set concurrently, you're doing something wrong as a language
> designer.
>
> And if your language then rematerializes reads, the language is shit.
>
> Really.
>
> Linus
Rust does allow global mutable flags, but some kinds of
them are very heavily discouraged, even in unsafe Rust.
https://doc.rust-lang.org/edition-guide/rust-2024/static-mut-references.html
Best, VJ.
^ permalink raw reply [flat|nested] 194+ messages in thread
end of thread, other threads:[~2025-03-06 19:16 UTC | newest]
Thread overview: 194+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-02-22 10:06 C aggregate passing (Rust kernel policy) Ventura Jack
2025-02-22 14:15 ` Gary Guo
2025-02-22 15:03 ` Ventura Jack
2025-02-22 18:54 ` Kent Overstreet
2025-02-22 19:18 ` Linus Torvalds
2025-02-22 20:00 ` Kent Overstreet
2025-02-22 20:54 ` H. Peter Anvin
2025-02-22 21:22 ` Kent Overstreet
2025-02-22 21:46 ` Linus Torvalds
2025-02-22 22:34 ` Kent Overstreet
2025-02-22 23:56 ` Jan Engelhardt
2025-02-22 22:12 ` David Laight
2025-02-22 22:46 ` Kent Overstreet
2025-02-22 23:50 ` H. Peter Anvin
2025-02-23 0:06 ` Kent Overstreet
2025-02-22 21:22 ` Linus Torvalds
2025-02-23 15:30 ` Ventura Jack
2025-02-23 16:28 ` David Laight
2025-02-24 0:27 ` Gary Guo
2025-02-24 9:57 ` Ventura Jack
2025-02-24 10:31 ` Benno Lossin
2025-02-24 12:21 ` Ventura Jack
2025-02-24 12:47 ` Benno Lossin
2025-02-24 16:57 ` Ventura Jack
2025-02-24 22:03 ` Benno Lossin
2025-02-24 23:04 ` Ventura Jack
2025-02-25 22:38 ` Benno Lossin
2025-02-25 22:47 ` Miguel Ojeda
2025-02-25 23:03 ` Benno Lossin
2025-02-24 12:58 ` Theodore Ts'o
2025-02-24 14:47 ` Miguel Ojeda
2025-02-24 14:54 ` Miguel Ojeda
2025-02-24 16:42 ` Philip Herron
2025-02-25 15:55 ` Ventura Jack
2025-02-25 17:30 ` Arthur Cohen
2025-02-26 11:38 ` Ralf Jung
2025-02-24 15:43 ` Miguel Ojeda
2025-02-24 17:24 ` Kent Overstreet
2025-02-25 16:12 ` Alice Ryhl
2025-02-25 17:21 ` Ventura Jack
2025-02-25 17:36 ` Alice Ryhl
2025-02-25 18:16 ` H. Peter Anvin
2025-02-25 20:21 ` Kent Overstreet
2025-02-25 20:37 ` H. Peter Anvin
2025-02-26 13:03 ` Ventura Jack
2025-02-26 13:53 ` Miguel Ojeda
2025-02-26 14:07 ` Ralf Jung
2025-02-26 14:26 ` James Bottomley
2025-02-26 14:37 ` Ralf Jung
2025-02-26 14:39 ` Greg KH
2025-02-26 14:45 ` James Bottomley
2025-02-26 16:00 ` Steven Rostedt
2025-02-26 16:42 ` James Bottomley
2025-02-26 16:47 ` Kent Overstreet
2025-02-26 16:57 ` Steven Rostedt
2025-02-26 17:41 ` Kent Overstreet
2025-02-26 17:47 ` Steven Rostedt
2025-02-26 22:07 ` Josh Poimboeuf
2025-03-02 12:19 ` David Laight
2025-02-26 17:11 ` Miguel Ojeda
2025-02-26 17:42 ` Kent Overstreet
2025-02-26 12:36 ` Ventura Jack
2025-02-26 13:52 ` Miguel Ojeda
2025-02-26 15:21 ` Ventura Jack
2025-02-26 16:06 ` Ralf Jung
2025-02-26 17:49 ` Miguel Ojeda
2025-02-26 18:36 ` Ventura Jack
2025-02-26 14:14 ` Ralf Jung
2025-02-26 15:40 ` Ventura Jack
2025-02-26 16:10 ` Ralf Jung
2025-02-26 16:50 ` Ventura Jack
2025-02-26 21:39 ` Ralf Jung
2025-02-27 15:11 ` Ventura Jack
2025-02-27 15:32 ` Ralf Jung
2025-02-25 18:54 ` Linus Torvalds
2025-02-25 19:47 ` Kent Overstreet
2025-02-25 20:25 ` Linus Torvalds
2025-02-25 20:55 ` Kent Overstreet
2025-02-25 21:24 ` Linus Torvalds
2025-02-25 23:34 ` Kent Overstreet
2025-02-26 11:57 ` Gary Guo
2025-02-27 14:43 ` Ventura Jack
2025-02-26 14:26 ` Ventura Jack
2025-02-25 22:45 ` Miguel Ojeda
2025-02-26 0:05 ` Miguel Ojeda
2025-02-25 22:42 ` Miguel Ojeda
2025-02-26 14:01 ` Ralf Jung
2025-02-26 13:54 ` Ralf Jung
2025-02-26 17:59 ` Linus Torvalds
2025-02-26 19:01 ` Paul E. McKenney
2025-02-26 20:00 ` Martin Uecker
2025-02-26 21:14 ` Linus Torvalds
2025-02-26 21:21 ` Linus Torvalds
2025-02-26 22:54 ` David Laight
2025-02-27 0:35 ` Paul E. McKenney
2025-02-26 21:26 ` Steven Rostedt
2025-02-26 21:37 ` Steven Rostedt
2025-02-26 21:42 ` Linus Torvalds
2025-02-26 21:56 ` Steven Rostedt
2025-02-26 22:13 ` Steven Rostedt
2025-02-26 22:22 ` Linus Torvalds
2025-02-26 22:35 ` Steven Rostedt
2025-02-26 23:18 ` Linus Torvalds
2025-02-26 23:28 ` Steven Rostedt
2025-02-27 0:04 ` Linus Torvalds
2025-02-27 20:47 ` David Laight
2025-02-27 21:33 ` Steven Rostedt
2025-02-28 21:29 ` Paul E. McKenney
2025-02-27 21:41 ` Paul E. McKenney
2025-02-27 22:20 ` David Laight
2025-02-27 22:40 ` Paul E. McKenney
2025-02-28 7:44 ` Ralf Jung
2025-02-28 15:41 ` Kent Overstreet
2025-02-28 15:46 ` Boqun Feng
2025-02-28 16:04 ` Kent Overstreet
2025-02-28 16:13 ` Boqun Feng
2025-02-28 16:21 ` Kent Overstreet
2025-02-28 16:40 ` Boqun Feng
2025-03-04 18:12 ` Ralf Jung
2025-02-26 22:27 ` Kent Overstreet
2025-02-26 23:16 ` Linus Torvalds
2025-02-27 0:17 ` Kent Overstreet
2025-02-27 0:26 ` comex
2025-02-27 18:33 ` Ralf Jung
2025-02-27 19:15 ` Linus Torvalds
2025-02-27 19:55 ` Kent Overstreet
2025-02-27 20:28 ` Linus Torvalds
2025-02-28 7:53 ` Ralf Jung
2025-03-06 19:16 ` Ventura Jack
2025-02-27 4:18 ` Martin Uecker
2025-02-27 5:52 ` Linus Torvalds
2025-02-27 6:56 ` Martin Uecker
2025-02-27 14:29 ` Steven Rostedt
2025-02-27 17:35 ` Paul E. McKenney
2025-02-27 18:13 ` Kent Overstreet
2025-02-27 19:10 ` Paul E. McKenney
2025-02-27 18:00 ` Ventura Jack
2025-02-27 18:44 ` Ralf Jung
2025-02-27 14:21 ` Ventura Jack
2025-02-27 15:27 ` H. Peter Anvin
2025-02-28 8:08 ` Ralf Jung
2025-02-28 8:32 ` Martin Uecker
2025-02-26 20:25 ` Kent Overstreet
2025-02-26 20:34 ` Andy Lutomirski
2025-02-26 22:45 ` David Laight
2025-02-22 19:41 ` Miguel Ojeda
2025-02-22 20:49 ` Kent Overstreet
2025-02-26 11:34 ` Ralf Jung
2025-02-26 14:57 ` Ventura Jack
2025-02-26 16:32 ` Ralf Jung
2025-02-26 18:09 ` Ventura Jack
2025-02-26 22:28 ` Ralf Jung
2025-02-26 23:08 ` David Laight
2025-02-27 13:55 ` Ralf Jung
2025-02-27 17:33 ` Ventura Jack
2025-02-27 17:58 ` Ralf Jung
2025-02-27 19:06 ` Ventura Jack
2025-02-27 19:45 ` Ralf Jung
2025-02-27 20:22 ` Kent Overstreet
2025-02-27 22:18 ` David Laight
2025-02-27 23:18 ` Kent Overstreet
2025-02-28 7:38 ` Ralf Jung
2025-02-28 20:48 ` Ventura Jack
2025-02-28 20:41 ` Ventura Jack
2025-02-28 22:13 ` Geoffrey Thomas
2025-03-01 14:19 ` Ventura Jack
2025-03-04 18:24 ` Ralf Jung
2025-03-06 18:49 ` Ventura Jack
2025-02-27 17:58 ` Miguel Ojeda
2025-02-27 19:25 ` Ventura Jack
2025-02-26 19:07 ` Martin Uecker
2025-02-26 19:23 ` Ralf Jung
2025-02-26 20:22 ` Martin Uecker
[not found] <CAFJgqgRZ1w0ONj2wbcczx2=boXYHoLOd=-ke7tHGBAcifSfPUw@mail.gmail.com>
2025-02-25 15:42 ` H. Peter Anvin
2025-02-25 16:45 ` Ventura Jack
-- strict thread matches above, loose matches on Subject: below --
2025-02-09 20:56 Rust kernel policy Miguel Ojeda
2025-02-18 16:08 ` Christoph Hellwig
2025-02-18 18:46 ` Miguel Ojeda
2025-02-18 21:49 ` H. Peter Anvin
2025-02-18 22:54 ` Miguel Ojeda
2025-02-19 0:58 ` H. Peter Anvin
2025-02-19 3:04 ` Boqun Feng
2025-02-19 5:39 ` Greg KH
2025-02-20 12:28 ` Jan Engelhardt
2025-02-20 12:37 ` Greg KH
2025-02-20 13:23 ` H. Peter Anvin
2025-02-20 15:17 ` C aggregate passing (Rust kernel policy) Jan Engelhardt
2025-02-20 16:46 ` Linus Torvalds
2025-02-20 20:34 ` H. Peter Anvin
2025-02-21 8:31 ` HUANG Zhaobin
2025-02-21 18:34 ` David Laight
2025-02-21 19:12 ` Linus Torvalds
2025-02-21 20:07 ` comex
2025-02-21 21:45 ` David Laight
2025-02-22 6:32 ` Willy Tarreau
2025-02-22 6:37 ` Willy Tarreau
2025-02-22 8:41 ` David Laight
2025-02-22 9:11 ` Willy Tarreau
2025-02-21 20:06 ` Jan Engelhardt
2025-02-21 20:23 ` Laurent Pinchart
2025-02-21 20:24 ` Laurent Pinchart
2025-02-21 22:02 ` David Laight
2025-02-21 22:13 ` Bart Van Assche
2025-02-22 5:56 ` comex
2025-02-21 20:26 ` Linus Torvalds
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).