* FYI: path walking optimizations pending for 6.11
@ 2024-06-19 20:25 Linus Torvalds
2024-06-19 20:45 ` Matthew Wilcox
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Linus Torvalds @ 2024-06-19 20:25 UTC (permalink / raw)
To: Christian Brauner, Al Viro
Cc: linux-fsdevel, the arch/x86 maintainers, Linux ARM,
Linux Kernel Mailing List
I already mentioned these to Al, so he has seen most of them, because
I wanted to make sure he was ok with the link_path_walk updates. But
since he was ok (with a few comments), I cleaned things up and
separated things into branches, and here's a heads-up for a wider
audience in case anybody cares.
This all started from me doing profiling on arm64, and just being
annoyed by the code generation and some - admittedly mostly pretty
darn minor - performance issues.
It started with the arm64 user access code, moved on to
__d_lookup_rcu(), and then extended into link_path_walk(), which
together end up being the most noticeable parts of path lookup.
The user access code is mostly for strncpy_from_user() - which is the
main way the vfs layer gets the pathnames. vfs people probably don't
really care - arm people cc'd, although they've seen most of this in
earlier iterations (the minor word-at-a-time tweak is new). Same goes
for x86 people for the minor changes on that side.
I've pushed out four branches based on 6.10-rc4, because I think it's
pretty ready. But I'll rebase them if people have commentary that
needs addressing, so don't treat them as some kind of stable base yet.
My plan is to merge them during the next merge window unless somebody
screams.
The branches are:
arm64-uaccess:
arm64: access_ok() optimization
arm64: start using 'asm goto' for put_user()
arm64: start using 'asm goto' for get_user() when available
link_path_walk:
vfs: link_path_walk: move more of the name hashing into hash_name()
vfs: link_path_walk: improve may_lookup() code generation
vfs: link_path_walk: do '.' and '..' detection while hashing
vfs: link_path_walk: clarify and improve name hashing interface
vfs: link_path_walk: simplify name hash flow
runtime-constants:
arm64: add 'runtime constant' support
runtime constants: add x86 architecture support
runtime constants: add default dummy infrastructure
vfs: dcache: move hashlen_hash() from callers into d_hash()
word-at-a-time:
arm64: word-at-a-time: improve byte count calculations for LE
x86-64: word-at-a-time: improve byte count calculations
The arm64-uaccess branch is just what it says, and makes a big
difference in strncpy_from_user(). The "access_ok()" change is
certainly debatable, but I think needs to be done for sanity. I think
it's one of those "let's do it, and if it causes problems we'll have
to fix things up" things.
The link_path_walk branch is the one that changes the vfs layer the
most, but it's really mostly just a series of "fix calling conventions
of 'hash_name()' to be better".
The runtime-constants thing most people have already seen, it just
makes d_hash() avoid all indirect memory accesses.
And word-at-a-time just fixes code generation for both arm64 and
x86-64 to use better sequences.
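The zero-byte detection those word-at-a-time patches tune the code generation for is the classic mask-and-subtract trick; here is a little-endian userspace sketch (function names are illustrative, not the kernel's word-at-a-time.h helpers):

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define ONES  0x0101010101010101ULL
#define HIGHS 0x8080808080808080ULL

/* Nonzero iff some byte of v is zero; on little-endian the lowest set
 * bit marks the *first* zero byte (borrow-induced false positives can
 * only appear in higher bytes). */
static uint64_t has_zero(uint64_t v)
{
	return (v - ONES) & ~v & HIGHS;
}

/* strlen() one 64-bit word at a time -- a userspace sketch of the idea
 * (little-endian only; ignores the alignment and page-boundary handling
 * the kernel must do). */
static size_t wordwise_strlen(const char *s)
{
	size_t n = 0;
	for (;;) {
		uint64_t v, z;
		memcpy(&v, s + n, sizeof(v));
		z = has_zero(v);
		if (z)
			return n + (size_t)(__builtin_ctzll(z) >> 3);
		n += sizeof(v);
	}
}
```

The branch's patches only change how the final byte count is extracted from the mask on each architecture, not the trick itself.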
None of this should be a huge deal, but together they make the
profiles for __d_lookup_rcu(), link_path_walk() and
strncpy_from_user() look pretty much optimal.
And by "optimal" I mean "within the confines of what they do".
For example, making d_hash() avoid indirection just means that now
pretty much _all_ the cost of __d_lookup_rcu() is in the cache misses
on the hash table itself. Which was always the bulk of it. And on my
arm64 machine, it turns out that the best optimization for the load I
tested would be to make that hash table smaller to actually be a bit
denser in the cache. But that's such a load-dependent optimization
that I'm not doing this.
Tuning the hash table size or data structure cacheline layouts might
be worthwhile - and likely a bigger deal - but is _not_ what these
patches are about.
Linus
^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: FYI: path walking optimizations pending for 6.11
2024-06-19 20:25 FYI: path walking optimizations pending for 6.11 Linus Torvalds
@ 2024-06-19 20:45 ` Matthew Wilcox
2024-06-19 22:08 ` Linus Torvalds
2024-06-20 7:53 ` Arnd Bergmann
2024-07-09 10:04 ` Mark Rutland
2 siblings, 1 reply; 11+ messages in thread
From: Matthew Wilcox @ 2024-06-19 20:45 UTC (permalink / raw)
To: Linus Torvalds
Cc: Christian Brauner, Al Viro, linux-fsdevel,
the arch/x86 maintainers, Linux ARM, Linux Kernel Mailing List
On Wed, Jun 19, 2024 at 01:25:02PM -0700, Linus Torvalds wrote:
> For example, making d_hash() avoid indirection just means that now
> pretty much _all_ the cost of __d_lookup_rcu() is in the cache misses
> on the hash table itself. Which was always the bulk of it. And on my
> arm64 machine, it turns out that the best optimization for the load I
> tested would be to make that hash table smaller to actually be a bit
> denser in the cache, But that's such a load-dependent optimization
> that I'm not doing this.
>
> Tuning the hash table size or data structure cacheline layouts might
> be worthwhile - and likely a bigger deal - but is _not_ what these
> patches are about.
Funnily, I'm working on rosebush v2 today. It's in no shape to send out
(it's failing ~all of its selftests) but *should* greatly improve the
cache friendliness of the hash table. And it's being written with the
dcache as its first customer.
* Re: FYI: path walking optimizations pending for 6.11
2024-06-19 20:45 ` Matthew Wilcox
@ 2024-06-19 22:08 ` Linus Torvalds
2024-06-20 18:53 ` Kent Overstreet
2024-06-21 20:04 ` Matthew Wilcox
0 siblings, 2 replies; 11+ messages in thread
From: Linus Torvalds @ 2024-06-19 22:08 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Christian Brauner, Al Viro, linux-fsdevel,
the arch/x86 maintainers, Linux ARM, Linux Kernel Mailing List
On Wed, 19 Jun 2024 at 13:45, Matthew Wilcox <willy@infradead.org> wrote:
>
> Funnily, I'm working on rosebush v2 today. It's in no shape to send out
> (it's failing ~all of its selftests) but *should* greatly improve the
> cache friendliness of the hash table. And it's being written with the
> dcache as its first customer.
I'm interested to see if you can come up with something decent, but
I'm not hugely optimistic.
From what I saw, you planned on comparing with rhashtable hash chains of 10.
But that's not what the dentry cache uses at all. rhashtable is way
too slow. It's been ages since I ran the numbers, but the dcache array
is just sized to be "large enough".
In fact, my comment about my workload being better if the hash table
was smaller was because we really are pretty aggressive with the
dcache hash table size. I think our scaling factor is 13 - as in "one
entry per 8kB of memory".
Which is almost certainly wasting memory, but name lookup really does
show up as a hot thing on many loads.
Anyway, what it means is that the dcache hash chain is usually *one*.
Not ten. And has none of the rhashtable overheads.
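For scale, the bucket-count arithmetic behind "scaling factor 13" looks roughly like this (an illustrative sketch, not the kernel's actual alloc_large_system_hash()):

```c
#include <stdint.h>

/* One hash bucket per 2^scale bytes of RAM, rounded down to a power
 * of two so lookups can mask instead of divide. Scale 13 gives one
 * bucket per 8 KiB. (Illustrative sketch only.) */
static uint64_t dhash_buckets(uint64_t mem_bytes, unsigned int scale)
{
	uint64_t n = mem_bytes >> scale;
	while (n & (n - 1))	/* clear low bits until a power of two remains */
		n &= n - 1;
	return n;
}
```

On a 64GB box that gives 2^23 (about 8.4 million) buckets - 64MB of head pointers alone - so a typical desktop's dentry count really does leave chains of length about one, and a denser table could plausibly be more cache-friendly.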
So if your "use linear lookups to make the lookup faster" depends on
comparing with ten entry chains of rhashtable, you might be in for a
very nasty surprise.
In my profiles, the first load of the hash table tends to be the
expensive one. Not the chain following.
Of course, my profiles are not only of just one random load, they are
also skewed by the fact that I reboot so much. So maybe my dentry
cache just doesn't grow sufficiently big during my testing, and thus
my numbers are skewed even for just my own loads.
Benchmarking is hard.
Anyway, that was just a warning that if you're comparing against
rhashtable, you have almost certainly already lost before you even got
started.
Linus
* Re: FYI: path walking optimizations pending for 6.11
2024-06-19 22:08 ` Linus Torvalds
@ 2024-06-20 18:53 ` Kent Overstreet
2024-06-21 20:04 ` Matthew Wilcox
1 sibling, 0 replies; 11+ messages in thread
From: Kent Overstreet @ 2024-06-20 18:53 UTC (permalink / raw)
To: Linus Torvalds
Cc: Matthew Wilcox, Christian Brauner, Al Viro, linux-fsdevel,
the arch/x86 maintainers, Linux ARM, Linux Kernel Mailing List
On Wed, Jun 19, 2024 at 03:08:47PM -0700, Linus Torvalds wrote:
> On Wed, 19 Jun 2024 at 13:45, Matthew Wilcox <willy@infradead.org> wrote:
> >
> > Funnily, I'm working on rosebush v2 today. It's in no shape to send out
> > (it's failing ~all of its selftests) but *should* greatly improve the
> > cache friendliness of the hash table. And it's being written with the
> > dcache as its first customer.
>
> I'm interested to see if you can come up with something decent, but
> I'm not hugely optimistic.
>
> From what I saw, you planned on comparing with rhashtable hash chains of 10.
>
> But that's not what the dentry cache uses at all. rhashtable is way
> too slow. It's been ages since I ran the numbers, but the dcache array
> is just sized to be "large enough".
>
> In fact, my comment about my workload being better if the hash table
> was smaller was because we really are pretty aggressive with the
> dcache hash table size. I think our scaling factor is 13 - as in "one
> entry per 8kB of memory".
>
> Which is almost certainly wasting memory, but name lookup really does
> show up as a hot thing on many loads.
>
> Anyway, what it means is that the dcache hash chain is usually *one*.
> Not ten. And has none of the rhashtable overheads.
>
> So if your "use linear lookups to make the lookup faster" depends on
> comparing with ten entry chains of rhashtable, you might be in for a
> very nasty surprise.
>
> In my profiles, the first load of the hash table tends to be the
> expensive one. Not the chain following.
>
> Of course, my profiles are not only just one random load, they are
> also skewed by the fact that I reboot so much. So maybe my dentry
> cache just doesn't grow sufficiently big during my testing, and thus
> my numbers are skewed even for just my own loads.
>
> Benchmarking is hard.
>
> Anyway, that was just a warning that if you're comparing against
> rhashtable, you have almost certainly already lost before you even got
> started.
The main room I see for improvement is that rhashtable requires two
dependent loads to get to the hash slot; that could be avoided by
stuffing the table size in the low bits of the table pointer.
Unfortunately, the hash seed is also in the table.
If only we had a way to read/write 16 bytes atomically...
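The low-bits trick Kent alludes to can be sketched like this: with the table allocated at (say) 64-byte alignment, the low six bits of its address are free to carry log2 of the bucket count, so one pointer-sized atomic load yields both. All names here are made up for illustration:

```c
#include <assert.h>
#include <stdint.h>

#define SIZE_BITS 6u
#define SIZE_MASK ((uintptr_t)((1u << SIZE_BITS) - 1))

/* Pack log2(nr_buckets) into the low bits of an aligned table pointer. */
static uintptr_t pack_table(void *table, unsigned int log2_buckets)
{
	uintptr_t p = (uintptr_t)table;
	assert(!(p & SIZE_MASK) && log2_buckets <= SIZE_MASK);
	return p | log2_buckets;
}

static void *unpack_ptr(uintptr_t packed)
{
	return (void *)(packed & ~SIZE_MASK);
}

static unsigned int unpack_log2(uintptr_t packed)
{
	return (unsigned int)(packed & SIZE_MASK);
}
```

The hash seed would still need a second load from somewhere else - which is exactly why Kent wishes for a 16-byte atomic read.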
* Re: FYI: path walking optimizations pending for 6.11
2024-06-19 22:08 ` Linus Torvalds
2024-06-20 18:53 ` Kent Overstreet
@ 2024-06-21 20:04 ` Matthew Wilcox
2024-06-21 20:53 ` Linus Torvalds
1 sibling, 1 reply; 11+ messages in thread
From: Matthew Wilcox @ 2024-06-21 20:04 UTC (permalink / raw)
To: Linus Torvalds
Cc: Christian Brauner, Al Viro, linux-fsdevel,
the arch/x86 maintainers, Linux ARM, Linux Kernel Mailing List,
kernel test robot
On Wed, Jun 19, 2024 at 03:08:47PM -0700, Linus Torvalds wrote:
> On Wed, 19 Jun 2024 at 13:45, Matthew Wilcox <willy@infradead.org> wrote:
> >
> > Funnily, I'm working on rosebush v2 today. It's in no shape to send out
> > (it's failing ~all of its selftests) but *should* greatly improve the
> > cache friendliness of the hash table. And it's being written with the
> > dcache as its first customer.
>
> I'm interested to see if you can come up with something decent, but
> I'm not hugely optimistic.
Well, I've now come up with something _working_. There are still things
to be fixed, but it might be interesting for a performance comparison.
So I've pushed it out to
http://git.infradead.org/?p=users/willy/pagecache.git;a=shortlog;h=refs/heads/rosebush
where I hope 01.org will pick up on it and run some performance tests.
> From what I saw, you planned on comparing with rhashtable hash chains of 10.
That was the comparison I made (and it turns out I misunderstood
rhashtable entirely; the length is where it does an emergency resize,
and usually its size is such that the average hash chain length is <1).
What I was reacting to in your email was this:
: And on my arm64 machine, it turns out that the best optimization for the
: load I tested would be to make that hash table smaller to actually be a
: bit denser in the cache, But that's such a load-dependent optimization
: that I'm not doing this.
And that's exactly what rosebush does; it starts out incredibly small
(512 bytes) and then resizes as the buckets overflow. So if you suspect
that a denser hashtable would give you better performance, then maybe
it'll help.
Or maybe not; it's not like I've done thorough testing.
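The grow-on-overflow idea described above - start tiny, stay dense, and resize only when a bucket actually fills - can be sketched as a toy, which is emphatically not the actual rosebush code:

```c
/* Toy grow-on-overflow hash: fixed-size buckets, table doubles when a
 * bucket overflows, so small workloads stay dense in cache.
 * Illustrative only; keys must be nonzero and distinct. */
#include <stdint.h>
#include <stdlib.h>

#define BUCKET_SLOTS 4

struct toy_table {
	uint64_t *slots;		/* nr_buckets * BUCKET_SLOTS keys, 0 = empty */
	unsigned int nr_buckets;	/* always a power of two */
};

static void toy_init(struct toy_table *t)
{
	t->nr_buckets = 2;	/* start very small, like rosebush's 512 bytes */
	t->slots = calloc(t->nr_buckets * BUCKET_SLOTS, sizeof(uint64_t));
}

static int bucket_insert(uint64_t *b, uint64_t key)
{
	for (int i = 0; i < BUCKET_SLOTS; i++)
		if (!b[i]) { b[i] = key; return 1; }
	return 0;		/* bucket full */
}

static void toy_insert(struct toy_table *t, uint64_t key);

static void toy_grow(struct toy_table *t)
{
	uint64_t *old = t->slots;
	unsigned int old_n = t->nr_buckets;

	t->nr_buckets *= 2;
	t->slots = calloc(t->nr_buckets * BUCKET_SLOTS, sizeof(uint64_t));
	for (unsigned int i = 0; i < old_n * BUCKET_SLOTS; i++)
		if (old[i])
			toy_insert(t, old[i]);
	free(old);
}

static void toy_insert(struct toy_table *t, uint64_t key)
{
	while (!bucket_insert(&t->slots[(key & (t->nr_buckets - 1)) * BUCKET_SLOTS], key))
		toy_grow(t);
}

static int toy_lookup(struct toy_table *t, uint64_t key)
{
	uint64_t *b = &t->slots[(key & (t->nr_buckets - 1)) * BUCKET_SLOTS];

	for (int i = 0; i < BUCKET_SLOTS; i++)
		if (b[i] == key)
			return 1;
	return 0;
}
```

The design choice being tested against the dcache is exactly the one Linus questions: the table only gets as big as the load demands, instead of being provisioned up front from total RAM.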
* Re: FYI: path walking optimizations pending for 6.11
2024-06-21 20:04 ` Matthew Wilcox
@ 2024-06-21 20:53 ` Linus Torvalds
0 siblings, 0 replies; 11+ messages in thread
From: Linus Torvalds @ 2024-06-21 20:53 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Christian Brauner, Al Viro, linux-fsdevel,
the arch/x86 maintainers, Linux ARM, Linux Kernel Mailing List,
kernel test robot
On Fri, 21 Jun 2024 at 13:04, Matthew Wilcox <willy@infradead.org> wrote:
>
> What I was reacting to in your email was this:
>
> : And on my arm64 machine, it turns out that the best optimization for the
> : load I tested would be to make that hash table smaller to actually be a
> : bit denser in the cache, But that's such a load-dependent optimization
> : that I'm not doing this.
>
> And that's exactly what rosebush does; it starts out incredibly small
> (512 bytes) and then resizes as the buckets overflow. So if you suspect
> that a denser hashtable would give you better performance, then maybe
> it'll help.
Well, I was more going "ok, on the exact load _I_ was running, it
would probably help to use a smaller hash table", but I suspect that
in real life our actual hash tables are better.
My benchmark is somewhat real-world in that yes, I benchmark what I
do. But what I do is ridiculously limited. Using git and building
kernels and running a web browser for email does not require 64GB of
RAM.
But that's what I have in what is now my "small" machine, literally
because I wanted to populate every memory channel.
Not because I needed the size, but because I wanted the memory channel
bandwidth.
IOW, my machines tend to be completely over-specced wrt memory. The
kernel build can use about as many cores as you can throw at it, but
even with multiple trees, and everything cached, and hundreds of
parallel compilers going, I just don't use that much RAM. The kernel
build system is pretty damn lean (ask the poor people who do GUI tools
with C++ and the situation changes, but the kernel build is actually
pretty good on resource use).
So the kernel - very reasonably - provisions me with a big hash table,
because I literally have memory to waste.
And it turns out that _all_ I do on the arm64 box in particular (it's
headless, so not even a web browser) is build kernels, so I could
"tweak" the config for that.
But while it might benchmark better, it would likely not be better in reality.
I'm going to be on the road this weekend, but if you have something
that you think is past the "debug build" stage and is worth
benchmarking, I can try to run it on my machines next week.
Linus
* Re: FYI: path walking optimizations pending for 6.11
2024-06-19 20:25 FYI: path walking optimizations pending for 6.11 Linus Torvalds
2024-06-19 20:45 ` Matthew Wilcox
@ 2024-06-20 7:53 ` Arnd Bergmann
2024-06-20 17:25 ` Linus Torvalds
2024-07-09 10:04 ` Mark Rutland
2 siblings, 1 reply; 11+ messages in thread
From: Arnd Bergmann @ 2024-06-20 7:53 UTC (permalink / raw)
To: Linus Torvalds, Christian Brauner, Alexander Viro
Cc: linux-fsdevel, the arch/x86 maintainers, Linux ARM,
Linux Kernel Mailing List, Alexei Starovoitov
On Wed, Jun 19, 2024, at 22:25, Linus Torvalds wrote:
> The arm64-uaccess branch is just what it says, and makes a big
> difference in strncpy_from_user(). The "access_ok()" change is
> certainly debatable, but I think needs to be done for sanity. I think
> it's one of those "let's do it, and if it causes problems we'll have
> to fix things up" things.
I'm a bit worried about the access_ok() being so different from
the other architectures, after I previously saw all the ways
it could go wrong because of subtle differences.
I don't mind making the bit that makes the untagging
unconditional, and I can see how this improves code
generation. I've tried comparing your version against
the more conventional
static inline int access_ok(const void __user *p, unsigned long size)
{
	return likely(__access_ok(untagged_addr(p), size));
}
Using gcc-13.2, I see your version is likely better in all
cases, but not by much: for the constant-length case, it
saves only one instruction (combining the untagging with the
limit), while for a variable length it avoids a branch.
On a 24MB kernel image, I see this add up to a size difference
of 12KB, while the total savings from avoiding the conditional
untagging are 76KB.
Do you see a measurable performance difference between your
version and the one above?
On a related note, I see that there is one caller of
__access_ok() in common code, and this was added in
d319f344561d ("mm: Fix copy_from_user_nofault().").
I think that one should just go back to using access_ok()
after your 6ccdc91d6af9 ("x86: mm: remove
architecture-specific 'access_ok()' define"). In the
current version, it otherwise misses the untagging
on arm64.
Arnd
* Re: FYI: path walking optimizations pending for 6.11
2024-06-20 7:53 ` Arnd Bergmann
@ 2024-06-20 17:25 ` Linus Torvalds
0 siblings, 0 replies; 11+ messages in thread
From: Linus Torvalds @ 2024-06-20 17:25 UTC (permalink / raw)
To: Arnd Bergmann
Cc: Christian Brauner, Alexander Viro, linux-fsdevel,
the arch/x86 maintainers, Linux ARM, Linux Kernel Mailing List,
Alexei Starovoitov
On Thu, 20 Jun 2024 at 00:53, Arnd Bergmann <arnd@arndb.de> wrote:
>
> I don't mind making the bit that makes the untagging
> unconditional, and I can see how this improves code
> generation. I've tried comparing your version against
> the more conventional
>
> static inline int access_ok(const void __user *p, unsigned long size)
> {
>         return likely(__access_ok(untagged_addr(p), size));
> }
Oh, I'd be ok with that.
That "access_ok()" thing was actually the first thing I did, before
doing all the asm goto fixes and making the arm64 "unsafe" user access
functions work. I may have gone a bit overboard when compensating for
all the other crap the generated code had.
That said, the size check really is of dubious value, and the bit
games did make the code nice and efficient.
But yeah, maybe I made it a bit *too* subtle in the process.
> On a related note, I see that there is one caller of
> __access_ok() in common code, and this was added in
> d319f344561d ("mm: Fix copy_from_user_nofault().").
Hmm. That _is_ ugly. But I do think that the untagging is very much a
per-thread state (well, it *should* be per-VM, but that is a whole
other discussion), and so the rationale for _why_ that code doesn't do
untagging is still very very true.
Yes, the x86 code no longer has a WARN for that case, but the arm64
code really *would* be horribly broken if the code just untagged based
on random thread data.
Of course, in the end that's just one more reason to consider the
current arm64 tagging model completely broken.
But my point is: copy_from_user_nofault() can be called from random
contexts, and as long as that is the case - and as long as we still
make the untagging some per-thread thing - that code must not do
untagging because it's in the wrong context to actually do that
correctly.
And no, the way the arm64 hardware setup works, none of this matters.
The arm64 untagging really *is* unconditional on a hw setup level,
which took me by surprise.
Linus
* Re: FYI: path walking optimizations pending for 6.11
2024-06-19 20:25 FYI: path walking optimizations pending for 6.11 Linus Torvalds
2024-06-19 20:45 ` Matthew Wilcox
2024-06-20 7:53 ` Arnd Bergmann
@ 2024-07-09 10:04 ` Mark Rutland
2024-07-09 14:28 ` Linus Torvalds
2 siblings, 1 reply; 11+ messages in thread
From: Mark Rutland @ 2024-07-09 10:04 UTC (permalink / raw)
To: Linus Torvalds
Cc: Christian Brauner, Al Viro, linux-fsdevel,
the arch/x86 maintainers, Linux ARM, Linux Kernel Mailing List
Hi Linus,
On Wed, Jun 19, 2024 at 01:25:02PM -0700, Linus Torvalds wrote:
> I've pushed out four branches based on 6.10-rc4, because I think it's
> pretty ready. But I'll rebase them if people have commentary that
> needs addressing, so don't treat them as some kind of stable base yet.
> My plan is to merge them during the next merge window unless somebody
> screams.
>
> The branches are:
>
> arm64-uaccess:
> arm64: access_ok() optimization
> arm64: start using 'asm goto' for put_user()
> arm64: start using 'asm goto' for get_user() when available
> runtime-constants:
> arm64: add 'runtime constant' support
> runtime constants: add x86 architecture support
> runtime constants: add default dummy infrastructure
> vfs: dcache: move hashlen_hash() from callers into d_hash()
Apologies, the arm64 branches/patches have been on my TODO list for
review/test/benchmark for the last couple of weeks, but I haven't had
both the time and machine availability to do so.
Looking at the arm64 runtime constants patch, I see there's a redundant
store in __runtime_fixup_16(), which I think is just a leftover from
applying the last round of feedback:
+/* 16-bit immediate for wide move (movz and movk) in bits 5..20 */
+static inline void __runtime_fixup_16(__le32 *p, unsigned int val)
+{
+	u32 insn = le32_to_cpu(*p);
+	insn &= 0xffe0001f;
+	insn |= (val & 0xffff) << 5;
+	*p = insn;
+	*p = cpu_to_le32(insn);
+}
... i.e. the first assignment to '*p' should go; the compiler should be
smart enough to elide it entirely, but it shouldn't be there.
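With the stray store dropped, the fixup reduces to the following - a standalone userspace rendition, with uint32_t standing in for __le32 and a little-endian host assumed so the endianness conversions are no-ops:

```c
#include <stdint.h>

/* The 16-bit immediate of an AArch64 wide move (movz/movk) lives in
 * bits 5..20 of the instruction word: clear that field, then insert
 * the new value. */
static void runtime_fixup_16(uint32_t *p, unsigned int val)
{
	uint32_t insn = *p;

	insn &= 0xffe0001f;		/* clear the imm16 field */
	insn |= (val & 0xffff) << 5;	/* insert the new immediate */
	*p = insn;
}
```

For example, patching 0xd2800000 ("movz x0, #0") with 0x1234 yields the encoding of "movz x0, #0x1234".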
For the sake of review, would you be happy to post the uaccess and
runtime-constants patches to the list again? I think there might be some
remaining issues with (real) PAN and we might need to do a bit more
preparatory work there.
Mark.
* Re: FYI: path walking optimizations pending for 6.11
2024-07-09 10:04 ` Mark Rutland
@ 2024-07-09 14:28 ` Linus Torvalds
2024-07-09 16:24 ` Linus Torvalds
0 siblings, 1 reply; 11+ messages in thread
From: Linus Torvalds @ 2024-07-09 14:28 UTC (permalink / raw)
To: Mark Rutland
Cc: Christian Brauner, Al Viro, linux-fsdevel,
the arch/x86 maintainers, Linux ARM, Linux Kernel Mailing List
On Tue, 9 Jul 2024 at 03:04, Mark Rutland <mark.rutland@arm.com> wrote:
>
> Looking at the arm64 runtime constants patch, I see there's a redundant
> store in __runtime_fixup_16(), which I think is just a leftover from
> applying the last roudn or feedback:
Duh, yes.
> For the sake of review, would you be happy to post the uaccess and
> runtime-constants patches to the list again? I think there might be some
> remaining issues with (real) PAN and we might need to do a bit more
> preparatory work there.
Sure. I'll fix that silly left-over store, and post again.
Linus
* Re: FYI: path walking optimizations pending for 6.11
2024-07-09 14:28 ` Linus Torvalds
@ 2024-07-09 16:24 ` Linus Torvalds
0 siblings, 0 replies; 11+ messages in thread
From: Linus Torvalds @ 2024-07-09 16:24 UTC (permalink / raw)
To: Mark Rutland
Cc: Christian Brauner, Al Viro, linux-fsdevel,
the arch/x86 maintainers, Linux ARM, Linux Kernel Mailing List
On Tue, 9 Jul 2024 at 07:28, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> > For the sake of review, would you be happy to post the uaccess and
> > runtime-constants patches to the list again? I think there might be some
> > remaining issues with (real) PAN and we might need to do a bit more
> > preparatory work there.
>
> Sure. I'll fix that silly left-over store, and post again.
I only posted the (unchanged) arm64-uaccess series and the (fixed, as
per your comment today) runtime-constants one. And I only posted it to
the linux-arm-kernel list, not wanting to bother everybody.
I have two other branches in my git tree if people care:
link_path_walk and word-at-a-time. The word-at-a-time one does touch
arm64 files too, but it's pretty trivial:
arch/arm64/include/asm/word-at-a-time.h | 11 +++--------
1 file changed, 3 insertions(+), 8 deletions(-)
and really only involves a better instruction choice. It really only
matters if you end up looking at the generated code of link_path_walk
and strncpy_from_user().
The commit message at the top of that branch is a lot more verbose
than the actual change, because I ended up just explaining the
different phases of the zero detection more than the actual trivial
change to it.
All four branches are available in my regular tree at
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
as 'arm64-uaccess', 'link_path_walk', 'runtime-constants', and 'word-at-a-time'.
But unlike my normal mainline branch, I still rebase these based on
feedback, so consider them unstable.
IOW, don't pull them into any tree to be used: just use "git fetch" to
look at the branches in your local tree instead, or use it for some
ephemeral testing branch.
Linus