* [PATCH] x86/ioperm: Use atomic64_inc_return() in ksys_ioperm()
@ 2024-10-07 8:33 Uros Bizjak
2024-10-24 15:20 ` Dave Hansen
0 siblings, 1 reply; 9+ messages in thread
From: Uros Bizjak @ 2024-10-07 8:33 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Uros Bizjak, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin
Use atomic64_inc_return(&ref) instead of atomic64_add_return(1, &ref)
to use the optimized implementation and ease register pressure around
the primitive on targets that implement an optimized variant.
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
arch/x86/kernel/ioport.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/ioport.c b/arch/x86/kernel/ioport.c
index e2fab3ceb09f..6290dd120f5e 100644
--- a/arch/x86/kernel/ioport.c
+++ b/arch/x86/kernel/ioport.c
@@ -144,7 +144,7 @@ long ksys_ioperm(unsigned long from, unsigned long num, int turn_on)
* Update the sequence number to force a TSS update on return to
* user mode.
*/
- iobm->sequence = atomic64_add_return(1, &io_bitmap_sequence);
+ iobm->sequence = atomic64_inc_return(&io_bitmap_sequence);
return 0;
}
--
2.46.2
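The comment in the hunk above says the sequence number forces a TSS update on
return to user mode. A minimal userspace model of that scheme, with
hypothetical simplified types (not the kernel's actual struct io_bitmap or
TSS layout), looks like:

```c
#include <assert.h>
#include <stdatomic.h>

/*
 * Hypothetical, simplified model of the io_bitmap sequence scheme.
 * A global sequence is bumped whenever a task's bitmap changes; each
 * CPU refreshes its cached TSS copy only when the sequence has moved.
 */
static atomic_ullong io_bitmap_sequence;

struct io_bitmap { unsigned long long sequence; /* bitmap words elided */ };
struct cpu_tss    { unsigned long long last_sequence; int refreshes; };

/* The patched line: take the post-increment value of the global sequence. */
static void ksys_ioperm_model(struct io_bitmap *iobm)
{
	iobm->sequence = atomic_fetch_add(&io_bitmap_sequence, 1) + 1;
}

/* On return to user mode, copy the bitmap only if the sequence changed. */
static void exit_to_user_mode_model(struct cpu_tss *tss, struct io_bitmap *iobm)
{
	if (tss->last_sequence != iobm->sequence) {
		tss->last_sequence = iobm->sequence;
		tss->refreshes++;	/* stands in for the bitmap copy */
	}
}
```

After one ksys_ioperm_model() call, repeated returns to user mode trigger only
a single refresh until the sequence is bumped again.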
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH] x86/ioperm: Use atomic64_inc_return() in ksys_ioperm()
2024-10-07 8:33 Uros Bizjak
@ 2024-10-24 15:20 ` Dave Hansen
2024-10-24 16:20 ` Uros Bizjak
0 siblings, 1 reply; 9+ messages in thread
From: Dave Hansen @ 2024-10-24 15:20 UTC (permalink / raw)
To: Uros Bizjak, x86, linux-kernel
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin
On 10/7/24 01:33, Uros Bizjak wrote:
> Use atomic64_inc_return(&ref) instead of atomic64_add_return(1, &ref)
> to use optimized implementation and ease register pressure around
> the primitive for targets that implement optimized variant.
Ease register pressure at the end of a syscall?
I'll accept that we're doing this just as a matter of hygiene. But it's
a stretch to say there are any performance concerns whatsoever at the
end of the ioperm() syscall.
So what is the real reason for this patch?
* Re: [PATCH] x86/ioperm: Use atomic64_inc_return() in ksys_ioperm()
2024-10-24 15:20 ` Dave Hansen
@ 2024-10-24 16:20 ` Uros Bizjak
2024-10-25 17:12 ` H. Peter Anvin
0 siblings, 1 reply; 9+ messages in thread
From: Uros Bizjak @ 2024-10-24 16:20 UTC (permalink / raw)
To: Dave Hansen
Cc: x86, linux-kernel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin
On Thu, Oct 24, 2024 at 5:21 PM Dave Hansen <dave.hansen@intel.com> wrote:
>
> On 10/7/24 01:33, Uros Bizjak wrote:
> > Use atomic64_inc_return(&ref) instead of atomic64_add_return(1, &ref)
> > to use optimized implementation and ease register pressure around
> > the primitive for targets that implement optimized variant.
>
> Ease register pressure at the end of a syscall?
>
> I'll accept that we're doing this just as a matter of hygiene. But it's
> a stretch to say there are any performance concerns whatsoever at the
> end of the ioperm() syscall.
>
> So what is the real reason for this patch?
Please see code dumps for i386, a target that implements atomic64_inc_return():
1a9: 8d 04 95 04 00 00 00 lea 0x4(,%edx,4),%eax
1b0: b9 00 00 00 00 mov $0x0,%ecx
1b1: R_386_32 .bss
1b5: 89 43 0c mov %eax,0xc(%ebx)
1b8: 31 d2 xor %edx,%edx
1ba: b8 01 00 00 00 mov $0x1,%eax
1bf: e8 fc ff ff ff call 1c0 <ksys_ioperm+0xa8>
1c0: R_386_PC32 atomic64_add_return_cx8
1c4: 89 03 mov %eax,(%ebx)
1c6: 89 53 04 mov %edx,0x4(%ebx)
vs. improved:
1a9: 8d 04 95 04 00 00 00 lea 0x4(,%edx,4),%eax
1b0: be 00 00 00 00 mov $0x0,%esi
1b1: R_386_32 .bss
1b5: 89 43 0c mov %eax,0xc(%ebx)
1b8: e8 fc ff ff ff call 1b9 <ksys_ioperm+0xa1>
1b9: R_386_PC32 atomic64_inc_return_cx8
1bd: 89 03 mov %eax,(%ebx)
1bf: 89 53 04 mov %edx,0x4(%ebx)
There is no need to initialize the %eax/%edx register pair before the
"call" to the atomic64_inc_return() function. The "call" is not an ABI
function call, but an asm volatile (which BTW lacks
ASM_CALL_CONSTRAINT), so there are no ABI guarantees about which
registers are call-preserved and which are call-clobbered.
Oh, this is the "return" variant - the function indeed returns the
new value in the %eax/%edx pair, so the difference is only in the
redundant register initialization. I can reword the commit message for
this case to mention that the initialization of the register pair is
spared before the call.
Uros.
* [PATCH] x86/ioperm: Use atomic64_inc_return() in ksys_ioperm()
@ 2024-10-24 18:27 Uros Bizjak
0 siblings, 0 replies; 9+ messages in thread
From: Uros Bizjak @ 2024-10-24 18:27 UTC (permalink / raw)
To: x86, linux-kernel
Cc: Uros Bizjak, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin
Use atomic64_inc_return(&ref) instead of atomic64_add_return(1, &ref)
to use the optimized implementation on targets that define
atomic64_inc_return() and to remove the now unneeded initialization of
the %eax/%edx register pair before the call to atomic64_inc_return().
On x86_32 the code improves from:
1b0: b9 00 00 00 00 mov $0x0,%ecx
1b1: R_386_32 .bss
1b5: 89 43 0c mov %eax,0xc(%ebx)
1b8: 31 d2 xor %edx,%edx
1ba: b8 01 00 00 00 mov $0x1,%eax
1bf: e8 fc ff ff ff call 1c0 <ksys_ioperm+0xa8>
1c0: R_386_PC32 atomic64_add_return_cx8
1c4: 89 03 mov %eax,(%ebx)
1c6: 89 53 04 mov %edx,0x4(%ebx)
to:
1b0: be 00 00 00 00 mov $0x0,%esi
1b1: R_386_32 .bss
1b5: 89 43 0c mov %eax,0xc(%ebx)
1b8: e8 fc ff ff ff call 1b9 <ksys_ioperm+0xa1>
1b9: R_386_PC32 atomic64_inc_return_cx8
1bd: 89 03 mov %eax,(%ebx)
1bf: 89 53 04 mov %edx,0x4(%ebx)
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
v2: Mention specific code improvement on x86_32 target instead
of register pressure issue
---
arch/x86/kernel/ioport.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/ioport.c b/arch/x86/kernel/ioport.c
index e2fab3ceb09f..6290dd120f5e 100644
--- a/arch/x86/kernel/ioport.c
+++ b/arch/x86/kernel/ioport.c
@@ -144,7 +144,7 @@ long ksys_ioperm(unsigned long from, unsigned long num, int turn_on)
* Update the sequence number to force a TSS update on return to
* user mode.
*/
- iobm->sequence = atomic64_add_return(1, &io_bitmap_sequence);
+ iobm->sequence = atomic64_inc_return(&io_bitmap_sequence);
return 0;
}
--
2.42.0
* Re: [PATCH] x86/ioperm: Use atomic64_inc_return() in ksys_ioperm()
2024-10-24 16:20 ` Uros Bizjak
@ 2024-10-25 17:12 ` H. Peter Anvin
2024-10-25 21:01 ` Uros Bizjak
0 siblings, 1 reply; 9+ messages in thread
From: H. Peter Anvin @ 2024-10-25 17:12 UTC (permalink / raw)
To: Uros Bizjak, Dave Hansen
Cc: x86, linux-kernel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen
On October 24, 2024 9:20:01 AM PDT, Uros Bizjak <ubizjak@gmail.com> wrote:
>On Thu, Oct 24, 2024 at 5:21 PM Dave Hansen <dave.hansen@intel.com> wrote:
>>
>> On 10/7/24 01:33, Uros Bizjak wrote:
>> > Use atomic64_inc_return(&ref) instead of atomic64_add_return(1, &ref)
>> > to use optimized implementation and ease register pressure around
>> > the primitive for targets that implement optimized variant.
>>
>> Ease register pressure at the end of a syscall?
>>
>> I'll accept that we're doing this just as a matter of hygiene. But it's
>> a stretch to say there are any performance concerns whatsoever at the
>> end of the ioperm() syscall.
>>
>> So what is the real reason for this patch?
>
>Please see code dumps for i386, a target that implements atomic64_inc_return():
>
> 1a9: 8d 04 95 04 00 00 00 lea 0x4(,%edx,4),%eax
> 1b0: b9 00 00 00 00 mov $0x0,%ecx
> 1b1: R_386_32 .bss
> 1b5: 89 43 0c mov %eax,0xc(%ebx)
> 1b8: 31 d2 xor %edx,%edx
> 1ba: b8 01 00 00 00 mov $0x1,%eax
> 1bf: e8 fc ff ff ff call 1c0 <ksys_ioperm+0xa8>
> 1c0: R_386_PC32 atomic64_add_return_cx8
> 1c4: 89 03 mov %eax,(%ebx)
> 1c6: 89 53 04 mov %edx,0x4(%ebx)
>
>vs. improved:
>
> 1a9: 8d 04 95 04 00 00 00 lea 0x4(,%edx,4),%eax
> 1b0: be 00 00 00 00 mov $0x0,%esi
> 1b1: R_386_32 .bss
> 1b5: 89 43 0c mov %eax,0xc(%ebx)
> 1b8: e8 fc ff ff ff call 1b9 <ksys_ioperm+0xa1>
> 1b9: R_386_PC32 atomic64_inc_return_cx8
> 1bd: 89 03 mov %eax,(%ebx)
> 1bf: 89 53 04 mov %edx,0x4(%ebx)
>
>There is no need to initialize %eax/%edx register pair before the
>"call" to atomic64_inc_return() function. The "call" is not an ABI
>function call, but an asm volatile (which BTW lacks
>ASM_CALL_CONSTRAINT), so there is no ABI guarantees which register is
>call-preserved and which call-clobbered.
>
>Oh, this is the "return" variant - the function indeed returns the
>new value in %eax/%edx pair, so the difference is only in the
>redundant register initialization. I can reword the commit message for
>this case to mention that an initialization of register pair is spared
>before the call.
>
>Uros.
>
What does ASM_CALL_CONSTRAINT actually do *in the kernel*, *for x86*? There isn't a redzone in the kernel, and there *can't* be, because asynchronous events can clobber data below the stack pointer at any time.
With FRED that is no longer true and we could use the redzone in the kernel, but such a kernel would not be able to boot on a legacy CPU/VMM, and is only applicable for 64 bits.
This by itself is a good enough reason to be good about this, to be sure, but one of the reasons I'm asking is because of older versions of gcc where "asm goto" is incompatible with output constraints.
* Re: [PATCH] x86/ioperm: Use atomic64_inc_return() in ksys_ioperm()
2024-10-25 17:12 ` H. Peter Anvin
@ 2024-10-25 21:01 ` Uros Bizjak
2024-10-26 23:28 ` H. Peter Anvin
0 siblings, 1 reply; 9+ messages in thread
From: Uros Bizjak @ 2024-10-25 21:01 UTC (permalink / raw)
To: H. Peter Anvin
Cc: Dave Hansen, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen
On Fri, Oct 25, 2024 at 7:13 PM H. Peter Anvin <hpa@zytor.com> wrote:
>
> On October 24, 2024 9:20:01 AM PDT, Uros Bizjak <ubizjak@gmail.com> wrote:
> >On Thu, Oct 24, 2024 at 5:21 PM Dave Hansen <dave.hansen@intel.com> wrote:
> >>
> >> On 10/7/24 01:33, Uros Bizjak wrote:
> >> > Use atomic64_inc_return(&ref) instead of atomic64_add_return(1, &ref)
> >> > to use optimized implementation and ease register pressure around
> >> > the primitive for targets that implement optimized variant.
> >>
> >> Ease register pressure at the end of a syscall?
> >>
> >> I'll accept that we're doing this just as a matter of hygiene. But it's
> >> a stretch to say there are any performance concerns whatsoever at the
> >> end of the ioperm() syscall.
> >>
> >> So what is the real reason for this patch?
> >
> >Please see code dumps for i386, a target that implements atomic64_inc_return():
> >
> > 1a9: 8d 04 95 04 00 00 00 lea 0x4(,%edx,4),%eax
> > 1b0: b9 00 00 00 00 mov $0x0,%ecx
> > 1b1: R_386_32 .bss
> > 1b5: 89 43 0c mov %eax,0xc(%ebx)
> > 1b8: 31 d2 xor %edx,%edx
> > 1ba: b8 01 00 00 00 mov $0x1,%eax
> > 1bf: e8 fc ff ff ff call 1c0 <ksys_ioperm+0xa8>
> > 1c0: R_386_PC32 atomic64_add_return_cx8
> > 1c4: 89 03 mov %eax,(%ebx)
> > 1c6: 89 53 04 mov %edx,0x4(%ebx)
> >
> >vs. improved:
> >
> > 1a9: 8d 04 95 04 00 00 00 lea 0x4(,%edx,4),%eax
> > 1b0: be 00 00 00 00 mov $0x0,%esi
> > 1b1: R_386_32 .bss
> > 1b5: 89 43 0c mov %eax,0xc(%ebx)
> > 1b8: e8 fc ff ff ff call 1b9 <ksys_ioperm+0xa1>
> > 1b9: R_386_PC32 atomic64_inc_return_cx8
> > 1bd: 89 03 mov %eax,(%ebx)
> > 1bf: 89 53 04 mov %edx,0x4(%ebx)
> >
> >There is no need to initialize %eax/%edx register pair before the
> >"call" to atomic64_inc_return() function. The "call" is not an ABI
> >function call, but an asm volatile (which BTW lacks
> >ASM_CALL_CONSTRAINT), so there is no ABI guarantees which register is
> >call-preserved and which call-clobbered.
> >
> >Oh, this is the "return" variant - the function indeed returns the
> >new value in %eax/%edx pair, so the difference is only in the
> >redundant register initialization. I can reword the commit message for
> >this case to mention that an initialization of register pair is spared
> >before the call.
> >
> >Uros.
> >
>
> What does ASM_CALL_CONSTRAINT actually do *in the kernel*, *for x86*? There isn't a redzone in the kernel, and there *can't* be, because asynchronous events can clobber data below the stack pointer at any time.
The reason for ASM_CALL_CONSTRAINT is explained in arch/x86/include/asm/asm.h:
--q--
/*
* This output constraint should be used for any inline asm which has a "call"
* instruction. Otherwise the asm may be inserted before the frame pointer
* gets set up by the containing function. If you forget to do this, objtool
* may print a "call without frame pointer save/setup" warning.
*/
register unsigned long current_stack_pointer asm(_ASM_SP);
#define ASM_CALL_CONSTRAINT "+r" (current_stack_pointer)
--/q--
__alternative_atomic64() macro always uses CALL instruction and one of
alternatives in __arch_{,try_}cmpxchg64_emu() uses CALL as well, so
according to the above comment, they all qualify for
ASM_CALL_CONSTRAINT. This constraint is added to the mentioned macros
in the proposed series [1].
[1] https://lore.kernel.org/lkml/20241024180612.162045-1-ubizjak@gmail.com/
Uros.
* Re: [PATCH] x86/ioperm: Use atomic64_inc_return() in ksys_ioperm()
2024-10-25 21:01 ` Uros Bizjak
@ 2024-10-26 23:28 ` H. Peter Anvin
2024-10-26 23:38 ` H. Peter Anvin
0 siblings, 1 reply; 9+ messages in thread
From: H. Peter Anvin @ 2024-10-26 23:28 UTC (permalink / raw)
To: Uros Bizjak
Cc: Dave Hansen, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen
On 10/25/24 14:01, Uros Bizjak wrote:
>>
>> What does ASM_CALL_CONSTRAINT actually do *in the kernel*, *for x86*? There isn't a redzone in the kernel, and there *can't* be, because asynchronous events can clobber data below the stack pointer at any time.
>
> The reason for ASM_CALL_CONSTRAINT is explained in arch/x86/include/asm/asm.h:
>
> --q--
> /*
> * This output constraint should be used for any inline asm which has a "call"
> * instruction. Otherwise the asm may be inserted before the frame pointer
> * gets set up by the containing function. If you forget to do this, objtool
> * may print a "call without frame pointer save/setup" warning.
> */
> register unsigned long current_stack_pointer asm(_ASM_SP);
> #define ASM_CALL_CONSTRAINT "+r" (current_stack_pointer)
> --/q--
>
> __alternative_atomic64() macro always uses CALL instruction and one of
> alternatives in __arch_{,try_}cmpxchg64_emu() uses CALL as well, so
> according to the above comment, they all qualify for
> ASM_CALL_CONSTRAINT. This constraint is added to the mentioned macros
> in the proposed series [1].
>
> [1] https://lore.kernel.org/lkml/20241024180612.162045-1-ubizjak@gmail.com/
>
Ugh. I am not criticizing the usage here, but the construct is
treacherous, because it converts what is properly an input constraint
into an inout constraint, which wouldn't be a big deal except for the
slight fact that older compilers don't allow asm goto to have output
constraints.
By any sane definition, the constraint should actually be an input
constraint on the frame pointer itself; something like:
#define ASM_CALL_CONSTRAINT "r" (__builtin_frame_address(0))
... except that "r" really should be a %rbp constraint, but %rbp doesn't
seem to have a constraint letter. At least gcc 14.2 seems to do the
right thing anyway, though: __builtin_frame_address(0) seems to force a
frame pointer to have been created (even with -fomit-frame-pointer
specified, and in a leaf function), and the value is always passed in
%rbp (because why on Earth would it do it differently, when it is
sitting right there?)
-hpa
* Re: [PATCH] x86/ioperm: Use atomic64_inc_return() in ksys_ioperm()
2024-10-26 23:28 ` H. Peter Anvin
@ 2024-10-26 23:38 ` H. Peter Anvin
2024-10-27 1:23 ` H. Peter Anvin
0 siblings, 1 reply; 9+ messages in thread
From: H. Peter Anvin @ 2024-10-26 23:38 UTC (permalink / raw)
To: Uros Bizjak
Cc: Dave Hansen, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen
On 10/26/24 16:28, H. Peter Anvin wrote:
>
> By any sane definition, the constraint should actually be an input
> constraint on the frame pointer itself; something like:
>
> #define ASM_CALL_CONSTRAINT "r" (__builtin_frame_address(0))
>
> ... except that "r" really should be a %rbp constraint, but %rbp doesn't
> seem to have a constraint letter. At least gcc 14.2 seems to do the
> right thing anyway, though: __builtin_frame_address(0) seems to force a
> frame pointer to have been created (even with -fomit-frame-pointer
> specified, and in a leaf function), and the value is always passed in
> %rbp (because why on Earth would it do it differently, when it is
> sitting right there?)
>
This also matches the "tell the compiler [and programmer] what we
actually mean" issue that you have mentioned in other contexts.
Anyway, here is a simple test case that can be used to verify that this
construct does indeed work; at least with gcc 14.2.1 and clang 18.1.8
(the ones I ran a very quick test on).
It's simple enough that it is pretty straightforward to mess around with
various modifications. So far I haven't been able to trip up the
compilers this way.
-hpa
[-- Attachment #2: fp.c --]
[-- Type: text/x-csrc, Size: 541 bytes --]
unsigned long testit_reg(unsigned long x, unsigned long y)
{
	unsigned long z = x + y;
	unsigned long v;

	asm("# Frame pointer in %[fp]\n\tmov %[in],%[out]"
	    : [out] "=r" (v)
	    : [in] "r" (z), [fp] "r" (__builtin_frame_address(0)));

	return v;
}

unsigned long testit_buf(unsigned long x, unsigned long y)
{
	unsigned long z = x + y;
	unsigned long buffer[64];

	asm("# Frame pointer in %[fp]\n\tmov %[in],%[out]"
	    : [out] "=m" (buffer)
	    : [in] "r" (z), [fp] "r" (__builtin_frame_address(0)));

	return buffer[0];
}
* Re: [PATCH] x86/ioperm: Use atomic64_inc_return() in ksys_ioperm()
2024-10-26 23:38 ` H. Peter Anvin
@ 2024-10-27 1:23 ` H. Peter Anvin
0 siblings, 0 replies; 9+ messages in thread
From: H. Peter Anvin @ 2024-10-27 1:23 UTC (permalink / raw)
To: Uros Bizjak
Cc: Dave Hansen, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen
On 10/26/24 16:38, H. Peter Anvin wrote:
> On 10/26/24 16:28, H. Peter Anvin wrote:
>>
>> By any sane definition, the constraint should actually be an input
>> constraint on the frame pointer itself; something like:
>>
>> #define ASM_CALL_CONSTRAINT "r" (__builtin_frame_address(0))
>>
>> ... except that "r" really should be a %rbp constraint, but %rbp
>> doesn't seem to have a constraint letter. At least gcc 14.2 seems to
>> do the right thing anyway, though: __builtin_frame_address(0) seems to
>> force a frame pointer to have been created (even with -fomit-frame-
>> pointer specified, and in a leaf function), and the value is always
>> passed in %rbp (because why on Earth would it do it differently, when
>> it is sitting right there?)
>>
> This also matches the "tell the compiler [and programmer] what we
> actually mean" issue that you have mentioned in other contexts.
>
> Anyway, here is a simple test case that can be used to verify that this
> construct does indeed work; at least with gcc 14.2.1 and clang 18.1.8
> (the ones I ran a very quick test on).
>
> It's simple enough that it is pretty straightforward to mess around with
> various modifications. So far I haven't been able to trip up the
> compilers this way.
>
I filed a gcc bug report asking to clarify the documentation to
explicitly support this use case:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=117311
-hpa