* [PATCH] accel/tcg: Fix CPU specific unaligned behaviour
@ 2024-10-02 2:37 Helge Deller
2024-10-02 15:35 ` Alex Bennée
0 siblings, 1 reply; 8+ messages in thread
From: Helge Deller @ 2024-10-02 2:37 UTC (permalink / raw)
To: Richard Henderson, qemu-devel; +Cc: linux-parisc
When the emulated CPU reads or writes a memory location
a) for which no read/write permissions exist, *and*
b) the access is unaligned (non-natural alignment),
then the CPU should either
- trigger a permission fault, or
- trigger an unaligned access fault.
In the current code the alignment check happens before the memory
permission checks, so only alignment faults will be triggered.
This behaviour breaks the emulation of the PARISC architecture, where the CPU
performs the memory permission check first. The behaviour can be tested with
the testcase from the bugzilla report.
Add the necessary code to allow PARISC and possibly other architectures to
trigger a memory fault instead.
Signed-off-by: Helge Deller <deller@gmx.de>
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=219339
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 117b516739..dd1da358fb 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1684,6 +1684,26 @@ static void mmu_watch_or_dirty(CPUState *cpu, MMULookupPageData *data,
data->flags = flags;
}
+/* When accessing unreadable memory unaligned, will the CPU issue
+ * an alignment trap or a memory access trap? */
+#ifdef TARGET_HPPA
+# define CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK 1
+#else
+# define CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK 0
+#endif
+
+static void mmu_check_alignment(CPUState *cpu, vaddr addr,
+ uintptr_t ra, MMUAccessType type, MMULookupLocals *l)
+{
+ unsigned a_bits;
+
+ /* Handle CPU specific unaligned behaviour */
+ a_bits = get_alignment_bits(l->memop);
+ if (addr & ((1 << a_bits) - 1)) {
+ cpu_unaligned_access(cpu, addr, type, l->mmu_idx, ra);
+ }
+}
+
/**
* mmu_lookup: translate page(s)
* @cpu: generic cpu state
@@ -1699,7 +1719,6 @@ static void mmu_watch_or_dirty(CPUState *cpu, MMULookupPageData *data,
static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
uintptr_t ra, MMUAccessType type, MMULookupLocals *l)
{
- unsigned a_bits;
bool crosspage;
int flags;
@@ -1708,10 +1727,8 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
tcg_debug_assert(l->mmu_idx < NB_MMU_MODES);
- /* Handle CPU specific unaligned behaviour */
- a_bits = get_alignment_bits(l->memop);
- if (addr & ((1 << a_bits) - 1)) {
- cpu_unaligned_access(cpu, addr, type, l->mmu_idx, ra);
+ if (!CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK) {
+ mmu_check_alignment(cpu, addr, ra, type, l);
}
l->page[0].addr = addr;
@@ -1760,6 +1777,10 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
tcg_debug_assert((flags & TLB_BSWAP) == 0);
}
+ if (CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK) {
+ mmu_check_alignment(cpu, addr, ra, type, l);
+ }
+
/*
* This alignment check differs from the one above, in that this is
* based on the atomicity of the operation. The intended use case is
^ permalink raw reply related [flat|nested] 8+ messages in thread
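The ordering change the patch makes can be illustrated outside QEMU with a small model (illustrative sketch only, not QEMU code; all names here are invented for the example):

```c
#include <assert.h>
#include <stdbool.h>

/* Which fault does an access raise? Models the two orderings:
 * alignment-check-first (pre-patch) vs. permission-check-first (hppa). */
enum fault { FAULT_NONE, FAULT_ALIGN, FAULT_PERM };

static enum fault access_check(unsigned long addr, unsigned a_bits,
                               bool readable, bool align_after_perm)
{
    bool misaligned = addr & ((1UL << a_bits) - 1);

    if (!align_after_perm && misaligned) {
        return FAULT_ALIGN;   /* pre-patch: alignment fault always wins */
    }
    if (!readable) {
        return FAULT_PERM;    /* hppa: permission fault takes priority */
    }
    if (misaligned) {
        return FAULT_ALIGN;   /* readable but unaligned: alignment fault */
    }
    return FAULT_NONE;
}
```

For an unaligned access to an unreadable page (say addr 0x1003 with 4-byte alignment), the pre-patch ordering yields FAULT_ALIGN while the permission-first ordering yields FAULT_PERM, which is the behaviour the patch aims for on hppa.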
* Re: [PATCH] accel/tcg: Fix CPU specific unaligned behaviour
2024-10-02 2:37 [PATCH] accel/tcg: Fix CPU specific unaligned behaviour Helge Deller
@ 2024-10-02 15:35 ` Alex Bennée
2024-10-02 15:47 ` Peter Maydell
2024-10-04 14:24 ` Richard Henderson
0 siblings, 2 replies; 8+ messages in thread
From: Alex Bennée @ 2024-10-02 15:35 UTC (permalink / raw)
To: Helge Deller; +Cc: Richard Henderson, qemu-devel, linux-parisc
Helge Deller <deller@kernel.org> writes:
> When the emulated CPU reads or writes to a memory location
> a) for which no read/write permissions exists, *and*
> b) the access happens unaligned (non-natural alignment),
> then the CPU should either
> - trigger a permission fault, or
> - trigger an unalign access fault.
>
> In the current code the alignment check happens before the memory
> permission checks, so only unalignment faults will be triggered.
>
> This behaviour breaks the emulation of the PARISC architecture, where the CPU
> does a memory verification first. The behaviour can be tested with the testcase
> from the bugzilla report.
>
> Add the necessary code to allow PARISC and possibly other architectures to
> trigger a memory fault instead.
>
> Signed-off-by: Helge Deller <deller@gmx.de>
> Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=219339
>
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 117b516739..dd1da358fb 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1684,6 +1684,26 @@ static void mmu_watch_or_dirty(CPUState *cpu, MMULookupPageData *data,
> data->flags = flags;
> }
>
> +/* when accessing unreadable memory unaligned, will the CPU issue
> + * a alignment trap or a memory access trap ? */
> +#ifdef TARGET_HPPA
> +# define CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK 1
> +#else
> +# define CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK 0
> +#endif
I'm pretty certain we don't want to be introducing per-guest hacks into
the core cputlb.c code when we are aiming to make it a compile once
object.
I guess the real question is where could we put this flag? My gut says
we should expand the MO_ALIGN bits in MemOp to express the precedence or
not of the alignment check in relation to permissions.
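A rough sketch of that suggestion (MO_ALIGN_AFTER_PERM is a name invented here, and the bit positions are placeholders, not QEMU's actual MemOp layout):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch only: MO_ALIGN exists in QEMU's MemOp but its value here is a
 * placeholder; MO_ALIGN_AFTER_PERM is a hypothetical bit meaning
 * "permission faults take priority over alignment faults". */
typedef unsigned MemOp;

enum {
    MO_ALIGN            = 1u << 4,   /* require natural alignment */
    MO_ALIGN_AFTER_PERM = 1u << 5,   /* hypothetical precedence bit */
};

/* mmu_lookup() would consult the MemOp instead of a per-target #ifdef:
 * run the early alignment check only when the translator has not
 * requested permission-first ordering. */
static bool do_early_align_check(MemOp op)
{
    return !(op & MO_ALIGN_AFTER_PERM);
}
```

The hppa frontend would then set the bit when emitting its loads and stores, and cputlb.c would stay target-independent.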
> +
> +static void mmu_check_alignment(CPUState *cpu, vaddr addr,
> + uintptr_t ra, MMUAccessType type, MMULookupLocals *l)
> +{
> + unsigned a_bits;
> +
> + /* Handle CPU specific unaligned behaviour */
> + a_bits = get_alignment_bits(l->memop);
> + if (addr & ((1 << a_bits) - 1)) {
> + cpu_unaligned_access(cpu, addr, type, l->mmu_idx, ra);
> + }
> +}
> +
> /**
> * mmu_lookup: translate page(s)
> * @cpu: generic cpu state
> @@ -1699,7 +1719,6 @@ static void mmu_watch_or_dirty(CPUState *cpu, MMULookupPageData *data,
> static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
> uintptr_t ra, MMUAccessType type, MMULookupLocals *l)
> {
> - unsigned a_bits;
> bool crosspage;
> int flags;
>
> @@ -1708,10 +1727,8 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
>
> tcg_debug_assert(l->mmu_idx < NB_MMU_MODES);
>
> - /* Handle CPU specific unaligned behaviour */
> - a_bits = get_alignment_bits(l->memop);
> - if (addr & ((1 << a_bits) - 1)) {
> - cpu_unaligned_access(cpu, addr, type, l->mmu_idx, ra);
> + if (!CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK) {
Then this would be something like:
if (!(memop & MO_ALIGN_PP)) or something
> + mmu_check_alignment(cpu, addr, ra, type, l);
> }
>
> l->page[0].addr = addr;
> @@ -1760,6 +1777,10 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
> tcg_debug_assert((flags & TLB_BSWAP) == 0);
> }
>
> + if (CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK) {
> + mmu_check_alignment(cpu, addr, ra, type, l);
> + }
> +
> /*
> * This alignment check differs from the one above, in that this is
> * based on the atomicity of the operation. The intended use case is
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
* Re: [PATCH] accel/tcg: Fix CPU specific unaligned behaviour
2024-10-02 15:35 ` Alex Bennée
@ 2024-10-02 15:47 ` Peter Maydell
2024-10-02 17:25 ` Alex Bennée
` (2 more replies)
2024-10-04 14:24 ` Richard Henderson
1 sibling, 3 replies; 8+ messages in thread
From: Peter Maydell @ 2024-10-02 15:47 UTC (permalink / raw)
To: Alex Bennée
Cc: Helge Deller, Richard Henderson, qemu-devel, linux-parisc
On Wed, 2 Oct 2024 at 16:35, Alex Bennée <alex.bennee@linaro.org> wrote:
>
> Helge Deller <deller@kernel.org> writes:
>
> > When the emulated CPU reads or writes to a memory location
> > a) for which no read/write permissions exists, *and*
> > b) the access happens unaligned (non-natural alignment),
> > then the CPU should either
> > - trigger a permission fault, or
> > - trigger an unalign access fault.
> >
> > In the current code the alignment check happens before the memory
> > permission checks, so only unalignment faults will be triggered.
> >
> > This behaviour breaks the emulation of the PARISC architecture, where the CPU
> > does a memory verification first. The behaviour can be tested with the testcase
> > from the bugzilla report.
> >
> > Add the necessary code to allow PARISC and possibly other architectures to
> > trigger a memory fault instead.
> >
> > Signed-off-by: Helge Deller <deller@gmx.de>
> > Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=219339
> >
> >
> > diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> > index 117b516739..dd1da358fb 100644
> > --- a/accel/tcg/cputlb.c
> > +++ b/accel/tcg/cputlb.c
> > @@ -1684,6 +1684,26 @@ static void mmu_watch_or_dirty(CPUState *cpu, MMULookupPageData *data,
> > data->flags = flags;
> > }
> >
> > +/* when accessing unreadable memory unaligned, will the CPU issue
> > + * a alignment trap or a memory access trap ? */
> > +#ifdef TARGET_HPPA
> > +# define CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK 1
> > +#else
> > +# define CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK 0
> > +#endif
>
> I'm pretty certain we don't want to be introducing per-guest hacks into
> the core cputlb.c code when we are aiming to make it a compile once
> object.
There's also something curious going on here -- this patch
says "we check alignment before permissions, and that's wrong
on PARISC". But there's a comment in target/arm/ptw.c that
says "we check permissions before alignment, and that's
wrong on Arm":
* Enable alignment checks on Device memory.
*
* Per R_XCHFJ, this check is mis-ordered. The correct ordering
* for alignment, permission, and stage 2 faults should be:
* - Alignment fault caused by the memory type
* - Permission fault
* - A stage 2 fault on the memory access
* but due to the way the TCG softmmu TLB operates, we will have
* implicitly done the permission check and the stage2 lookup in
* finding the TLB entry, so the alignment check cannot be done sooner.
So do we check alignment first, or permissions first, or does
the order vary depending on what we're doing?
-- PMM
* Re: [PATCH] accel/tcg: Fix CPU specific unaligned behaviour
2024-10-02 15:47 ` Peter Maydell
@ 2024-10-02 17:25 ` Alex Bennée
2024-10-02 19:38 ` Helge Deller
2024-10-03 23:08 ` Richard Henderson
2 siblings, 0 replies; 8+ messages in thread
From: Alex Bennée @ 2024-10-02 17:25 UTC (permalink / raw)
To: Peter Maydell; +Cc: Helge Deller, Richard Henderson, qemu-devel, linux-parisc
Peter Maydell <peter.maydell@linaro.org> writes:
> On Wed, 2 Oct 2024 at 16:35, Alex Bennée <alex.bennee@linaro.org> wrote:
>>
>> Helge Deller <deller@kernel.org> writes:
>>
>> > When the emulated CPU reads or writes to a memory location
>> > a) for which no read/write permissions exists, *and*
>> > b) the access happens unaligned (non-natural alignment),
>> > then the CPU should either
>> > - trigger a permission fault, or
>> > - trigger an unalign access fault.
>> >
>> > In the current code the alignment check happens before the memory
>> > permission checks, so only unalignment faults will be triggered.
>> >
>> > This behaviour breaks the emulation of the PARISC architecture, where the CPU
>> > does a memory verification first. The behaviour can be tested with the testcase
>> > from the bugzilla report.
>> >
>> > Add the necessary code to allow PARISC and possibly other architectures to
>> > trigger a memory fault instead.
>> >
>> > Signed-off-by: Helge Deller <deller@gmx.de>
>> > Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=219339
>> >
>> >
>> > diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
>> > index 117b516739..dd1da358fb 100644
>> > --- a/accel/tcg/cputlb.c
>> > +++ b/accel/tcg/cputlb.c
>> > @@ -1684,6 +1684,26 @@ static void mmu_watch_or_dirty(CPUState *cpu, MMULookupPageData *data,
>> > data->flags = flags;
>> > }
>> >
>> > +/* when accessing unreadable memory unaligned, will the CPU issue
>> > + * a alignment trap or a memory access trap ? */
>> > +#ifdef TARGET_HPPA
>> > +# define CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK 1
>> > +#else
>> > +# define CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK 0
>> > +#endif
>>
>> I'm pretty certain we don't want to be introducing per-guest hacks into
>> the core cputlb.c code when we are aiming to make it a compile once
>> object.
>
> There's also something curious going on here -- this patch
> says "we check alignment before permissions, and that's wrong
> on PARISC". But there's a comment in target/arm/ptw.c that
> says "we check permissions before alignment, and that's
> wrong on Arm":
>
> * Enable alignment checks on Device memory.
> *
> * Per R_XCHFJ, this check is mis-ordered. The correct ordering
> * for alignment, permission, and stage 2 faults should be:
> * - Alignment fault caused by the memory type
> * - Permission fault
> * - A stage 2 fault on the memory access
> * but due to the way the TCG softmmu TLB operates, we will have
> * implicitly done the permission check and the stage2 lookup in
> * finding the TLB entry, so the alignment check cannot be done sooner.
>
> So do we check alignment first, or permissions first, or does
> the order vary depending on what we're doing?
If it varies by architecture and operation that is even more reason to
encode the wanted behaviour in the MemOp.
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
* Re: [PATCH] accel/tcg: Fix CPU specific unaligned behaviour
2024-10-02 15:47 ` Peter Maydell
2024-10-02 17:25 ` Alex Bennée
@ 2024-10-02 19:38 ` Helge Deller
2024-10-03 23:08 ` Richard Henderson
2 siblings, 0 replies; 8+ messages in thread
From: Helge Deller @ 2024-10-02 19:38 UTC (permalink / raw)
To: Peter Maydell, Alex Bennée
Cc: Helge Deller, Richard Henderson, qemu-devel, linux-parisc
On 10/2/24 17:47, Peter Maydell wrote:
> On Wed, 2 Oct 2024 at 16:35, Alex Bennée <alex.bennee@linaro.org> wrote:
>>
>> Helge Deller <deller@kernel.org> writes:
>>
>>> When the emulated CPU reads or writes to a memory location
>>> a) for which no read/write permissions exists, *and*
>>> b) the access happens unaligned (non-natural alignment),
>>> then the CPU should either
>>> - trigger a permission fault, or
>>> - trigger an unalign access fault.
>>>
>>> In the current code the alignment check happens before the memory
>>> permission checks, so only unalignment faults will be triggered.
>>>
>>> This behaviour breaks the emulation of the PARISC architecture, where the CPU
>>> does a memory verification first. The behaviour can be tested with the testcase
>>> from the bugzilla report.
>>>
>>> Add the necessary code to allow PARISC and possibly other architectures to
>>> trigger a memory fault instead.
>>>
>>> Signed-off-by: Helge Deller <deller@gmx.de>
>>> Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=219339
>>>
>>>
>>> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
>>> index 117b516739..dd1da358fb 100644
>>> --- a/accel/tcg/cputlb.c
>>> +++ b/accel/tcg/cputlb.c
>>> @@ -1684,6 +1684,26 @@ static void mmu_watch_or_dirty(CPUState *cpu, MMULookupPageData *data,
>>> data->flags = flags;
>>> }
>>>
>>> +/* when accessing unreadable memory unaligned, will the CPU issue
>>> + * a alignment trap or a memory access trap ? */
>>> +#ifdef TARGET_HPPA
>>> +# define CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK 1
>>> +#else
>>> +# define CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK 0
>>> +#endif
>>
>> I'm pretty certain we don't want to be introducing per-guest hacks into
>> the core cputlb.c code when we are aiming to make it a compile once
>> object.
Ah, I didn't know.
I'm fine with either implementation people agree on and
which gets accepted in the end.
> There's also something curious going on here -- this patch
> says "we check alignment before permissions, and that's wrong
> on PARISC".
Yes, that's correct.
The first hunk of code in mmu_lookup() which I remove is the first
alignment check.
Then the memory access permissions are checked.
> But there's a comment in target/arm/ptw.c that
> says "we check permissions before alignment, and that's
> wrong on Arm":
>
> * Enable alignment checks on Device memory.
> *
> * Per R_XCHFJ, this check is mis-ordered. The correct ordering
> * for alignment, permission, and stage 2 faults should be:
> * - Alignment fault caused by the memory type
> * - Permission fault
> * - A stage 2 fault on the memory access
> * but due to the way the TCG softmmu TLB operates, we will have
> * implicitly done the permission check and the stage2 lookup in
> * finding the TLB entry, so the alignment check cannot be done sooner.
I'm no arm expert. Note there is a second ARM-related alignment check
in that function; see TLB_CHECK_ALIGNED.
> So do we check alignment first, or permissions first, or does
> the order vary depending on what we're doing?
Currently, alignment first.
Helge
* Re: [PATCH] accel/tcg: Fix CPU specific unaligned behaviour
2024-10-02 15:47 ` Peter Maydell
2024-10-02 17:25 ` Alex Bennée
2024-10-02 19:38 ` Helge Deller
@ 2024-10-03 23:08 ` Richard Henderson
2 siblings, 0 replies; 8+ messages in thread
From: Richard Henderson @ 2024-10-03 23:08 UTC (permalink / raw)
To: Peter Maydell, Alex Bennée; +Cc: Helge Deller, qemu-devel, linux-parisc
On 10/2/24 08:47, Peter Maydell wrote:
> There's also something curious going on here -- this patch
> says "we check alignment before permissions, and that's wrong
> on PARISC". But there's a comment in target/arm/ptw.c that
> says "we check permissions before alignment, and that's
> wrong on Arm":
>
> * Enable alignment checks on Device memory.
> *
> * Per R_XCHFJ, this check is mis-ordered. The correct ordering
> * for alignment, permission, and stage 2 faults should be:
> * - Alignment fault caused by the memory type
> * - Permission fault
> * - A stage 2 fault on the memory access
> * but due to the way the TCG softmmu TLB operates, we will have
> * implicitly done the permission check and the stage2 lookup in
> * finding the TLB entry, so the alignment check cannot be done sooner.
>
> So do we check alignment first, or permissions first, or does
> the order vary depending on what we're doing?
There are two different alignment fault checks. The one for 'alignment fault caused by
memory type' is later, after we verify that the TLB entry is for the correct page, which
implicitly tests r/w permissions.
r~
* Re: [PATCH] accel/tcg: Fix CPU specific unaligned behaviour
2024-10-02 15:35 ` Alex Bennée
2024-10-02 15:47 ` Peter Maydell
@ 2024-10-04 14:24 ` Richard Henderson
2024-10-05 16:55 ` Richard Henderson
1 sibling, 1 reply; 8+ messages in thread
From: Richard Henderson @ 2024-10-04 14:24 UTC (permalink / raw)
To: Alex Bennée, Helge Deller; +Cc: qemu-devel, linux-parisc
On 10/2/24 08:35, Alex Bennée wrote:
> Helge Deller <deller@kernel.org> writes:
>
>> When the emulated CPU reads or writes to a memory location
>> a) for which no read/write permissions exists, *and*
>> b) the access happens unaligned (non-natural alignment),
>> then the CPU should either
>> - trigger a permission fault, or
>> - trigger an unalign access fault.
>>
>> In the current code the alignment check happens before the memory
>> permission checks, so only unalignment faults will be triggered.
>>
>> This behaviour breaks the emulation of the PARISC architecture, where the CPU
>> does a memory verification first. The behaviour can be tested with the testcase
>> from the bugzilla report.
>>
>> Add the necessary code to allow PARISC and possibly other architectures to
>> trigger a memory fault instead.
>>
>> Signed-off-by: Helge Deller <deller@gmx.de>
>> Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=219339
>>
>>
>> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
>> index 117b516739..dd1da358fb 100644
>> --- a/accel/tcg/cputlb.c
>> +++ b/accel/tcg/cputlb.c
>> @@ -1684,6 +1684,26 @@ static void mmu_watch_or_dirty(CPUState *cpu, MMULookupPageData *data,
>> data->flags = flags;
>> }
>>
>> +/* when accessing unreadable memory unaligned, will the CPU issue
>> + * a alignment trap or a memory access trap ? */
>> +#ifdef TARGET_HPPA
>> +# define CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK 1
>> +#else
>> +# define CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK 0
>> +#endif
>
> I'm pretty certain we don't want to be introducing per-guest hacks into
> the core cputlb.c code when we are aiming to make it a compile once
> object.
Correct.
> I guess the real question is where could we put this flag? My gut says
> we should expand the MO_ALIGN bits in MemOp to express the precedence or
> not of the alignment check in relation to permissions.
I suppose that could work.
I was hoping for a reorg of the target hooks that could allow the target to see
misalignment and permission check simultaneously, then the target chooses the order in
which the two faults are presented. Given how complicated tlb_fill is though, I don't see
that being an easy job.
I'll play around with something and report back.
r~
* Re: [PATCH] accel/tcg: Fix CPU specific unaligned behaviour
2024-10-04 14:24 ` Richard Henderson
@ 2024-10-05 16:55 ` Richard Henderson
0 siblings, 0 replies; 8+ messages in thread
From: Richard Henderson @ 2024-10-05 16:55 UTC (permalink / raw)
To: Alex Bennée, Helge Deller; +Cc: qemu-devel, linux-parisc
On 10/4/24 07:24, Richard Henderson wrote:
> I was hoping for a reorg of the target hooks that could allow the target to see
> misalignment and permission check simultaneously, then the target chooses the order in
> which the two faults are presented. Given how complicated tlb_fill is though, I don't see
> that being an easy job.
>
> I'll play around with something and report back.
Careful review of exception priorities shows that this cannot be done without expanding
tlb_fill, because of how the alignment fault must be prioritized in between other faults.
r~