* [PATCH] softmmu: Always initialize xlat in address_space_translate_for_iotlb
@ 2022-06-15 16:38 Richard Henderson
2022-06-20 12:52 ` Peter Maydell
0 siblings, 1 reply; 4+ messages in thread
From: Richard Henderson @ 2022-06-15 16:38 UTC (permalink / raw)
To: qemu-devel
The bug is an uninitialized memory read, along the translate_fail
path, which results in garbage being read from iotlb_to_section,
which can lead to a crash in io_readx/io_writex.
The bug may be fixed by writing any value with zero
in ~TARGET_PAGE_MASK, so that the call to iotlb_to_section using
the xlat'ed address returns io_mem_unassigned, as desired by the
translate_fail path.
It is most useful to record the original physical page address,
which will eventually be logged by memory_region_access_valid
when the access is rejected by unassigned_mem_accepts.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1065
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
softmmu/physmem.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 657841eed0..fb0f0709b5 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -681,6 +681,9 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
AddressSpaceDispatch *d =
qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
+ /* Record the original phys page for use by the translate_fail path. */
+ *xlat = addr;
+
for (;;) {
section = address_space_translate_internal(d, addr, &addr, plen, false);
--
2.34.1
^ permalink raw reply related [flat|nested] 4+ messages in thread
* Re: [PATCH] softmmu: Always initialize xlat in address_space_translate_for_iotlb
2022-06-15 16:38 [PATCH] softmmu: Always initialize xlat in address_space_translate_for_iotlb Richard Henderson
@ 2022-06-20 12:52 ` Peter Maydell
2022-06-20 16:53 ` Richard Henderson
0 siblings, 1 reply; 4+ messages in thread
From: Peter Maydell @ 2022-06-20 12:52 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel
On Wed, 15 Jun 2022 at 17:43, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> The bug is an uninitialized memory read, along the translate_fail
> path, which results in garbage being read from iotlb_to_section,
> which can lead to a crash in io_readx/io_writex.
>
> The bug may be fixed by writing any value with zero
> in ~TARGET_PAGE_MASK, so that the call to iotlb_to_section using
> the xlat'ed address returns io_mem_unassigned, as desired by the
> translate_fail path.
>
> It is most useful to record the original physical page address,
> which will eventually be logged by memory_region_access_valid
> when the access is rejected by unassigned_mem_accepts.
>
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1065
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> softmmu/physmem.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index 657841eed0..fb0f0709b5 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
> @@ -681,6 +681,9 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
> AddressSpaceDispatch *d =
> qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
>
> + /* Record the original phys page for use by the translate_fail path. */
> + *xlat = addr;
There's no doc comment for address_space_translate_for_iotlb(),
so there's nothing that says explicitly that addr is obliged
to be page aligned, although it happens that its only caller
does pass a page-aligned address. Were we already implicitly
requiring a page-aligned address here, or does not masking
addr before assigning to *xlat impose a new requirement ?
thanks
-- PMM
* Re: [PATCH] softmmu: Always initialize xlat in address_space_translate_for_iotlb
2022-06-20 12:52 ` Peter Maydell
@ 2022-06-20 16:53 ` Richard Henderson
2022-06-21 15:06 ` Peter Maydell
0 siblings, 1 reply; 4+ messages in thread
From: Richard Henderson @ 2022-06-20 16:53 UTC (permalink / raw)
To: Peter Maydell; +Cc: qemu-devel
On 6/20/22 05:52, Peter Maydell wrote:
> On Wed, 15 Jun 2022 at 17:43, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> The bug is an uninitialized memory read, along the translate_fail
>> path, which results in garbage being read from iotlb_to_section,
>> which can lead to a crash in io_readx/io_writex.
>>
>> The bug may be fixed by writing any value with zero
>> in ~TARGET_PAGE_MASK, so that the call to iotlb_to_section using
>> the xlat'ed address returns io_mem_unassigned, as desired by the
>> translate_fail path.
>>
>> It is most useful to record the original physical page address,
>> which will eventually be logged by memory_region_access_valid
>> when the access is rejected by unassigned_mem_accepts.
>>
>> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1065
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>> softmmu/physmem.c | 3 +++
>> 1 file changed, 3 insertions(+)
>>
>> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
>> index 657841eed0..fb0f0709b5 100644
>> --- a/softmmu/physmem.c
>> +++ b/softmmu/physmem.c
>> @@ -681,6 +681,9 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
>> AddressSpaceDispatch *d =
>> qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
>>
>> + /* Record the original phys page for use by the translate_fail path. */
>> + *xlat = addr;
>
> There's no doc comment for address_space_translate_for_iotlb(),
> so there's nothing that says explicitly that addr is obliged
> to be page aligned, although it happens that its only caller
> does pass a page-aligned address. Were we already implicitly
> requiring a page-aligned address here, or does not masking
> addr before assigning to *xlat impose a new requirement ?
I have no idea. The whole lookup process is both undocumented and twistedly complex. I'm
willing to add an extra masking operation here, if it seems necessary?
r~
* Re: [PATCH] softmmu: Always initialize xlat in address_space_translate_for_iotlb
2022-06-20 16:53 ` Richard Henderson
@ 2022-06-21 15:06 ` Peter Maydell
0 siblings, 0 replies; 4+ messages in thread
From: Peter Maydell @ 2022-06-21 15:06 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel
On Mon, 20 Jun 2022 at 17:54, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> On 6/20/22 05:52, Peter Maydell wrote:
> > On Wed, 15 Jun 2022 at 17:43, Richard Henderson
> > <richard.henderson@linaro.org> wrote:
> >>
> >> The bug is an uninitialized memory read, along the translate_fail
> >> path, which results in garbage being read from iotlb_to_section,
> >> which can lead to a crash in io_readx/io_writex.
> >>
> >> The bug may be fixed by writing any value with zero
> >> in ~TARGET_PAGE_MASK, so that the call to iotlb_to_section using
> >> the xlat'ed address returns io_mem_unassigned, as desired by the
> >> translate_fail path.
> >>
> >> It is most useful to record the original physical page address,
> >> which will eventually be logged by memory_region_access_valid
> >> when the access is rejected by unassigned_mem_accepts.
> >>
> >> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1065
> >> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> >> ---
> >> softmmu/physmem.c | 3 +++
> >> 1 file changed, 3 insertions(+)
> >>
> >> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> >> index 657841eed0..fb0f0709b5 100644
> >> --- a/softmmu/physmem.c
> >> +++ b/softmmu/physmem.c
> >> @@ -681,6 +681,9 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
> >> AddressSpaceDispatch *d =
> >> qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
> >>
> >> + /* Record the original phys page for use by the translate_fail path. */
> >> + *xlat = addr;
> >
> > There's no doc comment for address_space_translate_for_iotlb(),
> > so there's nothing that says explicitly that addr is obliged
> > to be page aligned, although it happens that its only caller
> > does pass a page-aligned address. Were we already implicitly
> > requiring a page-aligned address here, or does not masking
> > addr before assigning to *xlat impose a new requirement ?
>
> I have no idea. The whole lookup process is both undocumented and twistedly complex. I'm
> willing to add an extra masking operation here, if it seems necessary?
I think we should do one of:
* document that we assume the address is page-aligned
* assert that the address is page-aligned
* mask to force it to page-alignedness
but I don't much care which one of those we do. Maybe we should
assert((*xlat & ~TARGET_PAGE_MASK) == 0) at the translate_fail
label, with a suitable comment ?
thanks
-- PMM