* Page alignment & memory regions expectations
@ 2022-08-24 12:43 Marc-André Lureau
  2022-08-24 16:42 ` David Hildenbrand
  0 siblings, 1 reply; 7+ messages in thread
From: Marc-André Lureau @ 2022-08-24 12:43 UTC (permalink / raw)
  To: QEMU; +Cc: Paolo Bonzini, Stefan Berger, David Hildenbrand, qiaonuohan


Hi,

tpm-crb creates a "tpm-crb-cmd" RAM memory region that is not page aligned.
Apparently, this is not a problem for QEMU in general. However, it crashes
kdump'ing in dump.c:get_next_page, as it expects GuestPhysBlock to be
page-aligned. (see also bug
https://bugzilla.redhat.com/show_bug.cgi?id=2120480)
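
For context, the mapping looks roughly like this (a paraphrase of
hw/tpm/tpm_crb.c; the constant names here are illustrative, the values are
the ones visible in the log below):

/* paraphrased sketch of the tpm-crb realize code -- names/values illustrative */
#define TPM_CRB_ADDR_BASE   0xfed40000
#define CRB_REGS_SIZE       0x80                      /* CRB register file  */
#define CRB_CTRL_CMD_SIZE   (0x1000 - CRB_REGS_SIZE)  /* 0xf80-byte buffer  */

memory_region_init_io(&s->mmio, OBJECT(s), &tpm_crb_memory_ops, s,
                      "tpm-crb-mmio", CRB_REGS_SIZE);
memory_region_init_ram(&s->cmdmem, OBJECT(s),
                       "tpm-crb-cmd", CRB_CTRL_CMD_SIZE, errp);

memory_region_add_subregion(get_system_memory(),
                            TPM_CRB_ADDR_BASE, &s->mmio);
/* the RAM command buffer therefore starts at 0xfed40080, 0x80 bytes into a
 * page, and is 0xf80 bytes long -- neither start nor size is page-aligned */
memory_region_add_subregion(get_system_memory(),
                            TPM_CRB_ADDR_BASE + CRB_REGS_SIZE, &s->cmdmem);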

Here is some relevant DEBUG_GUEST_PHYS_REGION_ADD log:
guest_phys_block_add_section: target_start=00000000fd000000
target_end=00000000fe000000: added (count: 3)
guest_phys_block_add_section: target_start=00000000fed40080
target_end=00000000fed41000: added (count: 4)
guest_phys_block_add_section: target_start=00000000fffc0000
target_end=0000000100000000: added (count: 5)
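
In other words, with a 4 KiB target page size only the middle block is the
problem:

0xfed40080 & 0xfff = 0x080   -> start of the tpm-crb-cmd block, not page-aligned
0xfed41000 & 0xfff = 0x000   -> its end is aligned
(the fd000000.. and fffc0000.. blocks are page-aligned at both ends)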

I am looking for ideas on how to solve this crash.

Should QEMU enforce that memory regions are target page-aligned? In that
case, the TPM CRB MMIO region would overlap with the RAM region, and I
wonder how that would turn out, and whether other devices would be
impacted, etc.

Or should kdump learn to handle non-aligned blocks somehow? I think that
option would be a reasonable solution, as long as we only have
empty/zero-memory "gaps". Handling other cases of joint or overlapping
regions seems more difficult.
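
To illustrate what I have in mind, a rough standalone sketch (not dump.c
code; the emit() callback stands in for however the dump format writes out
a page):

/* Sketch only: expand [target_start, target_end) to page boundaries and
 * zero-fill whatever falls outside the real GuestPhysBlock. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

static void emit_block_page_aligned(uint64_t target_start, uint64_t target_end,
                                    const uint8_t *host_buf, uint64_t page_size,
                                    void (*emit)(const void *buf, size_t len))
{
    uint64_t start = target_start & ~(page_size - 1);                /* round down */
    uint64_t end = (target_end + page_size - 1) & ~(page_size - 1);  /* round up */
    uint8_t *page = calloc(1, page_size);

    for (uint64_t addr = start; addr < end; addr += page_size) {
        uint64_t lo = addr > target_start ? addr : target_start;
        uint64_t hi = addr + page_size < target_end ? addr + page_size : target_end;

        memset(page, 0, page_size);                 /* the "gap" stays zero */
        if (lo < hi) {
            /* copy only the part of the page the guest block really backs */
            memcpy(page + (lo - addr), host_buf + (lo - target_start), hi - lo);
        }
        emit(page, page_size);                      /* always a whole page */
    }
    free(page);
}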

thanks

-- 
Marc-André Lureau



* Re: Page alignment & memory regions expectations
  2022-08-24 12:43 Page alignment & memory regions expectations Marc-André Lureau
@ 2022-08-24 16:42 ` David Hildenbrand
  2022-08-24 19:55   ` Peter Maydell
  0 siblings, 1 reply; 7+ messages in thread
From: David Hildenbrand @ 2022-08-24 16:42 UTC (permalink / raw)
  To: Marc-André Lureau, QEMU; +Cc: Paolo Bonzini, Stefan Berger, qiaonuohan

On 24.08.22 14:43, Marc-André Lureau wrote:
> Hi,

Hi!

> 
> tpm-crb creates a "tpm-crb-cmd" RAM memory region that is not page
> aligned. Apparently, this is not a problem for QEMU in general. However,
> it crashes kdump'ing in dump.c:get_next_page, as it expects

I assume you mean "dumping in kdump format".

> GuestPhysBlock to be page-aligned. (see also bug
> https://bugzilla.redhat.com/show_bug.cgi?id=2120480
> <https://bugzilla.redhat.com/show_bug.cgi?id=2120480>)
> 
> Here is some relevant DEBUG_GUEST_PHYS_REGION_ADD log:
> guest_phys_block_add_section: target_start=00000000fd000000
> target_end=00000000fe000000: added (count: 3)
> guest_phys_block_add_section: target_start=00000000fed40080
> target_end=00000000fed41000: added (count: 4)
> guest_phys_block_add_section: target_start=00000000fffc0000
> target_end=0000000100000000: added (count: 5)
> 
> I am looking for ideas on how to solve this crash.

Do we care if we don't include everything in the dump? I recall that
e.g., vfio will simply align and not care about such partial RAM blocks.


One idea is doing another pass over the list at the end (after possible
merging of sections) and making sure everything is page-aligned.

Another idea is specifying somehow that that memory region should simply
not be dumped ...


But I do wonder why the ram memory region that's mapped into the guest
physical address space has such a weird alignment/size ...

> 
> Should QEMU enforce that memory regions are target page-aligned? In

... can we simply fixup tpm-crb-cmd?

> that case, the TPM CRB MMIO region would overlap with the RAM region, and I
> wonder how that would turn out, and whether other devices would be impacted, etc.
> 
> Or should kdump learn to handle non-aligned blocks somehow? I think that
> option would be a reasonable solution, as long as we only have
> empty/zero-memory "gaps". Handling other cases of joint or overlapping
> regions seems more difficult.

Right, you'd actually have to pad the remainder with zeroes.


-- 
Thanks,

David / dhildenb




* Re: Page alignment & memory regions expectations
  2022-08-24 16:42 ` David Hildenbrand
@ 2022-08-24 19:55   ` Peter Maydell
  2022-08-25  7:26     ` David Hildenbrand
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Maydell @ 2022-08-24 19:55 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Marc-André Lureau, QEMU, Paolo Bonzini, Stefan Berger,
	qiaonuohan

On Wed, 24 Aug 2022 at 17:43, David Hildenbrand <david@redhat.com> wrote:
> One idea is doing another pass over the list at the end (after possible
> merging of sections) and making sure everything is page-aligned.
>
> Another idea is specifying somehow that that memory region should simply
> not be dumped ...
>
>
> But I do wonder why the ram memory region that's mapped into the guest
> physical address space has such a weird alignment/size ...

Lumps of memory can be any size you like and anywhere in
memory you like. Sometimes we are modelling real hardware
that has done something like that. Sometimes it's just
a convenient way to model a device. Generic code in
QEMU does need to cope with this...

-- PMM



* Re: Page alignment & memory regions expectations
  2022-08-24 19:55   ` Peter Maydell
@ 2022-08-25  7:26     ` David Hildenbrand
  2022-08-25 11:47       ` Peter Maydell
  0 siblings, 1 reply; 7+ messages in thread
From: David Hildenbrand @ 2022-08-25  7:26 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Marc-André Lureau, QEMU, Paolo Bonzini, Stefan Berger,
	qiaonuohan

On 24.08.22 21:55, Peter Maydell wrote:
> On Wed, 24 Aug 2022 at 17:43, David Hildenbrand <david@redhat.com> wrote:
>> One idea is doing another pass over the list at the end (after possible
>> merging of sections) and making sure everything is page-aligned.
>>
>> Another idea is specifying somehow that that memory region should simply
>> not be dumped ...
>>
>>
>> But I do wonder why the ram memory region that's mapped into the guest
>> physical address space has such a weird alignment/size ...
> 
> Lumps of memory can be any size you like and anywhere in
> memory you like. Sometimes we are modelling real hardware
> that has done something like that. Sometimes it's just
> a convenient way to model a device. Generic code in
> QEMU does need to cope with this...

But we are talking about system RAM here. And judging by the fact that
this is the first time dump.c blows up like this, this doesn't seem to
be very common, no?


-- 
Thanks,

David / dhildenb




* Re: Page alignment & memory regions expectations
  2022-08-25  7:26     ` David Hildenbrand
@ 2022-08-25 11:47       ` Peter Maydell
  2022-08-25 11:57         ` David Hildenbrand
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Maydell @ 2022-08-25 11:47 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Marc-André Lureau, QEMU, Paolo Bonzini, Stefan Berger,
	qiaonuohan

On Thu, 25 Aug 2022 at 08:27, David Hildenbrand <david@redhat.com> wrote:
> On 24.08.22 21:55, Peter Maydell wrote:
> > Lumps of memory can be any size you like and anywhere in
> > memory you like. Sometimes we are modelling real hardware
> > that has done something like that. Sometimes it's just
> > a convenient way to model a device. Generic code in
> > QEMU does need to cope with this...
>
> But we are talking about system RAM here. And judging by the fact that
> this is the first time dump.c blows up like this, this doesn't seem to
> be very common, no?

What's your definition of "system RAM", though? The biggest
bit of RAM in the system? Anything over X bytes? Whatever
the machine set up as MachineState::ram ? As currently
written, dump.c is operating on every RAM MemoryRegion
in the system, which includes a lot of things which aren't
"system RAM" (for instance, it includes framebuffers and
ROMs).

-- PMM



* Re: Page alignment & memory regions expectations
  2022-08-25 11:47       ` Peter Maydell
@ 2022-08-25 11:57         ` David Hildenbrand
  2022-08-25 12:18           ` Peter Maydell
  0 siblings, 1 reply; 7+ messages in thread
From: David Hildenbrand @ 2022-08-25 11:57 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Marc-André Lureau, QEMU, Paolo Bonzini, Stefan Berger,
	qiaonuohan

On 25.08.22 13:47, Peter Maydell wrote:
> On Thu, 25 Aug 2022 at 08:27, David Hildenbrand <david@redhat.com> wrote:
>> On 24.08.22 21:55, Peter Maydell wrote:
>>> Lumps of memory can be any size you like and anywhere in
>>> memory you like. Sometimes we are modelling real hardware
>>> that has done something like that. Sometimes it's just
>>> a convenient way to model a device. Generic code in
>>> QEMU does need to cope with this...
>>
>> But we are talking about system RAM here. And judging by the fact that
>> this is the first time dump.c blows up like this, this doesn't seem to
>> be very common, no?
> 
> What's your definition of "system RAM", though? The biggest

I'd say any RAM memory region that lives in address_space_memory /
get_system_memory(). That's what softmmu/memory_mapping.c cares about
and where we bail out here.
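
Roughly what happens there, paraphrased from memory (the exact filter
conditions in softmmu/memory_mapping.c may differ):

static void guest_phys_blocks_region_add(MemoryListener *listener,
                                         MemoryRegionSection *section)
{
    GuestPhysListener *g = container_of(listener, GuestPhysListener, listener);

    /* only plain RAM mapped into the system address space is collected */
    if (!memory_region_is_ram(section->mr) ||
        memory_region_is_ram_device(section->mr)) {
        return;
    }
    guest_phys_block_add_section(g, section);
}

/* guest_phys_blocks_append() registers this listener on
 * &address_space_memory, so every RAM section in the flat view becomes a
 * GuestPhysBlock -- including small, oddly placed ones like tpm-crb-cmd. */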


> bit of RAM in the system? Anything over X bytes? Whatever
> the machine set up as MachineState::ram ? As currently
> written, dump.c is operating on every RAM MemoryRegion
> in the system, which includes a lot of things which aren't
> "system RAM" (for instance, it includes framebuffers and
> ROMs).

Anything in address_space_memory / get_system_memory(), correct. And
this seems to be the first time that we fail here, so it's either a case
we should be handling in dump code (as you indicate) or some case we
shouldn't have to worry about (as I questioned).

> 
> -- PMM
> 


-- 
Thanks,

David / dhildenb




* Re: Page alignment & memory regions expectations
  2022-08-25 11:57         ` David Hildenbrand
@ 2022-08-25 12:18           ` Peter Maydell
  0 siblings, 0 replies; 7+ messages in thread
From: Peter Maydell @ 2022-08-25 12:18 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Marc-André Lureau, QEMU, Paolo Bonzini, Stefan Berger,
	qiaonuohan

On Thu, 25 Aug 2022 at 12:57, David Hildenbrand <david@redhat.com> wrote:
>
> On 25.08.22 13:47, Peter Maydell wrote:
> > On Thu, 25 Aug 2022 at 08:27, David Hildenbrand <david@redhat.com> wrote:
> >> On 24.08.22 21:55, Peter Maydell wrote:
> >>> Lumps of memory can be any size you like and anywhere in
> >>> memory you like. Sometimes we are modelling real hardware
> >>> that has done something like that. Sometimes it's just
> >>> a convenient way to model a device. Generic code in
> >>> QEMU does need to cope with this...
> >>
> >> But we are talking about system RAM here. And judging by the fact that
> >> this is the first time dump.c blows up like this, this doesn't seem to
> >> be very common, no?
> >
> > What's your definition of "system RAM", though? The biggest
>
> I'd say any RAM memory region that lives in address_space_memory /
> get_system_memory(). That's what softmmu/memory_mapping.c cares about
> and where we bail out here.
>
>
> > bit of RAM in the system? Anything over X bytes? Whatever
> > the machine set up as MachineState::ram ? As currently
> > written, dump.c is operating on every RAM MemoryRegion
> > in the system, which includes a lot of things which aren't
> > "system RAM" (for instance, it includes framebuffers and
> > ROMs).
>
> Anything in address_space_memory / get_system_memory(), correct. And
> this seems to be the first time that we fail here, so it's either a case
> we should be handling in dump code (as you indicate) or some case we
> shouldn't have to worry about (as I questioned).

I suspect that most of the odd-alignment things are not going
to be ones you really care about having in a dump, but the
difficulty is going to be defining what counts as "a region
we don't care about", because we don't really have "purposes"
attached to MemoryRegions. So in practice the dump code is
going to have to either (a) be able to put odd-alignment
regions into the dump, and put them all in or (b) skip all of
them, regardless. Chances of anybody noticing a difference
between a and b in practice seem minimal :-)
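
For illustration, (b) could be little more than an alignment filter in that
collection path -- a sketch only, assuming qemu_target_page_size() is the
right helper:

static bool section_is_page_aligned(const MemoryRegionSection *section)
{
    hwaddr start = section->offset_within_address_space;
    hwaddr size = int128_get64(section->size);
    hwaddr mask = qemu_target_page_size() - 1;

    /* aligned start plus page-multiple size implies an aligned end too */
    return ((start | size) & mask) == 0;
}

/* ...and in the region_add callback:
 *     if (!section_is_page_aligned(section)) {
 *         return;   // (b): skip odd-alignment regions entirely
 *     }
 */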

-- PMM


