* p2m stuff and crash tool
From: Daniel Kiper @ 2016-02-16 11:35 UTC (permalink / raw)
To: jgross
Cc: boris.ostrovsky, david.vrabel, konrad.wilk, crash-utility,
linux-kernel, xen-devel
Hey Juergen,
As I saw, you are deeply involved in the p2m stuff, so I hope
that you can enlighten me a bit in that area.
OVM, an Oracle product, uses Linux 3.8.13 as its dom0 kernel
(yep, I know this is very ancient stuff) with a lot of
backports. Among them there is commit 2c185687ab016954557aac80074f5d7f7f5d275c
(x86/xen: delay construction of mfn_list_list). After
an investigation I discovered that it breaks the crash tool.
It fails with messages like the following:
crash: read error: kernel virtual address: ffff88027ce0b700 type: "current_task (per_cpu)"
crash: read error: kernel virtual address: ffff88027ce2b700 type: "current_task (per_cpu)"
crash: read error: kernel virtual address: ffff88027ce4b700 type: "current_task (per_cpu)"
crash: read error: kernel virtual address: ffff88027ce6b700 type: "current_task (per_cpu)"
crash: read error: kernel virtual address: ffff88027ce10c64 type: "tss_struct ist array"
Addresses and symbols depend on the given build.
The problem is that xen_max_p2m_pfn in xen_build_mfn_list_list()
is equal to xen_start_info->nr_pages. This means that memory
which ends up above that boundary due to some remapping/relocation
(usually a small fraction) is not covered by p2m_top_mfn and p2m_top_mfn_p.
I should mention here that Xen is started with e.g. dom0_mem=1g,max:1g.
If I remove the max argument then crash works because xen_max_p2m_pfn
is greater than xen_start_info->nr_pages. Additionally, the issue
can be fixed by replacing xen_max_p2m_pfn in xen_build_mfn_list_list()
with max_pfn.
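To make the failure mode concrete, here is a toy model (the numbers are
invented, not taken from a real boot):

```python
# Toy model: p2m_top_mfn is built only up to xen_start_info->nr_pages,
# but remapping/relocation moves a few frames above that boundary.
nr_pages = 0x40000   # invented: initial allocation (1 GiB of 4 KiB pages)
max_pfn  = 0x40100   # invented: a small fraction remapped above nr_pages

def build_mfn_list_list(limit):
    """Stand-in for xen_build_mfn_list_list(): pfns covered by p2m_top_mfn."""
    return set(range(limit))

covered = build_mfn_list_list(nr_pages)  # buggy: limit == nr_pages
missing = [pfn for pfn in range(max_pfn) if pfn not in covered]
# Exactly the remapped tail is unreachable for the crash tool:
assert missing == list(range(nr_pages, max_pfn))

# With max_pfn as the limit nothing is left out:
assert all(pfn in build_mfn_list_list(max_pfn) for pfn in range(max_pfn))
```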
After that I decided to take a look at the upstream Linux kernel. I saw
that xen_max_p2m_pfn in xen_build_mfn_list_list() is equal to "the
end of the last usable machine memory region available for a given
dom0_mem argument + something", e.g.
For dom0_mem=1g,max:1g:
(XEN) Xen-e820 RAM map:
(XEN) 0000000000000000 - 000000000009fc00 (usable)
(XEN) 000000000009fc00 - 00000000000a0000 (reserved)
(XEN) 00000000000f0000 - 0000000000100000 (reserved)
(XEN) 0000000000100000 - 000000007ffdf000 (usable) <--- HERE
(XEN) 000000007ffdf000 - 0000000080000000 (reserved)
(XEN) 00000000b0000000 - 00000000c0000000 (reserved)
(XEN) 00000000feffc000 - 00000000ff000000 (reserved)
(XEN) 00000000fffc0000 - 0000000100000000 (reserved)
(XEN) 0000000100000000 - 0000000180000000 (usable)
Hence xen_max_p2m_pfn == 0x80000
Later I reviewed most of your p2m related commits and I realized
that you have been playing a whack-a-mole game with p2m bugs. Sadly, I was
not able to identify exactly one (or more) commit which would fix the
same issue (well, there are some which fix similar things but not
the one described above). So, if you explain to me why
xen_max_p2m_pfn is set to that value and not e.g. max_pfn, then
it will be much easier for me to write a proper fix and maybe fix
the same issue in the upstream kernel if needed (well, the crash
tool does not work with the new p2m layout, so first of all I must fix that;
I hope that you will help me with that sooner or later).
Additionally, during that work I realized that p2m_top (xen_p2m_addr
in the latest Linux kernel) and p2m_top_mfn differ. As I saw, p2m_top
represents everything (memory, missing, identity, etc.) found in the PV
guest address space. However, p2m_top_mfn is limited to just memory
and missing entries. Taking into account that p2m_top_mfn is used only
for migration and the crash tool, it looks like that is sufficient.
Am I correct? Am I missing any detail?
Daniel
PS I am sending this to a wider forum because I think that it
is worth spreading this knowledge even if it is not strictly
related to the latest Xen or Linux kernel developments.
* Re: p2m stuff and crash tool
From: Juergen Gross @ 2016-02-16 12:55 UTC (permalink / raw)
To: Daniel Kiper
Cc: boris.ostrovsky, david.vrabel, konrad.wilk, crash-utility,
linux-kernel, xen-devel
Hi Daniel,
On 16/02/16 12:35, Daniel Kiper wrote:
> Hey Juergen,
>
> As I saw, you are deeply involved in the p2m stuff, so I hope
> that you can enlighten me a bit in that area.
Yes, the p2m stuff is always fun. :-)
> OVM, an Oracle product, uses Linux 3.8.13 as its dom0 kernel
> (yep, I know this is very ancient stuff) with a lot of
> backports. Among them there is commit 2c185687ab016954557aac80074f5d7f7f5d275c
> (x86/xen: delay construction of mfn_list_list). After
> an investigation I discovered that it breaks the crash tool.
> It fails with messages like the following:
>
> crash: read error: kernel virtual address: ffff88027ce0b700 type: "current_task (per_cpu)"
> crash: read error: kernel virtual address: ffff88027ce2b700 type: "current_task (per_cpu)"
> crash: read error: kernel virtual address: ffff88027ce4b700 type: "current_task (per_cpu)"
> crash: read error: kernel virtual address: ffff88027ce6b700 type: "current_task (per_cpu)"
> crash: read error: kernel virtual address: ffff88027ce10c64 type: "tss_struct ist array"
>
> Addresses and symbols depend on the given build.
>
> The problem is that xen_max_p2m_pfn in xen_build_mfn_list_list()
> is equal to xen_start_info->nr_pages. This means that memory
> which ends up above that boundary due to some remapping/relocation
> (usually a small fraction) is not covered by p2m_top_mfn and p2m_top_mfn_p.
> I should mention here that Xen is started with e.g. dom0_mem=1g,max:1g.
> If I remove the max argument then crash works because xen_max_p2m_pfn
> is greater than xen_start_info->nr_pages. Additionally, the issue
> can be fixed by replacing xen_max_p2m_pfn in xen_build_mfn_list_list()
> with max_pfn.
>
> After that I decided to take a look at the upstream Linux kernel. I saw
> that xen_max_p2m_pfn in xen_build_mfn_list_list() is equal to "the
> end of the last usable machine memory region available for a given
> dom0_mem argument + something", e.g.
>
> For dom0_mem=1g,max:1g:
>
> (XEN) Xen-e820 RAM map:
> (XEN) 0000000000000000 - 000000000009fc00 (usable)
> (XEN) 000000000009fc00 - 00000000000a0000 (reserved)
> (XEN) 00000000000f0000 - 0000000000100000 (reserved)
> (XEN) 0000000000100000 - 000000007ffdf000 (usable) <--- HERE
> (XEN) 000000007ffdf000 - 0000000080000000 (reserved)
> (XEN) 00000000b0000000 - 00000000c0000000 (reserved)
> (XEN) 00000000feffc000 - 00000000ff000000 (reserved)
> (XEN) 00000000fffc0000 - 0000000100000000 (reserved)
> (XEN) 0000000100000000 - 0000000180000000 (usable)
>
> Hence xen_max_p2m_pfn == 0x80000
>
> Later I reviewed most of your p2m related commits and I realized
> that you have been playing a whack-a-mole game with p2m bugs. Sadly, I was
> not able to identify exactly one (or more) commit which would fix the
> same issue (well, there are some which fix similar things but not
> the one described above). So, if you explain to me why
> xen_max_p2m_pfn is set to that value and not e.g. max_pfn, then
> it will be much easier for me to write a proper fix and maybe fix
> the same issue in the upstream kernel if needed (well, the crash
> tool does not work with the new p2m layout, so first of all I must fix that;
> I hope that you will help me with that sooner or later).
The reason for setting xen_max_p2m_pfn to nr_pages initially is its
usage in __pfn_to_mfn(): this must work with the initial p2m list
supplied by the hypervisor, which has only nr_pages entries.
Later it is updated to the number of entries the linear p2m list is
able to hold. This size has to include possibly hotplugged memory
in order to be able to make use of that memory later (remember: the
p2m list's size is limited by the virtual space allocated for it via
xen_vmalloc_p2m_tree()).
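A heavily condensed model of that constraint (the helper below is
hypothetical; the real __pfn_to_mfn() has more cases):

```python
INVALID_P2M_ENTRY = (1 << 64) - 1   # "invalid" mfn marker

# Invented stand-in for the hypervisor-supplied list with nr_pages entries.
nr_pages = 4
initial_p2m = [100, 101, 102, 103]

def pfn_to_mfn(pfn, p2m, xen_max_p2m_pfn):
    # The bound is the whole point: while xen_max_p2m_pfn == nr_pages,
    # the lookup can never index past the short initial list.
    if pfn < xen_max_p2m_pfn:
        return p2m[pfn]
    return INVALID_P2M_ENTRY   # simplification

assert pfn_to_mfn(2, initial_p2m, nr_pages) == 102
assert pfn_to_mfn(5, initial_p2m, nr_pages) == INVALID_P2M_ENTRY
```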
> Additionally, during that work I realized that p2m_top (xen_p2m_addr
> in the latest Linux kernel) and p2m_top_mfn differ. As I saw, p2m_top
> represents everything (memory, missing, identity, etc.) found in the PV
> guest address space. However, p2m_top_mfn is limited to just memory
> and missing entries. Taking into account that p2m_top_mfn is used only
> for migration and the crash tool, it looks like that is sufficient.
> Am I correct? Am I missing any detail?
Basically p2m_top and p2m_top_mfn hold the same information. p2m_top
just has some special mappings for identity pages: they translate to
"invalid" mfns just as in p2m_top_mfn, but via dedicated pages which are
identified by comparing their addresses (or pfns) in order to detect
the identity pages.
As you thought: this distinction isn't necessary for p2m_top_mfn, so it
can be omitted there.
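A small model of that detection scheme (p2m_missing and p2m_identity are
real page names in the kernel; everything else here is heavily simplified):

```python
INVALID_P2M_ENTRY = (1 << 64) - 1
P2M_PER_PAGE = 512   # entries per p2m page on x86-64

# Two shared pages, both filled with invalid mfns. What distinguishes
# "identity" from "missing" is WHICH page a pfn's entry lives in, not
# the entry's value -- the lookup compares page identity (addresses).
p2m_missing  = [INVALID_P2M_ENTRY] * P2M_PER_PAGE
p2m_identity = [INVALID_P2M_ENTRY] * P2M_PER_PAGE

p2m_top = {0: p2m_missing, 1: p2m_identity}   # toy: topidx -> page

def classify(topidx):
    page = p2m_top[topidx]
    if page is p2m_identity:   # the address comparison described above
        return "identity"
    if page is p2m_missing:
        return "missing"
    return "memory"

assert classify(0) == "missing"
assert classify(1) == "identity"
```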
>
> Daniel
>
> PS I am sending this to a wider forum because I think that it
> is worth spreading this knowledge even if it is not strictly
> related to the latest Xen or Linux kernel developments.
OTOH: what was hard to write should be hard to read. ;-)
Feel free to ask further questions.
Juergen
* Re: p2m stuff and crash tool
From: Daniel Kiper @ 2016-02-17 13:59 UTC (permalink / raw)
To: Juergen Gross
Cc: boris.ostrovsky, david.vrabel, konrad.wilk, crash-utility,
linux-kernel, xen-devel
On Tue, Feb 16, 2016 at 01:55:33PM +0100, Juergen Gross wrote:
> Hi Daniel,
>
> On 16/02/16 12:35, Daniel Kiper wrote:
> > Hey Juergen,
[...]
> > After that I decided to take a look at the upstream Linux kernel. I saw
> > that xen_max_p2m_pfn in xen_build_mfn_list_list() is equal to "the
> > end of the last usable machine memory region available for a given
> > dom0_mem argument + something", e.g.
> >
> > For dom0_mem=1g,max:1g:
> >
> > (XEN) Xen-e820 RAM map:
> > (XEN) 0000000000000000 - 000000000009fc00 (usable)
> > (XEN) 000000000009fc00 - 00000000000a0000 (reserved)
> > (XEN) 00000000000f0000 - 0000000000100000 (reserved)
> > (XEN) 0000000000100000 - 000000007ffdf000 (usable) <--- HERE
> > (XEN) 000000007ffdf000 - 0000000080000000 (reserved)
> > (XEN) 00000000b0000000 - 00000000c0000000 (reserved)
> > (XEN) 00000000feffc000 - 00000000ff000000 (reserved)
> > (XEN) 00000000fffc0000 - 0000000100000000 (reserved)
> > (XEN) 0000000100000000 - 0000000180000000 (usable)
> >
> > Hence xen_max_p2m_pfn == 0x80000
> >
> > Later I reviewed most of your p2m related commits and I realized
> > that you have been playing a whack-a-mole game with p2m bugs. Sadly, I was
> > not able to identify exactly one (or more) commit which would fix the
> > same issue (well, there are some which fix similar things but not
> > the one described above). So, if you explain to me why
> > xen_max_p2m_pfn is set to that value and not e.g. max_pfn, then
> > it will be much easier for me to write a proper fix and maybe fix
> > the same issue in the upstream kernel if needed (well, the crash
> > tool does not work with the new p2m layout, so first of all I must fix that;
> > I hope that you will help me with that sooner or later).
>
> The reason for setting xen_max_p2m_pfn to nr_pages initially is its
> usage in __pfn_to_mfn(): this must work with the initial p2m list
> supplied by the hypervisor, which has only nr_pages entries.
That makes sense.
> Later it is updated to the number of entries the linear p2m list is
> able to hold. This size has to include possibly hotplugged memory
> in order to be able to make use of that memory later (remember: the
> p2m list's size is limited by the virtual space allocated for it via
> xen_vmalloc_p2m_tree()).
However, I have memory hotplug disabled and the kernel sets xen_max_p2m_pfn
to 0x80000 (2 GiB) even though dom0 memory was set to 1 GiB. Hmmm... Why?
I suppose that if xen_max_p2m_pfn == max_pfn then everything should work.
Am I missing something?
Daniel
* Re: p2m stuff and crash tool
From: Juergen Gross @ 2016-02-17 14:27 UTC (permalink / raw)
To: Daniel Kiper
Cc: boris.ostrovsky, david.vrabel, konrad.wilk, crash-utility,
linux-kernel, xen-devel
On 17/02/16 14:59, Daniel Kiper wrote:
> On Tue, Feb 16, 2016 at 01:55:33PM +0100, Juergen Gross wrote:
>> Hi Daniel,
>>
>> On 16/02/16 12:35, Daniel Kiper wrote:
>>> Hey Juergen,
>
> [...]
>
>>> After that I decided to take a look at the upstream Linux kernel. I saw
>>> that xen_max_p2m_pfn in xen_build_mfn_list_list() is equal to "the
>>> end of the last usable machine memory region available for a given
>>> dom0_mem argument + something", e.g.
>>>
>>> For dom0_mem=1g,max:1g:
>>>
>>> (XEN) Xen-e820 RAM map:
>>> (XEN) 0000000000000000 - 000000000009fc00 (usable)
>>> (XEN) 000000000009fc00 - 00000000000a0000 (reserved)
>>> (XEN) 00000000000f0000 - 0000000000100000 (reserved)
>>> (XEN) 0000000000100000 - 000000007ffdf000 (usable) <--- HERE
>>> (XEN) 000000007ffdf000 - 0000000080000000 (reserved)
>>> (XEN) 00000000b0000000 - 00000000c0000000 (reserved)
>>> (XEN) 00000000feffc000 - 00000000ff000000 (reserved)
>>> (XEN) 00000000fffc0000 - 0000000100000000 (reserved)
>>> (XEN) 0000000100000000 - 0000000180000000 (usable)
>>>
>>> Hence xen_max_p2m_pfn == 0x80000
>>>
>>> Later I reviewed most of your p2m related commits and I realized
>>> that you have been playing a whack-a-mole game with p2m bugs. Sadly, I was
>>> not able to identify exactly one (or more) commit which would fix the
>>> same issue (well, there are some which fix similar things but not
>>> the one described above). So, if you explain to me why
>>> xen_max_p2m_pfn is set to that value and not e.g. max_pfn, then
>>> it will be much easier for me to write a proper fix and maybe fix
>>> the same issue in the upstream kernel if needed (well, the crash
>>> tool does not work with the new p2m layout, so first of all I must fix that;
>>> I hope that you will help me with that sooner or later).
>>
>> The reason for setting xen_max_p2m_pfn to nr_pages initially is its
>> usage in __pfn_to_mfn(): this must work with the initial p2m list
>> supplied by the hypervisor, which has only nr_pages entries.
>
> That makes sense.
>
>> Later it is updated to the number of entries the linear p2m list is
>> able to hold. This size has to include possibly hotplugged memory
>> in order to be able to make use of that memory later (remember: the
>> p2m list's size is limited by the virtual space allocated for it via
>> xen_vmalloc_p2m_tree()).
>
> However, I have memory hotplug disabled and the kernel sets xen_max_p2m_pfn
> to 0x80000 (2 GiB) even though dom0 memory was set to 1 GiB. Hmmm... Why?
> I suppose that if xen_max_p2m_pfn == max_pfn then everything should work.
> Am I missing something?
The virtual p2m list's size is aligned to PMD_SIZE (2 MB). For 1 GB of dom0
memory max_pfn will be a little bit above 0x40000 due to the BIOS
area, resulting in a 4 MB p2m list. And xen_max_p2m_pfn reflects
this size. You could reduce it to max_pfn without any problem, as long
as memory hotplug is disabled. At least I think so.
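That arithmetic in a nutshell (the exact max_pfn value is illustrative):

```python
PMD_SIZE = 2 << 20      # 2 MiB
ENTRY_SIZE = 8          # one p2m entry is an unsigned long

def align_up(x, a):
    return (x + a - 1) // a * a

max_pfn = 0x40100       # "a little bit above 0x40000" for a 1 GB dom0
list_bytes = align_up(max_pfn * ENTRY_SIZE, PMD_SIZE)
assert list_bytes == 4 << 20                 # the 4 MB p2m list
assert list_bytes // ENTRY_SIZE == 0x80000   # xen_max_p2m_pfn
```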
Juergen
* Re: p2m stuff and crash tool
From: Daniel Kiper @ 2016-02-17 14:52 UTC (permalink / raw)
To: Juergen Gross
Cc: boris.ostrovsky, david.vrabel, konrad.wilk, crash-utility,
linux-kernel, xen-devel
On Wed, Feb 17, 2016 at 03:27:01PM +0100, Juergen Gross wrote:
> On 17/02/16 14:59, Daniel Kiper wrote:
> > On Tue, Feb 16, 2016 at 01:55:33PM +0100, Juergen Gross wrote:
> >> Hi Daniel,
> >>
> >> On 16/02/16 12:35, Daniel Kiper wrote:
> >>> Hey Juergen,
> >
> > [...]
> >
> >>> After that I decided to take a look at the upstream Linux kernel. I saw
> >>> that xen_max_p2m_pfn in xen_build_mfn_list_list() is equal to "the
> >>> end of the last usable machine memory region available for a given
> >>> dom0_mem argument + something", e.g.
> >>>
> >>> For dom0_mem=1g,max:1g:
> >>>
> >>> (XEN) Xen-e820 RAM map:
> >>> (XEN) 0000000000000000 - 000000000009fc00 (usable)
> >>> (XEN) 000000000009fc00 - 00000000000a0000 (reserved)
> >>> (XEN) 00000000000f0000 - 0000000000100000 (reserved)
> >>> (XEN) 0000000000100000 - 000000007ffdf000 (usable) <--- HERE
> >>> (XEN) 000000007ffdf000 - 0000000080000000 (reserved)
> >>> (XEN) 00000000b0000000 - 00000000c0000000 (reserved)
> >>> (XEN) 00000000feffc000 - 00000000ff000000 (reserved)
> >>> (XEN) 00000000fffc0000 - 0000000100000000 (reserved)
> >>> (XEN) 0000000100000000 - 0000000180000000 (usable)
> >>>
> >>> Hence xen_max_p2m_pfn == 0x80000
> >>>
> >>> Later I reviewed most of your p2m related commits and I realized
> >>> that you have been playing a whack-a-mole game with p2m bugs. Sadly, I was
> >>> not able to identify exactly one (or more) commit which would fix the
> >>> same issue (well, there are some which fix similar things but not
> >>> the one described above). So, if you explain to me why
> >>> xen_max_p2m_pfn is set to that value and not e.g. max_pfn, then
> >>> it will be much easier for me to write a proper fix and maybe fix
> >>> the same issue in the upstream kernel if needed (well, the crash
> >>> tool does not work with the new p2m layout, so first of all I must fix that;
> >>> I hope that you will help me with that sooner or later).
> >>
> >> The reason for setting xen_max_p2m_pfn to nr_pages initially is its
> >> usage in __pfn_to_mfn(): this must work with the initial p2m list
> >> supplied by the hypervisor, which has only nr_pages entries.
> >
> > That makes sense.
> >
> >> Later it is updated to the number of entries the linear p2m list is
> >> able to hold. This size has to include possibly hotplugged memory
> >> in order to be able to make use of that memory later (remember: the
> >> p2m list's size is limited by the virtual space allocated for it via
> >> xen_vmalloc_p2m_tree()).
> >
> > However, I have memory hotplug disabled and the kernel sets xen_max_p2m_pfn
> > to 0x80000 (2 GiB) even though dom0 memory was set to 1 GiB. Hmmm... Why?
> > I suppose that if xen_max_p2m_pfn == max_pfn then everything should work.
> > Am I missing something?
>
> The virtual p2m list's size is aligned to PMD_SIZE (2 MB). For 1 GB of dom0
> memory max_pfn will be a little bit above 0x40000 due to the BIOS
> area, resulting in a 4 MB p2m list. And xen_max_p2m_pfn reflects
> this size. You could reduce it to max_pfn without any problem, as long
> as memory hotplug is disabled. At least I think so.
To be precise, it is PMD_SIZE * PMDS_PER_MID_PAGE, so it equals 0x80000 in
this case. Why do we need such a huge alignment? Could we not use a smaller
one, e.g. PAGE_SIZE?
Daniel
* Re: p2m stuff and crash tool
From: Juergen Gross @ 2016-02-17 15:08 UTC (permalink / raw)
To: Daniel Kiper
Cc: boris.ostrovsky, david.vrabel, konrad.wilk, crash-utility,
linux-kernel, xen-devel
On 17/02/16 15:52, Daniel Kiper wrote:
> On Wed, Feb 17, 2016 at 03:27:01PM +0100, Juergen Gross wrote:
>> On 17/02/16 14:59, Daniel Kiper wrote:
>>> On Tue, Feb 16, 2016 at 01:55:33PM +0100, Juergen Gross wrote:
>>>> Hi Daniel,
>>>>
>>>> On 16/02/16 12:35, Daniel Kiper wrote:
>>>>> Hey Juergen,
>>>
>>> [...]
>>>
>>>>> After that I decided to take a look at the upstream Linux kernel. I saw
>>>>> that xen_max_p2m_pfn in xen_build_mfn_list_list() is equal to "the
>>>>> end of the last usable machine memory region available for a given
>>>>> dom0_mem argument + something", e.g.
>>>>>
>>>>> For dom0_mem=1g,max:1g:
>>>>>
>>>>> (XEN) Xen-e820 RAM map:
>>>>> (XEN) 0000000000000000 - 000000000009fc00 (usable)
>>>>> (XEN) 000000000009fc00 - 00000000000a0000 (reserved)
>>>>> (XEN) 00000000000f0000 - 0000000000100000 (reserved)
>>>>> (XEN) 0000000000100000 - 000000007ffdf000 (usable) <--- HERE
>>>>> (XEN) 000000007ffdf000 - 0000000080000000 (reserved)
>>>>> (XEN) 00000000b0000000 - 00000000c0000000 (reserved)
>>>>> (XEN) 00000000feffc000 - 00000000ff000000 (reserved)
>>>>> (XEN) 00000000fffc0000 - 0000000100000000 (reserved)
>>>>> (XEN) 0000000100000000 - 0000000180000000 (usable)
>>>>>
>>>>> Hence xen_max_p2m_pfn == 0x80000
>>>>>
>>>>> Later I reviewed most of your p2m related commits and I realized
>>>>> that you have been playing a whack-a-mole game with p2m bugs. Sadly, I was
>>>>> not able to identify exactly one (or more) commit which would fix the
>>>>> same issue (well, there are some which fix similar things but not
>>>>> the one described above). So, if you explain to me why
>>>>> xen_max_p2m_pfn is set to that value and not e.g. max_pfn, then
>>>>> it will be much easier for me to write a proper fix and maybe fix
>>>>> the same issue in the upstream kernel if needed (well, the crash
>>>>> tool does not work with the new p2m layout, so first of all I must fix that;
>>>>> I hope that you will help me with that sooner or later).
>>>>
>>>> The reason for setting xen_max_p2m_pfn to nr_pages initially is its
>>>> usage in __pfn_to_mfn(): this must work with the initial p2m list
>>>> supplied by the hypervisor, which has only nr_pages entries.
>>>
>>> That makes sense.
>>>
>>>> Later it is updated to the number of entries the linear p2m list is
>>>> able to hold. This size has to include possibly hotplugged memory
>>>> in order to be able to make use of that memory later (remember: the
>>>> p2m list's size is limited by the virtual space allocated for it via
>>>> xen_vmalloc_p2m_tree()).
>>>
>>> However, I have memory hotplug disabled and the kernel sets xen_max_p2m_pfn
>>> to 0x80000 (2 GiB) even though dom0 memory was set to 1 GiB. Hmmm... Why?
>>> I suppose that if xen_max_p2m_pfn == max_pfn then everything should work.
>>> Am I missing something?
>>
>> The virtual p2m list's size is aligned to PMD_SIZE (2 MB). For 1 GB of dom0
>> memory max_pfn will be a little bit above 0x40000 due to the BIOS
>> area, resulting in a 4 MB p2m list. And xen_max_p2m_pfn reflects
>> this size. You could reduce it to max_pfn without any problem, as long
>> as memory hotplug is disabled. At least I think so.
>
> To be precise, it is PMD_SIZE * PMDS_PER_MID_PAGE, so it equals 0x80000 in
> this case. Why do we need such a huge alignment? Could we not use a smaller
> one, e.g. PAGE_SIZE?
The idea when implementing this was to use the same boundaries as the
3-level p2m tree would use. Doing so wouldn't waste any real memory, so
that's how it was coded. Additionally, this solution supports
memory hotplug, which shouldn't be forgotten for the common case.
So to sum it up: this alignment doesn't do any harm (with the exception of the
crash tool), but it makes life easier for memory hotplug and for the
construction of the p2m tree (with the current alignment I don't need
special handling of a partially valid mid level frame).
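The "same boundaries" point can be checked with a little arithmetic
(values assumed for x86-64 with 4 KiB pages and 8-byte entries):

```python
PAGE_SIZE = 4096
PTR_SIZE = 8   # sizeof(unsigned long) == sizeof(unsigned long *)

# 3-level tree: a mid page holds pointers to leaf pages of p2m entries.
P2M_PER_PAGE     = PAGE_SIZE // PTR_SIZE   # 512 entries per leaf page
P2M_MID_PER_PAGE = PAGE_SIZE // PTR_SIZE   # 512 leaf pointers per mid page
pfns_per_mid_page = P2M_MID_PER_PAGE * P2M_PER_PAGE

# Linear list: one PMD maps 2 MiB worth of entries.
PMD_SIZE = 2 << 20
pfns_per_pmd = PMD_SIZE // PTR_SIZE

# One PMD of the linear list lines up exactly with one mid page.
assert pfns_per_mid_page == pfns_per_pmd == 0x40000
```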
Juergen