From: Julien Grall <julien.grall@arm.com>
To: Vijay Kilari <vijay.kilari@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
Stefano Stabellini <sstabellini@kernel.org>,
Andre Przywara <andre.przywara@arm.com>
Subject: Re: arm: alloc_heap_pages allocates already allocated page
Date: Tue, 7 Feb 2017 12:47:33 +0000 [thread overview]
Message-ID: <cc33cd2e-499d-ba3b-6a06-04eafa2ecae6@arm.com> (raw)
In-Reply-To: <CALicx6tDr7+BbP6JUvzyKi3sht+ZtkoZbiqqUhgUdWDFf-WHcA@mail.gmail.com>
On 07/02/2017 12:41, Vijay Kilari wrote:
> On Tue, Feb 7, 2017 at 4:58 PM, Julien Grall <julien.grall@arm.com> wrote:
>> On 07/02/2017 11:10, Vijay Kilari wrote:
>>>
>>> On Tue, Feb 7, 2017 at 3:37 PM, Julien Grall <julien.grall@arm.com> wrote:
>>>>
>>>> On 07/02/2017 08:18, Vijay Kilari wrote:
>>>>>
>>>>> I am seeing the below panic with NUMA during DT mappings in
>>>>> alloc_heap_pages():
>>>>> BUG_ON(pg[i].count_info != PGC_state_free);
>>>>> However, this issue is not present with version 4.7. The same NUMA
>>>>> board boots fine.
>>>>
>>>> I am a bit confused by what you are saying. Xen on ARM does not yet
>>>> support NUMA. I also know you are working on NUMA support. So does the
>>>> BUG happen on upstream xen or upstream xen + your patches?
>>>
>>> I was testing with Andre's ITS patches (RFC version 1) + my NUMA patches
>>> + upstream xen. However, now I have tested with upstream xen + Andre's
>>> ITS patches (staging branch) on the NUMA board.
>>
>> The RFC v1 is quite an old version. Please give it a try using the latest
>> version [1].
>>
>>> I see a panic (similar to what I see with my patches). Logs are here:
>>
>> Well, the panic is different now. An ASSERT in list_del is hit this time.
>> This looks like memory corruption to me.
>>
>>>
>>> http://pastebin.com/QJqUBvD9
>>>
>>> The same plain upstream xen + Andre's ITS patches boots fine on a
>>> non-NUMA board.
>>
>> I know that DOM0 cannot boot without the ITS on your platform. But as you
>> don't reach DOM0, have you tried to boot without the ITS series on the
>> NUMA board?
>
> Yes, without the ITS patches I get the previously (first) reported panic
> at "Xen BUG at page_alloc.c:827".
Can you please paste the full log from upstream xen (no debug, no ITS, no
NUMA) and the device tree memory node?

Also, please disable CONFIG_DEBUG_DEVICE_TREE; it pollutes the logs, and I
don't think the option is necessary to solve this problem.
Cheers,
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Thread overview: 13+ messages
2017-02-07 8:18 arm: alloc_heap_pages allocates already allocated page Vijay Kilari
2017-02-07 10:07 ` Julien Grall
2017-02-07 11:10 ` Vijay Kilari
2017-02-07 11:28 ` Julien Grall
2017-02-07 12:41 ` Vijay Kilari
2017-02-07 12:47 ` Julien Grall [this message]
2017-02-07 13:00 ` Julien Grall
2017-02-07 13:25 ` Vijay Kilari
2017-02-07 13:27 ` Julien Grall
2017-02-07 15:59 ` Vijay Kilari
2017-02-08 14:18 ` Julien Grall
2017-02-08 15:42 ` Julien Grall
2017-02-09 6:36 ` Vijay Kilari