From: Julien Grall <julien.grall@arm.com>
To: Feng Kan <fkan@apm.com>
Cc: xen-devel@lists.xenproject.org, nd@arm.com,
	sstabellini@kernel.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: xl create failure on arm64 with XEN 4.9rc6
Date: Sun, 28 May 2017 18:12:17 +0100
Message-ID: <15f6b693-c77a-bc9f-fbb9-b2d23a486211@arm.com>
In-Reply-To: <CAL85gmAE5Rd+YmLmW7A+chNukiXKdmZdeypRWHT=RU1eV1gvFA@mail.gmail.com>

Hi,

On 05/26/2017 11:22 PM, Feng Kan wrote:
> On Fri, May 26, 2017 at 5:40 AM, Julien Grall <julien.grall@arm.com> wrote:
>>
>>
>> On 26/05/17 01:37, Feng Kan wrote:
>>>
>>> On Thu, May 25, 2017 at 12:56 PM, Julien Grall <julien.grall@arm.com>
>>> wrote:
>>>>
>>>> (CC toolstack maintainers)
>>>>
>>>> On 25/05/2017 19:58, Feng Kan wrote:
>>>>>
>>>>>
>>>>> Hi All:
>>>>
>>>>
>>>>
>>>> Hello,
>>>>
>>>>> This is not specifically against Xen 4.9. I am using a 4.12-rc2
>>>>> kernel on an arm64 platform. Dom0 started fine with ACPI enabled, but
>>>>> creating the domU guest failed. Xen is built natively on the arm64
>>>>> platform, using the same kernel and ramdisk as dom0. Any idea why it
>>>>> is stuck here would be greatly appreciated.
>>>>
>>>>
>>>>
>>>> The first step would be to try a stable release if you can. Also, it
>>>> would be useful if you provided information about the guest (i.e. the
>>>> configuration) and your .config for the kernel.
>>>
>>> I am using the default xen_defconfig in the arm64 directory.
>>
>>
>> I am confused. There is no xen_defconfig in the arm64 directory of the
>> kernel. So which one are you talking about?
> Sorry, my mistake.
>>
>>> This is very early on in building the domain; would the guest
>>> configuration matter?
>>
>>
>> The configuration of the dom0 kernel matters when you want to build the
>> guest. That's why I wanted to know which options you enabled.
> I see. I am using the default CentOS 7.2 kernel config plus the Xen
> options enabled (attached below).

Looking at the .config, Linux is using 64KB page granularity.
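
(For reference: with CONFIG_ARM64_64K_PAGES=y the kernel's PAGE_SIZE is 64KB,
while Xen always works in 4KB frames. Below is a minimal stand-alone sketch of
the mismatch -- not taken from the kernel tree, just to illustrate the
constants involved.)

    #include <stdio.h>

    /* Illustrative only -- these mirror the kernel/Xen definitions under
     * the assumption of a 64KB-page arm64 build. */
    #define PAGE_SIZE        (64 * 1024)                  /* Linux granularity */
    #define XEN_PAGE_SIZE    (4 * 1024)                   /* Xen granularity   */
    #define XEN_PFN_PER_PAGE (PAGE_SIZE / XEN_PAGE_SIZE)  /* 16 */

    int main(void)
    {
        printf("%d Xen 4KB frames per 64KB Linux page\n", XEN_PFN_PER_PAGE);
        return 0;
    }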

I managed to reproduce the failure (though with a different error) by using
an initramfs larger than 32MB (smaller ones work). The patch below should fix
the error; can you give it a try?

commit c4684b425552a8330f00d7703f3175d721992ab0
Author: Julien Grall <julien.grall@arm.com>
Date:   Sun May 28 17:50:07 2017 +0100

    xen/privcmd: Support correctly 64KB page granularity when mapping memory
    
    Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page granularity" did
    not go far enough to support 64KB in mmap_batch_fn.
    
    The variable 'nr' is the number of 4KB chunks to map. However, when Linux
    is using 64KB page granularity, the array of pages (vma->vm_private_data)
    contains one page per 64KB. Fix it by dividing st->index by
    XEN_PFN_PER_PAGE when indexing into the array.
    
    Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
    XEN_PAGE_SIZE.
    
    Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page granularity")
    CC: stable@vger.kernel.org
    Reported-by: Feng Kan <fkan@apm.com>
    Signed-off-by: Julien Grall <julien.grall@arm.com>

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 7a92a5e..38d9a43 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -345,7 +345,7 @@ static int mmap_batch_fn(void *data, int nr, void *state)
        int ret;
 
        if (xen_feature(XENFEAT_auto_translated_physmap))
-               cur_pages = &pages[st->index];
+               cur_pages = &pages[st->index / XEN_PFN_PER_PAGE];
 
        BUG_ON(nr < 0);
        ret = xen_remap_domain_gfn_array(st->vma, st->va & PAGE_MASK, gfnp, nr,
@@ -362,7 +362,7 @@ static int mmap_batch_fn(void *data, int nr, void *state)
                                st->global_error = 1;
                }
        }
-       st->va += PAGE_SIZE * nr;
+       st->va += XEN_PAGE_SIZE * nr;
        st->index += nr;
 
        return 0;
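
To make the arithmetic concrete, here is a small stand-alone sketch
(illustration only, not kernel code, assuming a 64KB-page build and made-up
batch values) of how st->index, which counts 4KB Xen frames, maps onto the
pages[] array, which holds one 64KB Linux page per 16 frames, and why st->va
must advance by XEN_PAGE_SIZE rather than PAGE_SIZE:

    #include <stdio.h>

    #define PAGE_SIZE        (64 * 1024)
    #define XEN_PAGE_SIZE    (4 * 1024)
    #define XEN_PFN_PER_PAGE (PAGE_SIZE / XEN_PAGE_SIZE)   /* 16 */

    int main(void)
    {
        /* Pretend mmap_batch_fn is called twice, each time with nr = 32
         * (two batches of 32 4KB frames, i.e. 128KB per batch). */
        unsigned long index = 0, va = 0x40000000UL;
        int nr = 32;

        for (int batch = 0; batch < 2; batch++) {
            /* Fixed indexing: one 64KB page backs XEN_PFN_PER_PAGE frames,
             * so the page array slot is index / XEN_PFN_PER_PAGE.
             * Indexing with the raw frame count would run off the array. */
            unsigned long first = index / XEN_PFN_PER_PAGE;
            unsigned long last  = (index + nr - 1) / XEN_PFN_PER_PAGE;

            printf("batch %d: va=%#lx maps pages[%lu..%lu]\n",
                   batch, va, first, last);

            va    += (unsigned long)XEN_PAGE_SIZE * nr;  /* not PAGE_SIZE * nr */
            index += nr;
        }
        return 0;
    }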

Cheers,

-- 
Julien Grall

Thread overview: 8 messages
2017-05-25 18:58 xl create failure on arm64 with XEN 4.9rc6 Feng Kan
2017-05-25 19:56 ` Julien Grall
2017-05-26  0:37   ` Feng Kan
2017-05-26 12:40     ` Julien Grall
2017-05-26 22:22       ` Feng Kan
2017-05-28 17:12         ` Julien Grall [this message]
2017-05-30 17:41           ` Feng Kan
2017-05-31 13:04             ` Julien Grall
