From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Alex Braunegg <alex.braunegg@gmail.com>, xen-devel@lists.xen.org
Subject: Re: [QUESTION] x86_64 -> i386/i686 CPU translation between xl and qemu binary?
Date: Fri, 5 Feb 2016 00:04:09 +0000
Message-ID: <56B3E6F9.7010007@citrix.com>
In-Reply-To: <56b3e268.9121620a.6da33.ffff9f20@mx.google.com>
On 04/02/2016 23:44, Alex Braunegg wrote:
> Hi Andrew,
>
> I don't know enough on the internals of xen / qemu here - however, based on
> what you said, an x86_64 OS with >4Gb memory should boot via xl - however in
> my case here it fails to start up:
>
> -------------------------------------------------------------
>
> [root@mynas-s5000xvn ~]# xl -vvvv create /etc/xen/config/Windows_2008_R2.cfg
>
> Parsing config from /etc/xen/config/Windows_2008_R2.cfg
> libxl: debug: libxl_create.c:1560:do_domain_create: ao 0x735690: create:
> how=(nil) callback=(nil) poller=0x734b70
> libxl: debug: libxl_device.c:269:libxl__device_disk_set_backend: Disk
> vdev=hda spec.backend=unknown
> libxl: debug: libxl_device.c:298:libxl__device_disk_set_backend: Disk
> vdev=hda, using backend phy
> libxl: debug: libxl_device.c:269:libxl__device_disk_set_backend: Disk
> vdev=hdc spec.backend=unknown
> libxl: debug: libxl_device.c:298:libxl__device_disk_set_backend: Disk
> vdev=hdc, using backend phy
> libxl: debug: libxl_create.c:945:initiate_domain_create: running bootloader
> libxl: debug: libxl_bootloader.c:324:libxl__bootloader_run: not a PV domain,
> skipping bootloader
> libxl: debug: libxl_event.c:691:libxl__ev_xswatch_deregister: watch
> w=0x736078: deregister unregistered
> xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x5b3a4
> xc: detail: elf_parse_binary: memory: 0x100000 -> 0x15b3a4
> xc: detail: VIRTUAL MEMORY ARRANGEMENT:
> xc: detail: Loader: 0000000000100000->000000000015b3a4
> xc: detail: Modules: 0000000000000000->0000000000000000
> xc: detail: TOTAL: 0000000000000000->000000017f000000
> xc: detail: ENTRY: 0000000000100600
> xc: detail: PHYSICAL MEMORY ALLOCATION:
> xc: detail: 4KB PAGES: 0x0000000000000200
> xc: detail: 2MB PAGES: 0x00000000000005f7
> xc: detail: 1GB PAGES: 0x0000000000000003
> xc: detail: elf_load_binary: phdr 0 at 0x7efc906d7000 -> 0x7efc90728910
> xc: error: Could not clear special pages (22 = Invalid argument): Internal
> error
> libxl: error: libxl_dom.c:1003:libxl__build_hvm: hvm building failed
The problem is here, but it isn't immediately apparent why (other than
"failed to map special pages").
> Guest Config:
> =============
>
> -------------------------------------------------------------
>
> builder='hvm'
> memory = 6144
> shadow_memory = 8
These values look suspicious to me. What hardware is this running on?
Either you are on more modern hardware with Hardware Assisted Paging
(HAP), in which case you shouldn't be setting shadow_memory at all, or
you are on rather older, shadow-paging-only hardware, at which point 8MB
of shadow memory is probably too small for a 6GB VM.
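As a quick sanity check, something along these lines should show whether Xen
found HAP support on the host (the exact log wording varies between Xen
versions, so treat the grep pattern as illustrative):

    # look for HAP / EPT / NPT mentions in the hypervisor boot log
    xl dmesg | grep -iE 'hap|ept|npt'

If it did, the simplest thing to try is dropping the shadow_memory line
entirely and letting libxl pick its own default, i.e. (other values copied
from your config):

    builder = 'hvm'
    memory = 6144
    # shadow_memory deliberately omitted

If the box turns out to be shadow-only, bump shadow_memory well above 8
instead.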
~Andrew
Thread overview: 8+ messages
2016-02-04 22:06 [QUESTION] x86_64 -> i386/i686 CPU translation between xl and qemu binary? Alex Braunegg
2016-02-04 22:22 ` Andrew Cooper
2016-02-04 23:14 ` Steven Haigh
2016-02-05 0:12 ` Andrew Cooper
2016-02-05 9:56 ` Ian Campbell
2016-02-04 23:44 ` Alex Braunegg
2016-02-05 0:04 ` Andrew Cooper [this message]
2016-02-05 0:18 ` Alex Braunegg