From: Andrew Cooper <andrew.cooper3@citrix.com>
To: manish jaggi <manishjaggi.oss@gmail.com>
Cc: xen-users@lists.xen.org, manish.jaggi@caviumnetworks.com,
	Vijay Kilari <vijay.kilari@gmail.com>,
	xen-devel@lists.xenproject.org
Subject: Re: How to dump vcpu regs when a domain is killed during xl create
Date: Tue, 19 Aug 2014 11:54:51 +0100	[thread overview]
Message-ID: <53F32CFB.1030606@citrix.com> (raw)
In-Reply-To: <CAAiw7Jn2xZGXFK1VXcy2m3kCZ18Z8i8joUk3yW-Lt69qf0+4jw@mail.gmail.com>

On 19/08/14 11:51, manish jaggi wrote:
> On 19 August 2014 15:35, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> On 19/08/14 11:02, manish jaggi wrote:
>>> On 19 August 2014 14:31, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>>> On 19/08/14 06:11, manish jaggi wrote:
>>>>> Adding the question on xen-devel
>>>>>
>>>>> On 18 August 2014 14:08, manish jaggi <manishjaggi.oss@gmail.com> wrote:
>>>>>> I tried to start a domain using xl create, which showed some Xen
>>>>>> logs, then hung for a minute or so and then displayed the message
>>>>>> "Killed". Below is the log.
>>>>>>
>>>>>> linux:~ # xl create domU.cfg
>>>>>> Parsing config from domU.cfg
>>>>>> (XEN)  ....
>>>>>> Killed
>>>>>>
>>>>>> Is there a way to know why it was killed, and to dump core regs (vcpu
>>>>>> regs) at the point it was killed?
>>>> There is no guarantee the domain has successfully started.
>>>>
>>>> Put "loglvl=all guest_loglvl=all" on the Xen command line, reboot, and
>>>> attach the results of
>>>>
>>>> xl -vvvv create domU.cfg
>>>>
>>>> and xl dmesg after the domain has failed in this way.
>>>>
>>>> ~Andrew
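
For reference: these log-level options belong on the Xen hypervisor command
line, not dom0's kernel command line. A minimal sketch of how to set them,
assuming a GRUB2 setup where /etc/default/grub carries the hypervisor
options; on ARM boards the line may instead come from U-Boot or the device
tree's chosen node, and the file path and mkconfig invocation vary by
distribution:

    # /etc/default/grub -- options for the Xen hypervisor itself
    GRUB_CMDLINE_XEN_DEFAULT="loglvl=all guest_loglvl=all"

    # regenerate the bootloader configuration, then reboot
    grub2-mkconfig -o /boot/grub2/grub.cfg
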
>>> Below is the log
>>> linux:~ # xl -vvv create domU.cfg -d -p
>>> Parsing config from domU.cfg
>>> {
>>>     "domid": null,
>>>     "config": {
>>>         "c_info": {
>>>             "type": "pv",
>>>             "name": "guest",
>>>             "uuid": "e5cb14f1-085d-4511-bca5-d6e3a0c35672",
>>>             "run_hotplug_scripts": "True"
>>>         },
>>>         "b_info": {
>>>             "max_vcpus": 1,
>>>             "avail_vcpus": [
>>>                 0
>>>             ],
>>>             "max_memkb": 262144,
>>>             "target_memkb": 262144,
>>>             "shadow_memkb": 3072,
>>>             "sched_params": {
>>>
>>>             },
>>>             "claim_mode": "True",
>>>             "type.pv": {
>>>                 "kernel": "/root/Image",
>>>                 "cmdline": "console=hvc0 root=/dev/xvda ro"
>>>             }
>>>         },
>>>         "disks": [
>>>             {
>>>                 "pdev_path": "/dev/loop0",
>>>                 "vdev": "xvda",
>>>                 "format": "raw",
>>>                 "readwrite": 1
>>>             }
>>>         ],
>>>         "on_reboot": "restart"
>>>     }
>>> }
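
For context, a domU.cfg along these lines would produce the JSON dump above.
This is a reconstruction from the dumped values, not the poster's actual
file:

    name      = "guest"
    kernel    = "/root/Image"
    extra     = "console=hvc0 root=/dev/xvda ro"
    memory    = 256
    vcpus     = 1
    disk      = [ 'phy:/dev/loop0,xvda,w' ]
    on_reboot = "restart"
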
>>>
>>> libxl: verbose:
>>> libxl_create.c:134:libxl__domain_build_info_setdefault: qemu-xen is
>>> unavailable, use qemu-xen-traditional instead: No such file or
>>> directory
>>> libxl: debug: libxl_create.c:1401:do_domain_create: ao 0xd297c20:
>>> create: how=(nil) callback=(nil) poller=0xd293d90
>>> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
>>> vdev=xvda spec.backend=unknown
>>> libxl: debug: libxl_device.c:280:libxl__device_disk_set_backend: Disk
>>> vdev=xvda, using backend phy
>>> libxl: debug: libxl_create.c:851:initiate_domain_create: running bootloader
>>> libxl: debug: libxl_bootloader.c:329:libxl__bootloader_run: no
>>> bootloader configured, using user supplied kernel
>>> libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch
>>> w=0xd294648: deregister unregistered
>>> domainbuilder: detail: xc_dom_allocate: cmdline="console=hvc0
>>> root=/dev/xvda ro", features="(null)"
>>> libxl: debug: libxl_dom.c:410:libxl__build_pv: pv kernel mapped 0 path
>>> /root/Image
>>> domainbuilder: detail: xc_dom_kernel_file: filename="/root/Image"
>>> domainbuilder: detail: xc_dom_malloc_filemap    : 8354 kB
>>> domainbuilder: detail: xc_dom_boot_xen_init: ver 4.5, caps
>>> xen-3.0-aarch64 xen-3.0-armv7l
>>> domainbuilder: detail: xc_dom_rambase_init: RAM starts at 40000
>>> domainbuilder: detail: xc_dom_parse_image: called
>>> domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
>>> domainbuilder: detail: loader probe failed
>>> domainbuilder: detail: xc_dom_find_loader: trying Linux zImage (ARM64)
>>> loader ...
>>> domainbuilder: detail: loader probe OK
>>> domainbuilder: detail: xc_dom_parse_zimage64_kernel: called
>>> domainbuilder: detail: xc_dom_parse_zimage64_kernel: xen-3.0-aarch64:
>>> 0x40080000 -> 0x408a8b50
>>> libxl: debug: libxl_arm.c:474:libxl__arch_domain_init_hw_description:
>>> constructing DTB for Xen version 4.5 guest
>>> libxl: debug: libxl_arm.c:291:make_memory_nodes: Creating placeholder
>>> node /memory@40000000
>>> libxl: debug: libxl_arm.c:291:make_memory_nodes: Creating placeholder
>>> node /memory@200000000
>>> libxl: debug: libxl_arm.c:539:libxl__arch_domain_init_hw_description:
>>> fdt total size 1218
>>> domainbuilder: detail: xc_dom_devicetree_mem: called
>>> domainbuilder: detail: xc_dom_mem_init: mem 256 MB, pages 0x10000 pages, 4k each
>>> domainbuilder: detail: xc_dom_mem_init: 0x10000 pages
>>> domainbuilder: detail: xc_dom_boot_mem_init: called
>>> domainbuilder: detail: set_mode: guest xen-3.0-aarch64, address size 6
>>> domainbuilder: detail: xc_dom_malloc            : 512 kB
>>> domainbuilder: detail: populate_guest_memory: populating RAM @
>>> 0000000040000000-0000000050000000 (256MB)
>>> domainbuilder: detail: populate_one_size: populated 0x80/0x80 entries
>>> with shift 9
>>> domainbuilder: detail: arch_setup_meminit: placing boot modules at 0x48000000
>>> domainbuilder: detail: arch_setup_meminit: devicetree: 0x48000000 -> 0x48001000
>>> libxl: debug: libxl_arm.c:570:finalise_one_memory_node: Populating
>>> placeholder node /memory@40000000
>>> libxl: debug: libxl_arm.c:564:finalise_one_memory_node: Nopping out
>>> placeholder node /memory@200000000
>>> domainbuilder: detail: xc_dom_build_image: called
>>> domainbuilder: detail: xc_dom_alloc_segment:   kernel       :
>>> 0x40080000 -> 0x408a9000  (pfn 0x40080 + 0x829 pages)
>>> Killed
>> That is only half of the items I asked for, but this indicates that
>> something in dom0 killed the domain builder while it was constructing
>> the domain.
>>
>> Try consulting dom0's dmesg.
>>
>> ~Andrew
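
A quick way to confirm the OOM killer is responsible is to search dom0's
kernel log; the exact message text varies by kernel version:

    # look for OOM-killer activity in dom0's kernel log
    dmesg | grep -i -E 'out of memory|oom|killed process'
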
> It appears that xl was killed by the OOM killer. Here is the log. I hope
> it is a common problem. What is the usual suspect in these cases?

It's not just xl which suffers the OOM killer.  The usual suspect here is
not having sufficient RAM.  Try upping dom0's allocation.

~Andrew
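
Raising dom0's allocation is done on the Xen command line as well, via the
dom0_mem option. A sketch, assuming 1GB gives the toolstack enough headroom;
the right figure depends on the workload:

    # Xen hypervisor command line (same place as the loglvl options above)
    dom0_mem=1024M
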


Thread overview: 7+ messages
     [not found] <CAAiw7J=RNK7kF_POT56-8x73+xno-0Ys02VZBnoQc6uozYQn3g@mail.gmail.com>
2014-08-19  5:11 ` How to dump vcpu regs when a domain is killed during xl create manish jaggi
2014-08-19  9:01   ` Andrew Cooper
2014-08-19 10:02     ` manish jaggi
2014-08-19 10:05       ` Andrew Cooper
2014-08-19 10:51         ` manish jaggi
2014-08-19 10:54           ` Andrew Cooper [this message]
2014-08-19 11:39             ` manish jaggi
