xen-devel.lists.xenproject.org archive mirror
* Re: How to dump vcpu regs when a domain is killed during xl create
       [not found] <CAAiw7J=RNK7kF_POT56-8x73+xno-0Ys02VZBnoQc6uozYQn3g@mail.gmail.com>
@ 2014-08-19  5:11 ` manish jaggi
  2014-08-19  9:01   ` Andrew Cooper
  0 siblings, 1 reply; 7+ messages in thread
From: manish jaggi @ 2014-08-19  5:11 UTC (permalink / raw)
  To: xen-users, Vijay Kilari, manish.jaggi, xen-devel

Adding the question on xen-devel

On 18 August 2014 14:08, manish jaggi <manishjaggi.oss@gmail.com> wrote:
> I tried to start a domain using xl create. It showed some Xen logs,
> hung for a minute or so, and then displayed the message "Killed".
> Below is the log:
>
> linux:~ # xl create domU.cfg
> Parsing config from domU.cfg
> (XEN)  ....
> Killed
>
> Is there a way to know why it was killed, and to dump the core
> registers (vcpu regs) at the point it was killed?

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: How to dump vcpu regs when a domain is killed during xl create
  2014-08-19  5:11 ` How to dump vcpu regs when a domain is killed during xl create manish jaggi
@ 2014-08-19  9:01   ` Andrew Cooper
  2014-08-19 10:02     ` manish jaggi
  0 siblings, 1 reply; 7+ messages in thread
From: Andrew Cooper @ 2014-08-19  9:01 UTC (permalink / raw)
  To: manish jaggi, xen-users, Vijay Kilari, manish.jaggi, xen-devel

On 19/08/14 06:11, manish jaggi wrote:
> Adding the question on xen-devel
>
> On 18 August 2014 14:08, manish jaggi <manishjaggi.oss@gmail.com> wrote:
>> I tried to start a domain using xl create. It showed some Xen logs,
>> hung for a minute or so, and then displayed the message "Killed".
>> Below is the log:
>>
>> linux:~ # xl create domU.cfg
>> Parsing config from domU.cfg
>> (XEN)  ....
>> Killed
>>
>> Is there a way to know why it was killed, and to dump the core
>> registers (vcpu regs) at the point it was killed?

There is no guarantee the domain has successfully started.

Put "loglvl=all guest_loglvl=all" on the Xen command line, reboot, and
attach the results of

xl -vvvv create domU.cfg

and xl dmesg after the domain has failed in this way.
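
On a GRUB2-based dom0 that might look like the following (a hypothetical
fragment; the file name, variable name, and regeneration command all vary
by distro and bootloader):

```shell
# Hypothetical /etc/default/grub fragment -- this appends the options to
# the Xen hypervisor command line, not the dom0 kernel command line:
GRUB_CMDLINE_XEN_DEFAULT="loglvl=all guest_loglvl=all"

# Then regenerate the bootloader config and reboot, e.g.:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```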

~Andrew


* Re: How to dump vcpu regs when a domain is killed during xl create
  2014-08-19  9:01   ` Andrew Cooper
@ 2014-08-19 10:02     ` manish jaggi
  2014-08-19 10:05       ` Andrew Cooper
  0 siblings, 1 reply; 7+ messages in thread
From: manish jaggi @ 2014-08-19 10:02 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-users, manish.jaggi, Vijay Kilari, xen-devel

On 19 August 2014 14:31, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 19/08/14 06:11, manish jaggi wrote:
>> Adding the question on xen-devel
>>
>> On 18 August 2014 14:08, manish jaggi <manishjaggi.oss@gmail.com> wrote:
>>> I tried to start a domain using xl create. It showed some Xen logs,
>>> hung for a minute or so, and then displayed the message "Killed".
>>> Below is the log:
>>>
>>> linux:~ # xl create domU.cfg
>>> Parsing config from domU.cfg
>>> (XEN)  ....
>>> Killed
>>>
>>> Is there a way to know why it was killed, and to dump the core
>>> registers (vcpu regs) at the point it was killed?
>
> There is no guarantee the domain has successfully started.
>
> Put "loglvl=all guest_loglvl=all" on the Xen command line, reboot, and
> attach the results of
>
> xl -vvvv create domU.cfg
>
> and xl dmesg after the domain has failed in this way.
>
> ~Andrew

Below is the log
linux:~ # xl -vvv create domU.cfg -d -p
Parsing config from domU.cfg
{
    "domid": null,
    "config": {
        "c_info": {
            "type": "pv",
            "name": "guest",
            "uuid": "e5cb14f1-085d-4511-bca5-d6e3a0c35672",
            "run_hotplug_scripts": "True"
        },
        "b_info": {
            "max_vcpus": 1,
            "avail_vcpus": [
                0
            ],
            "max_memkb": 262144,
            "target_memkb": 262144,
            "shadow_memkb": 3072,
            "sched_params": {

            },
            "claim_mode": "True",
            "type.pv": {
                "kernel": "/root/Image",
                "cmdline": "console=hvc0 root=/dev/xvda ro"
            }
        },
        "disks": [
            {
                "pdev_path": "/dev/loop0",
                "vdev": "xvda",
                "format": "raw",
                "readwrite": 1
            }
        ],
        "on_reboot": "restart"
    }
}

libxl: verbose:
libxl_create.c:134:libxl__domain_build_info_setdefault: qemu-xen is
unavailable, use qemu-xen-traditional instead: No such file or
directory
libxl: debug: libxl_create.c:1401:do_domain_create: ao 0xd297c20:
create: how=(nil) callback=(nil) poller=0xd293d90
libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
vdev=xvda spec.backend=unknown
libxl: debug: libxl_device.c:280:libxl__device_disk_set_backend: Disk
vdev=xvda, using backend phy
libxl: debug: libxl_create.c:851:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:329:libxl__bootloader_run: no
bootloader configured, using user supplied kernel
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch
w=0xd294648: deregister unregistered
domainbuilder: detail: xc_dom_allocate: cmdline="console=hvc0
root=/dev/xvda ro", features="(null)"
libxl: debug: libxl_dom.c:410:libxl__build_pv: pv kernel mapped 0 path
/root/Image
domainbuilder: detail: xc_dom_kernel_file: filename="/root/Image"
domainbuilder: detail: xc_dom_malloc_filemap    : 8354 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.5, caps
xen-3.0-aarch64 xen-3.0-armv7l
domainbuilder: detail: xc_dom_rambase_init: RAM starts at 40000
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux zImage (ARM64)
loader ...
domainbuilder: detail: loader probe OK
domainbuilder: detail: xc_dom_parse_zimage64_kernel: called
domainbuilder: detail: xc_dom_parse_zimage64_kernel: xen-3.0-aarch64:
0x40080000 -> 0x408a8b50
libxl: debug: libxl_arm.c:474:libxl__arch_domain_init_hw_description:
constructing DTB for Xen version 4.5 guest
libxl: debug: libxl_arm.c:291:make_memory_nodes: Creating placeholder
node /memory@40000000
libxl: debug: libxl_arm.c:291:make_memory_nodes: Creating placeholder
node /memory@200000000
libxl: debug: libxl_arm.c:539:libxl__arch_domain_init_hw_description:
fdt total size 1218
domainbuilder: detail: xc_dom_devicetree_mem: called
domainbuilder: detail: xc_dom_mem_init: mem 256 MB, pages 0x10000 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x10000 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: set_mode: guest xen-3.0-aarch64, address size 6
domainbuilder: detail: xc_dom_malloc            : 512 kB
domainbuilder: detail: populate_guest_memory: populating RAM @
0000000040000000-0000000050000000 (256MB)
domainbuilder: detail: populate_one_size: populated 0x80/0x80 entries
with shift 9
domainbuilder: detail: arch_setup_meminit: placing boot modules at 0x48000000
domainbuilder: detail: arch_setup_meminit: devicetree: 0x48000000 -> 0x48001000
libxl: debug: libxl_arm.c:570:finalise_one_memory_node: Populating
placeholder node /memory@40000000
libxl: debug: libxl_arm.c:564:finalise_one_memory_node: Nopping out
placeholder node /memory@200000000
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_alloc_segment:   kernel       :
0x40080000 -> 0x408a9000  (pfn 0x40080 + 0x829 pages)
Killed


* Re: How to dump vcpu regs when a domain is killed during xl create
  2014-08-19 10:02     ` manish jaggi
@ 2014-08-19 10:05       ` Andrew Cooper
  2014-08-19 10:51         ` manish jaggi
  0 siblings, 1 reply; 7+ messages in thread
From: Andrew Cooper @ 2014-08-19 10:05 UTC (permalink / raw)
  To: manish jaggi; +Cc: xen-users, manish.jaggi, Vijay Kilari, xen-devel

On 19/08/14 11:02, manish jaggi wrote:
> On 19 August 2014 14:31, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> On 19/08/14 06:11, manish jaggi wrote:
>>> Adding the question on xen-devel
>>>
>>> On 18 August 2014 14:08, manish jaggi <manishjaggi.oss@gmail.com> wrote:
>>>> I tried to start a domain using xl create. It showed some Xen logs,
>>>> hung for a minute or so, and then displayed the message "Killed".
>>>> Below is the log:
>>>>
>>>> linux:~ # xl create domU.cfg
>>>> Parsing config from domU.cfg
>>>> (XEN)  ....
>>>> Killed
>>>>
>>>> Is there a way to know why it was killed, and to dump the core
>>>> registers (vcpu regs) at the point it was killed?
>> There is no guarantee the domain has successfully started.
>>
>> Put "loglvl=all guest_loglvl=all" on the Xen command line, reboot, and
>> attach the results of
>>
>> xl -vvvv create domU.cfg
>>
>> and xl dmesg after the domain has failed in this way.
>>
>> ~Andrew
> Below is the log
> [full "xl -vvv create" log snipped; quoted in full upthread]
> Killed

That is only half of the items I asked for, but this indicates that
something in dom0 killed the domain builder while it was constructing
the domain.

Try consulting dom0's dmesg.
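
One quick way to spot an OOM kill there (assuming a standard dmesg):

```shell
# Look for OOM-killer activity in the dom0 kernel log:
dmesg | grep -iE 'out of memory|oom-killer|killed process'
```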

~Andrew


* Re: How to dump vcpu regs when a domain is killed during xl create
  2014-08-19 10:05       ` Andrew Cooper
@ 2014-08-19 10:51         ` manish jaggi
  2014-08-19 10:54           ` Andrew Cooper
  0 siblings, 1 reply; 7+ messages in thread
From: manish jaggi @ 2014-08-19 10:51 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-users, manish.jaggi, Vijay Kilari, xen-devel

On 19 August 2014 15:35, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 19/08/14 11:02, manish jaggi wrote:
>> On 19 August 2014 14:31, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> On 19/08/14 06:11, manish jaggi wrote:
>>>> Adding the question on xen-devel
>>>>
>>>> On 18 August 2014 14:08, manish jaggi <manishjaggi.oss@gmail.com> wrote:
>>>>> I tried to start a domain using xl create. It showed some Xen logs,
>>>>> hung for a minute or so, and then displayed the message "Killed".
>>>>> Below is the log:
>>>>>
>>>>> linux:~ # xl create domU.cfg
>>>>> Parsing config from domU.cfg
>>>>> (XEN)  ....
>>>>> Killed
>>>>>
>>>>> Is there a way to know why it was killed, and to dump the core
>>>>> registers (vcpu regs) at the point it was killed?
>>> There is no guarantee the domain has successfully started.
>>>
>>> Put "loglvl=all guest_loglvl=all" on the Xen command line, reboot, and
>>> attach the results of
>>>
>>> xl -vvvv create domU.cfg
>>>
>>> and xl dmesg after the domain has failed in this way.
>>>
>>> ~Andrew
>> Below is the log
>> [full "xl -vvv create" log snipped; quoted in full upthread]
>> Killed
>
> That is only half of the items I asked for, but this indicates that
> something in dom0 killed the domain builder while it was constructing
> the domain.
>
> Try consulting dom0's dmesg.
>
> ~Andrew

It appears that xl was killed by the OOM killer. Here is the log. I
hope it is a common problem. What is the usual suspect in these cases?

linux:~ # dmesg
[ 1655.418978] [sched_delayed] sched: RT throttling activated
[ 1658.558697] xl invoked oom-killer: gfp_mask=0x200d0, order=0, oom_score_adj=0
[ 1658.559752] xl cpuset=/ mems_allowed=0
[ 1658.560069] CPU: 0 PID: 521 Comm: xl Not tainted 3.14.0+ #12
[ 1658.560208] Call trace:
[ 1658.560583] [<ffffffc000087e58>] dump_backtrace+0x0/0x128
[ 1658.560927] [<ffffffc000087f90>] show_stack+0x10/0x20
[ 1658.561259] [<ffffffc0005ace3c>] dump_stack+0x74/0x94
[ 1658.561586] [<ffffffc0005a9408>] dump_header.isra.12+0x7c/0x1a8
[ 1658.561964] [<ffffffc00014aa38>] oom_kill_process+0x288/0x410
[ 1658.562319] [<ffffffc00014b040>] out_of_memory+0x278/0x2c8
[ 1658.562656] [<ffffffc00014f2f8>] __alloc_pages_nodemask+0x7d8/0x7f0
[ 1658.562991] [<ffffffc0003d5094>] decrease_reservation+0xd4/0x1e8
[ 1658.563319] [<ffffffc0003d5604>] alloc_xenballooned_pages+0x7c/0x100
[ 1658.563685] [<ffffffc0003e9884>] privcmd_ioctl_mmap_batch+0x37c/0x420
[ 1658.564031] [<ffffffc0003e9b50>] privcmd_ioctl+0x228/0x2a8
[ 1658.564342] [<ffffffc0001a9e48>] do_vfs_ioctl+0x88/0x5c0
[ 1658.564649] [<ffffffc0001aa408>] SyS_ioctl+0x88/0xa0
[ 1658.564778] Mem-Info:
[ 1658.564916] DMA32 per-cpu:
[ 1658.565146] CPU    0: hi:    6, btch:   1 usd:   0
[ 1658.574718] active_anon:2322 inactive_anon:122 isolated_anon:0
 active_file:2 inactive_file:9 isolated_file:0
 unevictable:5 dirty:0 writeback:0 unstable:0
 free:202 slab_reclaimable:1009 slab_unreclaimable:2401
 mapped:153 shmem:227 pagetables:129 bounce:0
 free_cma:0
[ 1658.581623] DMA32 free:808kB min:808kB low:1008kB high:1212kB
active_anon:9288kB inactive_anon:488kB active_file:8kB
inactive_file:36kB unevictable:20kB isolated(anon):0kB
isolated(file):0kB present:131072kB managed:41408kB mlocked:20kB
dirty:0kB writeback:0kB mapped:612kB shmem:908kB
slab_reclaimable:4036kB slab_unreclaimable:9604kB kernel_stack:1216kB
pagetables:516kB unstable:0kB bounce:0kB free_cma:0kB
writeback_tmp:0kB pages_scanned:75 all_unreclaimable? yes
[ 1658.581888] lowmem_reserve[]: 0 0 0
[ 1658.582346] DMA32: 0*4kB 1*8kB (R) 0*16kB 1*32kB (R) 0*64kB 0*128kB
1*256kB (R) 1*512kB (R) 0*1024kB 0*2048kB 0*4096kB = 808kB
[ 1658.584328] Node 0 hugepages_total=0 hugepages_free=0
hugepages_surp=0 hugepages_size=2048kB
[ 1658.584528] 240 total pagecache pages
[ 1658.584700] 0 pages in swap cache
[ 1658.584910] Swap cache stats: add 0, delete 0, find 0/0
[ 1658.585060] Free swap  = 0kB
[ 1658.585203] Total swap = 0kB
[ 1658.585358] 32768 pages RAM
[ 1658.585515] 0 pages HighMem/MovableOnly
[ 1658.585666] 22416 pages reserved
[ 1658.585835] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents
oom_score_adj name
[ 1658.587424] [   78]     0    78    24921     2265      22        0
           0 systemd-journal
[ 1658.587843] [   92]     0    92     2651      128       8        0
       -1000 systemd-udevd
[ 1658.588259] [  159]   499   159     1083       90       6        0
        -900 dbus-daemon
[ 1658.588670] [  162]     0   162     1708       87       6        0
           0 wpa_supplicant
[ 1658.589076] [  163]     0   163     1116       74       6        0
           0 systemd-logind
[ 1658.589485] [  181]     0   181    58426      164      12        0
           0 rsyslogd
[ 1658.589873] [  380]     0   380     1160      159       6        0
           0 cron
[ 1658.590270] [  385]     0   385      594       36       5        0
           0 agetty
[ 1658.590671] [  387]     0   387     1583      122       7        0
           0 login
[ 1658.591067] [  390]     0   390     2095      149       9        0
       -1000 sshd
[ 1658.592297] [  399]     0   399     1398      100       6        0
           0 systemd
[ 1658.592732] [  401]     0   401     2072      293       7        0
           0 (sd-pam)
[ 1658.593153] [  442]     0   442     1474      214       6        0
           0 bash
[ 1658.593561] [  506]     0   506      632       78       5        0
           0 xenstored
[ 1658.593961] [  513]     0   513    17068       37       6        0
           0 xenconsoled
[ 1658.594350] [  521]     0   521     5293      247       6        0
           0 xl
[ 1658.594629] Out of memory: Kill process 78 (systemd-journal) score
214 or sacrifice child
[ 1658.594944] Killed process 78 (systemd-journal) total-vm:99684kB,
anon-rss:332kB, file-rss:8728kB
[ 1660.045289] in:imklog invoked oom-killer: gfp_mask=0x201da,
order=0, oom_score_adj=0
[ 1660.045555] in:imklog cpuset=/ mems_allowed=0
[ 1660.047255] CPU: 0 PID: 185 Comm: in:imklog Not tainted 3.14.0+ #12
[ 1660.047405] Call trace:
[ 1660.047771] [<ffffffc000087e58>] dump_backtrace+0x0/0x128
[ 1660.048110] [<ffffffc000087f90>] show_stack+0x10/0x20
[ 1660.048458] [<ffffffc0005ace3c>] dump_stack+0x74/0x94
[ 1660.048793] [<ffffffc0005a9408>] dump_header.isra.12+0x7c/0x1a8
[ 1660.049171] [<ffffffc00014aa38>] oom_kill_process+0x288/0x410
[ 1660.049536] [<ffffffc00014b040>] out_of_memory+0x278/0x2c8
[ 1660.049861] [<ffffffc00014f2f8>] __alloc_pages_nodemask+0x7d8/0x7f0
[ 1660.051001] [<ffffffc000149018>] filemap_fault+0x180/0x3c8
[ 1660.051371] [<ffffffc000169dac>] __do_fault+0x6c/0x528
[ 1660.051688] [<ffffffc00016e1d4>] handle_mm_fault+0x164/0xc20
[ 1660.052040] [<ffffffc000091688>] do_page_fault+0x258/0x3a8
[ 1660.052358] [<ffffffc000081100>] do_mem_abort+0x38/0xa0
[ 1660.052631] Exception stack(0xffffffc0009b7e30 to 0xffffffc0009b7f50)
[ 1660.053042] 7e20:                                     99cd7080
0000007f 99cd6f98 0000007f
[ 1660.053625] 7e40: ffffffff ffffffff 9acfa86c 0000007f ffffffff
ffffffff 9afe8058 0000007f
[ 1660.054206] 7e60: 009b7e80 ffffffc0 00199064 ffffffc0 019d9443
ffffffc0 00199058 ffffffc0
[ 1660.054774] 7e80: 99cd7020 0000007f 000841ec ffffffc0 00000000
00000000 99cd70b2 0000007f
[ 1660.055342] 7ea0: ffffffff ffffffff 000000e2 00000000 00001fa0
00000000 99cd7080 0000007f
[ 1660.056715] 7ec0: 99cd7020 0000007f 00084298 ffffffc0 00000068
00000000 99cf8890 0000007f
[ 1660.059130] 7ee0: 00000034 00000000 0000000a 00000000 39365f6d
b33b34ff 303d6a64 3e363c0a
[ 1660.059723] 7f00: 9af091b8 0000007f cfc2959b c1c9c3f5 0a0a0a0a
0a0a0a0a 99cd6fd0 0000007f
[ 1660.060308] 7f20: 7f7f7f7f 7f7f7f7f 01010101 01010101 00000010
00000000 fffffe09 ffffffff
[ 1660.060706] 7f40: 0000000d 00000000 ffffffed ffffffff
[ 1660.060842] Mem-Info:
[ 1660.060980] DMA32 per-cpu:
[ 1660.061217] CPU    0: hi:    6, btch:   1 usd:   0
[ 1660.061918] active_anon:2308 inactive_anon:55 isolated_anon:0
 active_file:7 inactive_file:4 isolated_file:0
 unevictable:5 dirty:1 writeback:0 unstable:0
 free:202 slab_reclaimable:994 slab_unreclaimable:2401
 mapped:2 shmem:227 pagetables:109 bounce:0
 free_cma:0
[ 1660.063208] DMA32 free:808kB min:808kB low:1008kB high:1212kB
active_anon:9232kB inactive_anon:220kB active_file:28kB
inactive_file:16kB unevictable:20kB isolated(anon):0kB
isolated(file):0kB present:131072kB managed:41408kB mlocked:20kB
dirty:4kB writeback:0kB mapped:8kB shmem:908kB slab_reclaimable:3976kB
slab_unreclaimable:9604kB kernel_stack:1232kB pagetables:436kB
unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
pages_scanned:67 all_unreclaimable? yes
[ 1660.063540] lowmem_reserve[]: 0 0 0
[ 1660.063997] DMA32: 0*4kB 1*8kB (R) 0*16kB 1*32kB (R) 0*64kB 0*128kB
1*256kB (R) 1*512kB (R) 0*1024kB 0*2048kB 0*4096kB = 808kB
[ 1660.069506] Node 0 hugepages_total=0 hugepages_free=0
hugepages_surp=0 hugepages_size=2048kB
[ 1660.071356] 239 total pagecache pages
[ 1660.071537] 0 pages in swap cache
[ 1660.071766] Swap cache stats: add 0, delete 0, find 0/0
[ 1660.071924] Free swap  = 0kB
[ 1660.072063] Total swap = 0kB
[ 1660.072209] 32768 pages RAM
[ 1660.072355] 0 pages HighMem/MovableOnly
[ 1660.072521] 22416 pages reserved
[ 1660.072687] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents
oom_score_adj name
[ 1660.073216] [   92]     0    92     2651      128       8        0
       -1000 systemd-udevd
[ 1660.073643] [  159]   499   159     1083       90       6        0
        -900 dbus-daemon
[ 1660.074045] [  162]     0   162     1708       87       6        0
           0 wpa_supplicant
[ 1660.074457] [  163]     0   163     1116       74       6        0
           0 systemd-logind
[ 1660.074875] [  181]     0   181    58426      164      12        0
           0 rsyslogd
[ 1660.075264] [  380]     0   380     1160      159       6        0
           0 cron
[ 1660.075662] [  385]     0   385      594       36       5        0
           0 agetty
[ 1660.080081] [  387]     0   387     1583      122       7        0
           0 login
[ 1660.082861] [  390]     0   390     2095      149       9        0
       -1000 sshd
[ 1660.083280] [  399]     0   399     1398      100       6        0
           0 systemd
[ 1660.083692] [  401]     0   401     2072      293       7        0
           0 (sd-pam)
[ 1660.084119] [  442]     0   442     1474      214       6        0
           0 bash
[ 1660.084541] [  506]     0   506      632       78       5        0
           0 xenstored
[ 1660.084947] [  513]     0   513    17068       37       6        0
           0 xenconsoled
[ 1660.085341] [  521]     0   521     5293      247       6        0
           0 xl
[ 1660.085740] [  522]     0   522       43        1       2        0
           0 systemd-cgroups
[ 1660.088994] Out of memory: Kill process 401 ((sd-pam)) score 28 or
sacrifice child
[ 1660.091001] Killed process 401 ((sd-pam)) total-vm:8288kB,
anon-rss:1172kB, file-rss:0kB
[ 1661.908299] xl invoked oom-killer: gfp_mask=0x200d0, order=0, oom_score_adj=0
[ 1661.908600] xl cpuset=/ mems_allowed=0
[ 1661.908923] CPU: 0 PID: 521 Comm: xl Not tainted 3.14.0+ #12
[ 1661.909068] Call trace:
[ 1661.910343] [<ffffffc000087e58>] dump_backtrace+0x0/0x128
[ 1661.910716] [<ffffffc000087f90>] show_stack+0x10/0x20
[ 1661.911071] [<ffffffc0005ace3c>] dump_stack+0x74/0x94
[ 1661.911418] [<ffffffc0005a9408>] dump_header.isra.12+0x7c/0x1a8
[ 1661.911813] [<ffffffc00014aa38>] oom_kill_process+0x288/0x410
[ 1661.912185] [<ffffffc00014b040>] out_of_memory+0x278/0x2c8
[ 1661.912547] [<ffffffc00014f2f8>] __alloc_pages_nodemask+0x7d8/0x7f0
[ 1661.912906] [<ffffffc0003d5094>] decrease_reservation+0xd4/0x1e8
[ 1661.913256] [<ffffffc0003d5604>] alloc_xenballooned_pages+0x7c/0x100
[ 1661.913642] [<ffffffc0003e9884>] privcmd_ioctl_mmap_batch+0x37c/0x420
[ 1661.914010] [<ffffffc0003e9b50>] privcmd_ioctl+0x228/0x2a8
[ 1661.914343] [<ffffffc0001a9e48>] do_vfs_ioctl+0x88/0x5c0
[ 1661.914656] [<ffffffc0001aa408>] SyS_ioctl+0x88/0xa0
[ 1661.914792] Mem-Info:
[ 1661.914937] DMA32 per-cpu:
[ 1661.915178] CPU    0: hi:    6, btch:   1 usd:   0
[ 1661.915907] active_anon:2155 inactive_anon:55 isolated_anon:0
 active_file:8 inactive_file:3 isolated_file:0
 unevictable:5 dirty:0 writeback:0 unstable:0
 free:201 slab_reclaimable:994 slab_unreclaimable:2399
 mapped:1 shmem:227 pagetables:103 bounce:0
 free_cma:0
[ 1661.922955] DMA32 free:804kB min:808kB low:1008kB high:1212kB
active_anon:8620kB inactive_anon:220kB active_file:32kB
inactive_file:12kB unevictable:20kB isolated(anon):0kB
isolated(file):0kB present:131072kB managed:41408kB mlocked:20kB
dirty:0kB writeback:0kB mapped:4kB shmem:908kB slab_reclaimable:3976kB
slab_unreclaimable:9596kB kernel_stack:1232kB pagetables:412kB
unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
pages_scanned:70 all_unreclaimable? yes
[ 1661.923287] lowmem_reserve[]: 0 0 0
[ 1661.923759] DMA32: 1*4kB (R) 0*8kB 0*16kB 1*32kB (R) 0*64kB 0*128kB
1*256kB (R) 1*512kB (R) 0*1024kB 0*2048kB 0*4096kB = 804kB
[ 1661.925873] Node 0 hugepages_total=0 hugepages_free=0
hugepages_surp=0 hugepages_size=2048kB
[ 1661.929197] 252 total pagecache pages
[ 1661.929379] 0 pages in swap cache
[ 1661.929616] Swap cache stats: add 0, delete 0, find 0/0
[ 1661.929778] Free swap  = 0kB
[ 1661.929918] Total swap = 0kB
[ 1661.930072] 32768 pages RAM
[ 1661.930223] 0 pages HighMem/MovableOnly
[ 1661.930376] 22416 pages reserved
[ 1661.930544] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents
oom_score_adj name
[ 1661.931083] [   92]     0    92     2651      128       8        0
       -1000 systemd-udevd
[ 1661.931524] [  159]   499   159     1083       90       6        0
        -900 dbus-daemon
[ 1661.931940] [  162]     0   162     1708       87       6        0
           0 wpa_supplicant
[ 1661.932369] [  163]     0   163     1116       74       6        0
           0 systemd-logind
[ 1661.932818] [  181]     0   181    58426      164      12        0
           0 rsyslogd
[ 1661.933222] [  380]     0   380     1160      159       6        0
           0 cron
[ 1661.933638] [  385]     0   385      594       36       5        0
           0 agetty
[ 1661.934053] [  387]     0   387     1583      122       7        0
           0 login
[ 1661.934476] [  390]     0   390     2095      149       9        0
       -1000 sshd
[ 1661.934896] [  399]     0   399     1398       98       6        0
           0 systemd
[ 1661.935330] [  442]     0   442     1474      214       6        0
           0 bash
[ 1661.935757] [  506]     0   506      632       78       5        0
           0 xenstored
[ 1661.943386] [  513]     0   513    17068       37       6        0
           0 xenconsoled
[ 1661.945470] [  521]     0   521     5293      247       6        0
           0 xl
[ 1661.946601] [  522]     0   522       78        1       3        0
           0 systemd-cgroups
[ 1661.949506] Out of memory: Kill process 521 (xl) score 23 or sacrifice child
[ 1661.949848] Killed process 521 (xl) total-vm:21172kB,
anon-rss:980kB, file-rss:8kB
[ 1670.390637] systemd[1]: systemd-journald.service holdoff time over,
scheduling restart.
[ 1670.421508] systemd[1]: Stopping Journal Service...
[ 1670.454109] systemd[1]: Starting Journal Service...
[ 1670.861545] systemd[1]: Started Journal Service.
[ 1673.842226] systemd-journald[523]: File
/run/log/journal/2f93db49586f4b85b4411279f803b005/system.journal
corrupted or uncleanly shut down, renaming and replacing.
[ 1674.341531] systemd-journald[523]: Vacuuming done, freed 0 bytes
[ 1678.363595] systemd-journald[523]: Received request to flush
runtime journal from PID 1

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: How to dump vcpu regs when a domain is killed during xl create
  2014-08-19 10:51         ` manish jaggi
@ 2014-08-19 10:54           ` Andrew Cooper
  2014-08-19 11:39             ` manish jaggi
  0 siblings, 1 reply; 7+ messages in thread
From: Andrew Cooper @ 2014-08-19 10:54 UTC (permalink / raw)
  To: manish jaggi; +Cc: xen-users, manish.jaggi, Vijay Kilari, xen-devel

On 19/08/14 11:51, manish jaggi wrote:
> On 19 August 2014 15:35, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> On 19/08/14 11:02, manish jaggi wrote:
>>> On 19 August 2014 14:31, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>>> On 19/08/14 06:11, manish jaggi wrote:
>>>>> Adding the question on xen-devel
>>>>>
>>>>> On 18 August 2014 14:08, manish jaggi <manishjaggi.oss@gmail.com> wrote:
>>>>>> I tried to start a domain using xl create, which showed some Xen
>>>>>> logs, hung for a minute or so, and then displayed the message
>>>>>> "Killed". Below is the log:
>>>>>>
>>>>>> linux:~ # xl create domU.cfg
>>>>>> Parsing config from domU.cfg
>>>>>> (XEN)  ....
>>>>>> Killed
>>>>>>
>>>>>> Is there a way to know why it was killed, and to dump the core
>>>>>> regs (vcpu regs) at the point it was killed?
>>>> There is no guarantee the domain has successfully started.
>>>>
>>>> Put "loglvl=all guest_loglvl=all" on the Xen command line, reboot, and
>>>> attach the results of
>>>>
>>>> xl -vvvv create domU.cfg
>>>>
>>>> and xl dmesg after the domain has failed in this way.
>>>>
>>>> ~Andrew
>>> Below is the log
>>> linux:~ # xl -vvv create domU.cfg -d -p
>>> Parsing config from domU.cfg
>>> {
>>>     "domid": null,
>>>     "config": {
>>>         "c_info": {
>>>             "type": "pv",
>>>             "name": "guest",
>>>             "uuid": "e5cb14f1-085d-4511-bca5-d6e3a0c35672",
>>>             "run_hotplug_scripts": "True"
>>>         },
>>>         "b_info": {
>>>             "max_vcpus": 1,
>>>             "avail_vcpus": [
>>>                 0
>>>             ],
>>>             "max_memkb": 262144,
>>>             "target_memkb": 262144,
>>>             "shadow_memkb": 3072,
>>>             "sched_params": {
>>>
>>>             },
>>>             "claim_mode": "True",
>>>             "type.pv": {
>>>                 "kernel": "/root/Image",
>>>                 "cmdline": "console=hvc0 root=/dev/xvda ro"
>>>             }
>>>         },
>>>         "disks": [
>>>             {
>>>                 "pdev_path": "/dev/loop0",
>>>                 "vdev": "xvda",
>>>                 "format": "raw",
>>>                 "readwrite": 1
>>>             }
>>>         ],
>>>         "on_reboot": "restart"
>>>     }
>>> }
>>>
>>> libxl: verbose:
>>> libxl_create.c:134:libxl__domain_build_info_setdefault: qemu-xen is
>>> unavailable, use qemu-xen-traditional instead: No such file or
>>> directory
>>> libxl: debug: libxl_create.c:1401:do_domain_create: ao 0xd297c20:
>>> create: how=(nil) callback=(nil) poller=0xd293d90
>>> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
>>> vdev=xvda spec.backend=unknown
>>> libxl: debug: libxl_device.c:280:libxl__device_disk_set_backend: Disk
>>> vdev=xvda, using backend phy
>>> libxl: debug: libxl_create.c:851:initiate_domain_create: running bootloader
>>> libxl: debug: libxl_bootloader.c:329:libxl__bootloader_run: no
>>> bootloader configured, using user supplied kernel
>>> libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch
>>> w=0xd294648: deregister unregistered
>>> domainbuilder: detail: xc_dom_allocate: cmdline="console=hvc0
>>> root=/dev/xvda ro", features="(null)"
>>> libxl: debug: libxl_dom.c:410:libxl__build_pv: pv kernel mapped 0 path
>>> /root/Image
>>> domainbuilder: detail: xc_dom_kernel_file: filename="/root/Image"
>>> domainbuilder: detail: xc_dom_malloc_filemap    : 8354 kB
>>> domainbuilder: detail: xc_dom_boot_xen_init: ver 4.5, caps
>>> xen-3.0-aarch64 xen-3.0-armv7l
>>> domainbuilder: detail: xc_dom_rambase_init: RAM starts at 40000
>>> domainbuilder: detail: xc_dom_parse_image: called
>>> domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
>>> domainbuilder: detail: loader probe failed
>>> domainbuilder: detail: xc_dom_find_loader: trying Linux zImage (ARM64)
>>> loader ...
>>> domainbuilder: detail: loader probe OK
>>> domainbuilder: detail: xc_dom_parse_zimage64_kernel: called
>>> domainbuilder: detail: xc_dom_parse_zimage64_kernel: xen-3.0-aarch64:
>>> 0x40080000 -> 0x408a8b50
>>> libxl: debug: libxl_arm.c:474:libxl__arch_domain_init_hw_description:
>>> constructing DTB for Xen version 4.5 guest
>>> libxl: debug: libxl_arm.c:291:make_memory_nodes: Creating placeholder
>>> node /memory@40000000
>>> libxl: debug: libxl_arm.c:291:make_memory_nodes: Creating placeholder
>>> node /memory@200000000
>>> libxl: debug: libxl_arm.c:539:libxl__arch_domain_init_hw_description:
>>> fdt total size 1218
>>> domainbuilder: detail: xc_dom_devicetree_mem: called
>>> domainbuilder: detail: xc_dom_mem_init: mem 256 MB, pages 0x10000 pages, 4k each
>>> domainbuilder: detail: xc_dom_mem_init: 0x10000 pages
>>> domainbuilder: detail: xc_dom_boot_mem_init: called
>>> domainbuilder: detail: set_mode: guest xen-3.0-aarch64, address size 6
>>> domainbuilder: detail: xc_dom_malloc            : 512 kB
>>> domainbuilder: detail: populate_guest_memory: populating RAM @
>>> 0000000040000000-0000000050000000 (256MB)
>>> domainbuilder: detail: populate_one_size: populated 0x80/0x80 entries
>>> with shift 9
>>> domainbuilder: detail: arch_setup_meminit: placing boot modules at 0x48000000
>>> domainbuilder: detail: arch_setup_meminit: devicetree: 0x48000000 -> 0x48001000
>>> libxl: debug: libxl_arm.c:570:finalise_one_memory_node: Populating
>>> placeholder node /memory@40000000
>>> libxl: debug: libxl_arm.c:564:finalise_one_memory_node: Nopping out
>>> placeholder node /memory@200000000
>>> domainbuilder: detail: xc_dom_build_image: called
>>> domainbuilder: detail: xc_dom_alloc_segment:   kernel       :
>>> 0x40080000 -> 0x408a9000  (pfn 0x40080 + 0x829 pages)
>>> Killed
>> That is only half of the items I asked for, but this indicates that
>> something in dom0 killed the domain builder while it was constructing
>> the domain.
>>
>> Try consulting dom0's dmesg.
>>
>> ~Andrew
> It appears that xl is killed by the OOM killer. Here is the log. I hope
> it is a common problem. What is the usual suspect in these cases?

It's not just xl which suffers the OOM killer.  The usual suspect here is
not having sufficient RAM.  Try upping dom0's allocation.
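A sketch of one way to raise it, assuming GRUB2 manages the Xen boot entry (on an ARM board started from U-Boot, append the same options to Xen's boot arguments in the boot script instead; the 1024M figure is purely illustrative):

```shell
# /etc/default/grub -- illustrative values, adjust for your platform.
# dom0_mem caps and pre-allocates dom0's memory; the log levels match
# the earlier suggestion in this thread.
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1024M loglvl=all guest_loglvl=all"
# Then regenerate the boot config and reboot:
#   update-grub && reboot
```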

~Andrew


* Re: How to dump vcpu regs when a domain is killed during xl create
  2014-08-19 10:54           ` Andrew Cooper
@ 2014-08-19 11:39             ` manish jaggi
  0 siblings, 0 replies; 7+ messages in thread
From: manish jaggi @ 2014-08-19 11:39 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-users, manish.jaggi, Vijay Kilari, xen-devel

On 19 August 2014 16:24, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 19/08/14 11:51, manish jaggi wrote:
>> On 19 August 2014 15:35, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> On 19/08/14 11:02, manish jaggi wrote:
>>>> On 19 August 2014 14:31, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>>>> On 19/08/14 06:11, manish jaggi wrote:
>>>>>> Adding the question on xen-devel
>>>>>>
>>>>>> On 18 August 2014 14:08, manish jaggi <manishjaggi.oss@gmail.com> wrote:
>>>>>>> I tried to start a domain using xl create, which showed some Xen
>>>>>>> logs, hung for a minute or so, and then displayed the message
>>>>>>> "Killed". Below is the log:
>>>>>>>
>>>>>>> linux:~ # xl create domU.cfg
>>>>>>> Parsing config from domU.cfg
>>>>>>> (XEN)  ....
>>>>>>> Killed
>>>>>>>
>>>>>>> Is there a way to know why it was killed, and to dump the core
>>>>>>> regs (vcpu regs) at the point it was killed?
>>>>> There is no guarantee the domain has successfully started.
>>>>>
>>>>> Put "loglvl=all guest_loglvl=all" on the Xen command line, reboot, and
>>>>> attach the results of
>>>>>
>>>>> xl -vvvv create domU.cfg
>>>>>
>>>>> and xl dmesg after the domain has failed in this way.
>>>>>
>>>>> ~Andrew
>>>> Below is the log
>>>> [full `xl -vvv create` output trimmed; identical to the log quoted
>>>> in the previous message]
>>>> Killed
>>> That is only half of the items I asked for, but this indicates that
>>> something in dom0 killed the domain builder while it was constructing
>>> the domain.
>>>
>>> Try consulting dom0's dmesg.
>>>
>>> ~Andrew
>> It appears that xl is killed by the OOM killer. Here is the log. I hope
>> it is a common problem. What is the usual suspect in these cases?
>
> It's not just xl which suffers the OOM killer.  The usual suspect here is
> not having sufficient RAM.  Try upping dom0's allocation.
>
> ~Andrew
Thanks, it worked. I increased the allocation and it got well past that
step. But one question: how do the xl tools run out of memory when
booting a domU? They are only issuing hypercalls, and they only need to
copy the kernel image, which is ~4 MB in our case.
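One plausible accounting, sketched with assumed numbers (the privcmd batch size and the double-counting of the image are illustrative guesses, not measured values): libxc mmap()s the kernel image into the xl process (the 8354 kB xc_dom_malloc_filemap line in the log), and guest frames are mapped into xl through privcmd while the image is copied in, so the builder's transient footprint is larger than a bare hypercall loop would suggest.

```shell
# Back-of-envelope footprint of the domain builder, in kB.
# Assumptions (illustrative only): a 1024-page privcmd mapping window,
# 4 kB pages, and the kernel image counted twice -- once mmap()ed from
# disk, once staged while being copied into the guest.
kernel_kb=8354        # xc_dom_malloc_filemap size from the log above
batch_pages=1024      # assumed privcmd batch size
echo $(( kernel_kb + kernel_kb + batch_pages * 4 ))   # -> 20804 kB
```

On a dom0 squeezed to a few tens of MB of free memory, even a footprint of that order can tip the OOM killer.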


end of thread, other threads:[~2014-08-19 11:39 UTC | newest]

Thread overview: 7+ messages
     [not found] <CAAiw7J=RNK7kF_POT56-8x73+xno-0Ys02VZBnoQc6uozYQn3g@mail.gmail.com>
2014-08-19  5:11 ` How to dump vcpu regs when a domain is killed during xl create manish jaggi
2014-08-19  9:01   ` Andrew Cooper
2014-08-19 10:02     ` manish jaggi
2014-08-19 10:05       ` Andrew Cooper
2014-08-19 10:51         ` manish jaggi
2014-08-19 10:54           ` Andrew Cooper
2014-08-19 11:39             ` manish jaggi
