* Help:Can xen restore several snapshots more faster at same time?
@ 2017-11-06 4:38 HUANG SHENGQIANG
2017-11-06 10:28 ` Wei Liu
0 siblings, 1 reply; 7+ messages in thread
From: HUANG SHENGQIANG @ 2017-11-06 4:38 UTC (permalink / raw)
To: xen-devel@lists.xenproject.org; +Cc: Wangjianbing
Dear XEN expert,
I find a blocker issue in my project. Would you please offer us some feedback?
The description from my development team:
We need to restore as many Xen snapshots as possible at the same time, but we found that the 'xl restore' command works serially: to restore a new snapshot, we have to wait for the previous one to finish. We tried to debug the xl source and found the following:
[cid:image001.png@01D356F6.B8EE87E0]
While one snapshot is being restored, we can see that the other process is blocked. We tried removing the acquire_lock call from the source code; after that, all the snapshots are restored at the same time, but each restore is still very slow: one snapshot takes about 25 seconds to finish (our environment: an E5-2620 CPU, 256 GB of memory, and an SSD; the snapshot is a Win7 guest with 2 GB of memory).
So, does Xen have a way to restore faster when several snapshots are restored at the same time? Why can KVM restore several snapshots quickly in parallel? (We tried the same experiment on KVM and could restore about 50+ snapshots in 20 s.)
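For concreteness, the parallel-restore attempt described above might look like the following sketch (the snapshot file names and count are assumptions, not the exact commands used in the thread):

```shell
# Launch all restores in the background so the invocations overlap
# instead of each one waiting for the previous xl process to exit.
for i in $(seq 1 40); do
    xl restore "/home/huawei/vm/snapshot_$i" &
done
wait    # block until every restore has finished
```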
Thank you very much; we look forward to your reply.
Benny Huang
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
* Re: Help:Can xen restore several snapshots more faster at same time?
2017-11-06 4:38 Help:Can xen restore several snapshots more faster at same time? HUANG SHENGQIANG
@ 2017-11-06 10:28 ` Wei Liu
2017-11-13 8:25 ` Reply: " Chenjia (C)
0 siblings, 1 reply; 7+ messages in thread
From: Wei Liu @ 2017-11-06 10:28 UTC (permalink / raw)
To: HUANG SHENGQIANG; +Cc: xen-devel@lists.xenproject.org, Wei Liu, Wangjianbing
On Mon, Nov 06, 2017 at 04:38:51AM +0000, HUANG SHENGQIANG wrote:
> Dear XEN expert,
>
> I find a blocker issue in my project. Would you please offer us some feedback?
>
> The description from my development team:
> We need to restore as many Xen snapshots as possible at the same time, but we found that the 'xl restore' command works serially: to restore a new snapshot, we have to wait for the previous one to finish. We tried to debug the xl source and found the following:
> [cid:image001.png@01D356F6.B8EE87E0]
>
Please don't send pictures.
> While one snapshot is being restored, we can see that the other process is blocked. We tried removing the acquire_lock call from the source code; after that, all the snapshots are restored at the same time, but each restore is still very slow: one snapshot takes about 25 seconds to finish (our environment: an E5-2620 CPU, 256 GB of memory, and an SSD; the snapshot is a Win7 guest with 2 GB of memory).
>
There is a lock in xl as you can see in the stack trace.
> So, does Xen have a way to restore faster when several snapshots are restored at the same time? Why can KVM restore several snapshots quickly in parallel? (We tried the same experiment on KVM and could restore about 50+ snapshots in 20 s.)
>
Part of the toolstack needs to be reworked. You can start by removing
the lock in xl to see what breaks.
* Reply: Re: Help:Can xen restore several snapshots more faster at same time?
2017-11-06 10:28 ` Wei Liu
@ 2017-11-13 8:25 ` Chenjia (C)
2017-11-13 9:50 ` Wei Liu
0 siblings, 1 reply; 7+ messages in thread
From: Chenjia (C) @ 2017-11-13 8:25 UTC (permalink / raw)
To: wei.liu2@citrix.com
Cc: HUANG SHENGQIANG, Yaoshaomin, xen-devel@lists.xenproject.org
Dear XEN expert:
Following up on the last question, we tried these steps:
1) We removed the acquire_lock in xl_cmdimpl.c; the bottleneck then became the I/O.
2) We moved all 40 Win7 snapshots to a ramdisk (40 GB total, 1 GB per snapshot). When we restore all 40 snapshots at the same time, the new bottleneck is the xenstored process: it runs with a single thread, and the CPU running that thread is always at 100%.
3) We started the xenstored process with the '-I' argument, but xenstored is still very busy.
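Step 2 above can be reproduced with something like the following (the mount point, size, and snapshot paths are assumptions):

```shell
# Put the snapshot images on a tmpfs ramdisk so restore reads
# come from memory instead of disk.
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=48G tmpfs /mnt/ramdisk
cp /home/huawei/vm/win7_x64/*.snapshot /mnt/ramdisk/
```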
So, we have two questions:
1. Is there a way to improve the xenstored process's performance?
2. We also tried the virsh tool to restore several Xen guests at the same time; virsh could restore 40+ snapshots (1 GB per snapshot) simultaneously, and the performance is good when all the snapshots are in the ramdisk. But we can't keep all the snapshots in the ramdisk permanently because they are too big. Is there a way to reduce the space these virsh snapshots take? (The snapshots are identical: same OS, same config, but each needs a different IP address.)
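One common way to shrink many identical disk images, sketched here as a hedged suggestion for qcow2 disks like those used above (the file names are assumptions), is a shared read-only base plus thin per-VM overlays:

```shell
# Each overlay records only the blocks that differ from the base,
# so 40 identical guests no longer need 40 full copies.
qemu-img create -f qcow2 -b /mnt/ramdisk/base_win7.qcow2 /mnt/ramdisk/vm01.qcow2
```

The per-VM IP difference can then live in each overlay, or be assigned by DHCP keyed on the guest's MAC address, leaving the base untouched.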
Would you please offer us some feedback? Thanks.
By the way, can we talk in Chinese if possible?
* Re: Reply: Re: Help:Can xen restore several snapshots more faster at same time?
2017-11-13 8:25 ` Reply: " Chenjia (C)
@ 2017-11-13 9:50 ` Wei Liu
2017-11-27 12:35 ` Reply: " Chenjia (C)
0 siblings, 1 reply; 7+ messages in thread
From: Wei Liu @ 2017-11-13 9:50 UTC (permalink / raw)
To: Chenjia (C)
Cc: HUANG SHENGQIANG, Yaoshaomin, wei.liu2@citrix.com,
xen-devel@lists.xenproject.org
Please avoid top-posting.
On Mon, Nov 13, 2017 at 08:25:16AM +0000, Chenjia (C) wrote:
> 1. Is there a way to improve the xenstored process's performance?
>
The latest versions of Cxenstored and Oxenstored have improved
transaction handling. Not sure which version you're using.
> 2. We also tried the virsh tool to restore several Xen guests at the
> same time; virsh could restore 40+ snapshots (1 GB per snapshot)
> simultaneously, and the performance is good when all the snapshots
> are in the ramdisk. But we can't keep all the snapshots in the
> ramdisk permanently because they are too big. Is there a way to
> reduce the space these virsh snapshots take? (The snapshots are
> identical: same OS, same config, but each needs a different IP
> address.)
For libvirt questions you need to ask on the libvirt list.
>
> Would you please offer us some feedback? Thanks.
> By the way, can we talk in Chinese if possible?
Sorry, but communication on the mailing list needs to be in English so
other people can join the discussion and/or provide suggestions.
* Reply: Reply: Re: Help:Can xen restore several snapshots more faster at same time?
2017-11-13 9:50 ` Wei Liu
@ 2017-11-27 12:35 ` Chenjia (C)
2017-11-27 12:39 ` Wei Liu
0 siblings, 1 reply; 7+ messages in thread
From: Chenjia (C) @ 2017-11-27 12:35 UTC (permalink / raw)
To: Wei Liu; +Cc: xen-devel@lists.xenproject.org
When we use Xen 4.8.7 (compiled from source and installed on SUSE 12) and create a VM, we get the following errors:
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: /etc/xen/scripts/vif-bridge online [8284] exited with error status 127
libxl: error: libxl_create.c:1461:domcreate_attach_devices: unable to add nic devices
Our config file is as follows (we ran "xl create -c cfg" to create the VM):
arch = 'x86_64'
#name = "sample_evn_win7_x64"
name = "aap_40"
memory = 1024
vcpus = 1
builder='hvm'
boot = "cd"
vif = [ "mac=18:C5:8A:15:C1:39,bridge=virbr0" ]
hap = 1
acpi = 1
altp2mhvm = 1
shadow_memory = 16
disk = [ 'tap:qcow2:/home/huawei/vm/win7_x64/win7_work40.qcow2,xvda,w' ]
vnc = 1
vnclisten="0.0.0.0"
usb=1
usbdevice = "tablet"
Could you please help us understand why this error happens and how to resolve it? Thank you; we look forward to your reply.
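As background, exit status 127 from a shell usually means "command not found". One hedged way to debug is to re-run the hotplug script by hand with tracing, using the XENBUS_* environment variables libxl sets (the values below are assumptions for illustration):

```shell
# Trace the script to see exactly which command cannot be found.
XENBUS_TYPE=vif XENBUS_PATH=backend/vif/10/0 XENBUS_BASE_PATH=backend \
    bash -x /etc/xen/scripts/vif-bridge online
```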
* Re: Reply: Reply: Re: Help:Can xen restore several snapshots more faster at same time?
2017-11-27 12:35 ` Reply: " Chenjia (C)
@ 2017-11-27 12:39 ` Wei Liu
2017-11-28 1:38 ` Reply: " Chenjia (C)
0 siblings, 1 reply; 7+ messages in thread
From: Wei Liu @ 2017-11-27 12:39 UTC (permalink / raw)
To: Chenjia (C); +Cc: xen-devel@lists.xenproject.org, Wei Liu
On Mon, Nov 27, 2017 at 12:35:26PM +0000, Chenjia (C) wrote:
> When we use Xen 4.8.7 (compiled from source and installed on SUSE 12) and create a VM, we get the following errors:
>
> libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: /etc/xen/scripts/vif-bridge online [8284] exited with error status 127
> libxl: error: libxl_create.c:1461:domcreate_attach_devices: unable to add nic devices
>
Use xl -vvv create to get more logs. Fundamentally the script failed,
but there is not enough information to tell why it failed.
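Concretely, the suggested invocation might look like this (the config file name is assumed from the earlier message):

```shell
# Capture full libxl debug output for the failing create.
xl -vvv create -c win7_40.cfg 2>&1 | tee xl-create.log
```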
* Reply: Reply: Reply: Re: Help:Can xen restore several snapshots more faster at same time?
2017-11-27 12:39 ` Wei Liu
@ 2017-11-28 1:38 ` Chenjia (C)
0 siblings, 0 replies; 7+ messages in thread
From: Chenjia (C) @ 2017-11-28 1:38 UTC (permalink / raw)
To: Wei Liu; +Cc: xen-devel@lists.xenproject.org
linux-lv5f:/home/huawei/vm/win7_x64 # Parsing config from win7_40.cfg
WARNING: you seem to be using "kernel" directive to override HVM guest firmware. Ignore that. Use "firmware_override" instead if you really want a non-default firmware
libxl: debug: libxl_create.c:1614:do_domain_create: ao 0xc43120: create: how=(nil) callback=(nil) poller=0xc395c0
libxl: debug: libxl_device.c:361:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=unknown
libxl: debug: libxl_device.c:324:disk_try_backend: Disk vdev=xvda, backend phy unsuitable due to format qcow2
libxl: debug: libxl_device.c:396:libxl__device_disk_set_backend: Disk vdev=xvda, using backend qdisk
libxl: debug: libxl_create.c:970:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:324:libxl__bootloader_run: not a PV domain, skipping bootloader
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0xc3a0d0: deregister unregistered
libxl: debug: libxl_numa.c:502:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=12, nr_vcpus=13, free_memkb=2014
libxl: debug: libxl_numa.c:502:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=12, nr_vcpus=1, free_memkb=43799
libxl: detail: libxl_dom.c:182:numa_place_domain: NUMA placement candidate with 1 nodes, 12 cpus and 43799 KB free selected
domainbuilder: detail: xc_dom_allocate: cmdline="(null)", features="(null)"
domainbuilder: detail: xc_dom_kernel_file: filename="/usr/lib/xen/boot/hvmloader"
domainbuilder: detail: xc_dom_malloc_filemap : 463 kB
libxl: debug: libxl_dom.c:884:libxl__load_hvm_firmware_module: Loading BIOS: /usr/lib/xen/boot/seabios.bin
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.8, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying HVM-generic loader ...
domainbuilder: detail: loader probe OK
xc: detail: ELF: phdr: paddr=0x100000 memsz=0x7bdc4
xc: detail: ELF: memory: 0x100000 -> 0x17bdc4
domainbuilder: detail: xc_dom_mem_init: mem 1016 MB, pages 0x3f800 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x3f800 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: xc_dom_malloc : 2032 kB
xc: detail: PHYSICAL MEMORY ALLOCATION:
xc: detail: 4KB PAGES: 0x0000000000000200
xc: detail: 2MB PAGES: 0x00000000000001fb
xc: detail: 1GB PAGES: 0x0000000000000000
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x100+0x7c at 0x7fd11a3e0000
domainbuilder: detail: xc_dom_alloc_segment: kernel : 0x100000 -> 0x17c000 (pfn 0x100 + 0x7c pages)
xc: detail: ELF: phdr 0 at 0x7fd11a364000 -> 0x7fd11a3d6220
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x17c+0x40 at 0x7fd11a3a0000
domainbuilder: detail: xc_dom_alloc_segment: System Firmware module : 0x17c000 -> 0x1bc000 (pfn 0x17c + 0x40 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x1bc+0x1 at 0x7fd11a535000
domainbuilder: detail: xc_dom_alloc_segment: HVM start info : 0x1bc000 -> 0x1bd000 (pfn 0x1bc + 0x1 pages)
domainbuilder: detail: alloc_pgtables_hvm: doing nothing
domainbuilder: detail: xc_dom_build_image : virt_alloc_end : 0x1bd000
domainbuilder: detail: xc_dom_build_image : virt_pgtab_end : 0x0
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: bootearly: doing nothing
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
domainbuilder: detail: clear_page: pfn 0xfefff, mfn 0xfefff
domainbuilder: detail: clear_page: pfn 0xfeffc, mfn 0xfeffc
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail: allocated
domainbuilder: detail: malloc : 2039 kB
domainbuilder: detail: anon mmap : 0 bytes
domainbuilder: detail: mapped
domainbuilder: detail: file mmap : 463 kB
domainbuilder: detail: domU mmap : 756 kB
domainbuilder: detail: vcpu_hvm: called
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0xff000
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0xff001
domainbuilder: detail: xc_dom_release: called
libxl: debug: libxl_device.c:361:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=qdisk
libxl: debug: libxl_linux.c:221:libxl__get_hotplug_script_info: backend_kind 3, no need to execute scripts
libxl: debug: libxl_device.c:1143:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0xc3c100: deregister unregistered
libxl: debug: libxl.c:2899:libxl__device_disk_find_local_path: Directly accessing local QDISK target /home/huawei/vm/win7_x64/win7_work40.qcow2
libxl: debug: libxl_dm.c:1500:libxl__build_device_model_args_new: Could not find user xen-qemuuser-shared, starting QEMU as root
libxl: debug: libxl_dm.c:2094:libxl__spawn_local_dm: Spawning device-model /usr/lib/xen/bin/qemu-system-i386 with arguments:
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: /usr/lib/xen/bin/qemu-system-i386
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -xen-domid
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: 10
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -chardev
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-10,server,nowait
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -no-shutdown
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -mon
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -chardev
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-10,server,nowait
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -mon
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: chardev=libxenstat-cmd,mode=control
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -nodefaults
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -no-user-config
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -name
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: aap_40
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -vnc
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: 0.0.0.0:0,to=99
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -display
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: none
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -device
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: cirrus-vga,vgamem_mb=8
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -boot
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: order=cd
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -usb
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -usbdevice
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: tablet
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -device
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: rtl8139,id=nic0,netdev=net0,mac=18:c5:8a:15:c1:39
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -netdev
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: type=tap,id=net0,ifname=vif10.0-emu,script=no,downscript=no
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -machine
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: xenfv
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -m
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: 1016
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: -drive
libxl: debug: libxl_dm.c:2096:libxl__spawn_local_dm: file=/home/huawei/vm/win7_x64/win7_work40.qcow2,if=ide,index=0,media=disk,format=qcow2,cache=writeback
libxl: debug: libxl_dm.c:2098:libxl__spawn_local_dm: Spawning device-model /usr/lib/xen/bin/qemu-system-i386 with additional environment:
libxl: debug: libxl_dm.c:2100:libxl__spawn_local_dm: XEN_QEMU_CONSOLE_LIMIT=1048576
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0xc3a3c8 wpath=/local/domain/0/device-model/10/state token=3/0: register slotnum=3
libxl: debug: libxl_create.c:1640:do_domain_create: ao 0xc43120: inprogress: poller=0xc395c0, flags=i
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0xc3a3c8 wpath=/local/domain/0/device-model/10/state token=3/0: event epath=/local/domain/0/device-model/10/state
libxl: debug: libxl_exec.c:398:spawn_watch_event: domain 10 device model: spawn watch p=(null)
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0xc3a3c8 wpath=/local/domain/0/device-model/10/state token=3/0: event epath=/local/domain/0/device-model/10/state
libxl: debug: libxl_exec.c:398:spawn_watch_event: domain 10 device model: spawn watch p=running
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0xc3a3c8 wpath=/local/domain/0/device-model/10/state token=3/0: deregister slotnum=3
libxl: debug: libxl_exec.c:129:libxl_report_child_exitstatus: domain 10 device model (dying as expected) [11682] died due to fatal signal Killed
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0xc3a3c8: deregister unregistered
libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-10
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:556:qmp_send_prepare: next qmp command: '{
"execute": "qmp_capabilities",
"id": 1
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:556:qmp_send_prepare: next qmp command: '{
"execute": "query-chardev",
"id": 2
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:556:qmp_send_prepare: next qmp command: '{
"execute": "query-vnc",
"id": 3
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0xc41c40 wpath=/local/domain/0/backend/vif/10/0/state token=3/1: register slotnum=3
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0xc41c40 wpath=/local/domain/0/backend/vif/10/0/state token=3/1: event epath=/local/domain/0/backend/vif/10/0/state
libxl: debug: libxl_event.c:878:devstate_callback: backend /local/domain/0/backend/vif/10/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0xc41c40 wpath=/local/domain/0/backend/vif/10/0/state token=3/1: event epath=/local/domain/0/backend/vif/10/0/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vif/10/0/state wanted state 2 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0xc41c40 wpath=/local/domain/0/backend/vif/10/0/state token=3/1: deregister slotnum=3
libxl: debug: libxl_device.c:1059:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0xc41c40: deregister unregistered
libxl: debug: libxl_device.c:1157:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
libxl: debug: libxl_device.c:1158:device_hotplug: extra args:
libxl: debug: libxl_device.c:1164:device_hotplug: type_if=vif
libxl: debug: libxl_device.c:1166:device_hotplug: env:
libxl: debug: libxl_device.c:1173:device_hotplug: script: /etc/xen/scripts/vif-bridge
libxl: debug: libxl_device.c:1173:device_hotplug: XENBUS_TYPE: vif
libxl: debug: libxl_device.c:1173:device_hotplug: XENBUS_PATH: backend/vif/10/0
libxl: debug: libxl_device.c:1173:device_hotplug: XENBUS_BASE_PATH: backend
libxl: debug: libxl_device.c:1173:device_hotplug: netdev:
libxl: debug: libxl_device.c:1173:device_hotplug: INTERFACE: vif10.0-emu
libxl: debug: libxl_device.c:1173:device_hotplug: vif: vif10.0
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/vif-bridge online
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: /etc/xen/scripts/vif-bridge online [11694] exited with error status 127
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0xc41d40: deregister unregistered
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0xc41d40: deregister unregistered
libxl: error: libxl_create.c:1461:domcreate_attach_devices: unable to add nic devices
libxl: debug: libxl_dm.c:2306:kill_device_model: Device Model signaled
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0xc40960 wpath=/local/domain/0/backend/vif/10/0/state token=3/2: register slotnum=3
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0xc40960 wpath=/local/domain/0/backend/vif/10/0/state token=3/2: event epath=/local/domain/0/backend/vif/10/0/state
libxl: debug: libxl_event.c:878:devstate_callback: backend /local/domain/0/backend/vif/10/0/state wanted state 6 still waiting state 5
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0xc40960 wpath=/local/domain/0/backend/vif/10/0/state token=3/2: event epath=/local/domain/0/backend/vif/10/0/state
libxl: debug: libxl_event.c:874:devstate_callback: backend /local/domain/0/backend/vif/10/0/state wanted state 6 ok
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0xc40960 wpath=/local/domain/0/backend/vif/10/0/state token=3/2: deregister slotnum=3
libxl: debug: libxl_device.c:1059:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0xc40960: deregister unregistered
libxl: debug: libxl_device.c:1157:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge offline
libxl: debug: libxl_device.c:1158:device_hotplug: extra args:
libxl: debug: libxl_device.c:1164:device_hotplug: type_if=vif
libxl: debug: libxl_device.c:1166:device_hotplug: env:
libxl: debug: libxl_device.c:1173:device_hotplug: script: /etc/xen/scripts/vif-bridge
libxl: debug: libxl_device.c:1173:device_hotplug: XENBUS_TYPE: vif
libxl: debug: libxl_device.c:1173:device_hotplug: XENBUS_PATH: backend/vif/10/0
libxl: debug: libxl_device.c:1173:device_hotplug: XENBUS_BASE_PATH: backend
libxl: debug: libxl_device.c:1173:device_hotplug: netdev:
libxl: debug: libxl_device.c:1173:device_hotplug: INTERFACE: vif10.0-emu
libxl: debug: libxl_device.c:1173:device_hotplug: vif: vif10.0
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/vif-bridge offline
libxl: debug: libxl_event.c:542:watchfd_callback: watch epath=/local/domain/0/backend/vif/10/0/state token=3/2: empty slot
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0xc40a60: deregister unregistered
libxl: debug: libxl_device.c:1157:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge remove
libxl: debug: libxl_device.c:1158:device_hotplug: extra args:
libxl: debug: libxl_device.c:1164:device_hotplug: type_if=tap
libxl: debug: libxl_device.c:1166:device_hotplug: env:
libxl: debug: libxl_device.c:1173:device_hotplug: script: /etc/xen/scripts/vif-bridge
libxl: debug: libxl_device.c:1173:device_hotplug: XENBUS_TYPE: vif
libxl: debug: libxl_device.c:1173:device_hotplug: XENBUS_PATH: backend/vif/10/0
libxl: debug: libxl_device.c:1173:device_hotplug: XENBUS_BASE_PATH: backend
libxl: debug: libxl_device.c:1173:device_hotplug: netdev:
libxl: debug: libxl_device.c:1173:device_hotplug: INTERFACE: vif10.0-emu
libxl: debug: libxl_device.c:1173:device_hotplug: vif: vif10.0
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /etc/xen/scripts/vif-bridge remove
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0xc40a60: deregister unregistered
libxl: debug: libxl_linux.c:213:libxl__get_hotplug_script_info: num_exec 2, not running hotplug scripts
libxl: debug: libxl_device.c:1143:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0xc40a60: deregister unregistered
libxl: debug: libxl_linux.c:221:libxl__get_hotplug_script_info: backend_kind 3, no need to execute scripts
libxl: debug: libxl_device.c:1143:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0xc3e1a0: deregister unregistered
libxl: debug: libxl_linux.c:221:libxl__get_hotplug_script_info: backend_kind 6, no need to execute scripts
libxl: debug: libxl_device.c:1143:device_hotplug: No hotplug script to execute
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0xc406a0: deregister unregistered
libxl: debug: libxl.c:1712:devices_destroy_cb: forked pid 11834 for destroy of domain 10
libxl: debug: libxl_event.c:1869:libxl__ao_complete: ao 0xc43120: complete, rc=-3
libxl: debug: libxl_event.c:1838:libxl__ao__destroy: ao 0xc43120: destroy
libxl: debug: libxl.c:1445:libxl_domain_destroy: ao 0xc3b6a0: create: how=(nil) callback=(nil) poller=0xc395c0
libxl: error: libxl.c:1575:libxl__destroy_domid: non-existant domain 10
libxl: error: libxl.c:1534:domain_destroy_callback: unable to destroy guest with domid 10
libxl: error: libxl.c:1463:domain_destroy_cb: destruction of domain 10 failed
libxl: debug: libxl_event.c:1869:libxl__ao_complete: ao 0xc3b6a0: complete, rc=-21
libxl: debug: libxl.c:1454:libxl_domain_destroy: ao 0xc3b6a0: inprogress: poller=0xc395c0, flags=ic
libxl: debug: libxl_event.c:1838:libxl__ao__destroy: ao 0xc3b6a0: destroy
xencall:buffer: debug: total allocations:512 total releases:512
xencall:buffer: debug: current allocations:0 maximum allocations:3
xencall:buffer: debug: cache current size:3
xencall:buffer: debug: cache hits:490 misses:3 toobig:19
In /var/log/messages we see the following:
2017-11-28T18:05:12.505919+08:00 linux-lv5f root: /etc/xen/scripts/vif-bridge: online type_if=vif XENBUS_PATH=backend/vif/11/0
2017-11-28T18:05:12.590404+08:00 linux-lv5f root: /etc/xen/scripts/vif-bridge: offline type_if=vif XENBUS_PATH=backend/vif/11/0
2017-11-28T18:05:12.621806+08:00 linux-lv5f root: /etc/xen/scripts/vif-bridge: brctl delif virbr0 vif11.0 failed
2017-11-28T18:05:12.632249+08:00 linux-lv5f root: /etc/xen/scripts/vif-bridge: ifconfig vif11.0 down failed
2017-11-28T18:05:12.654086+08:00 linux-lv5f root: /etc/xen/scripts/vif-bridge: Successful vif-bridge offline for vif11.0, bridge virbr0.
2017-11-28T18:05:12.680121+08:00 linux-lv5f root: /etc/xen/scripts/vif-bridge: remove type_if=tap XENBUS_PATH=backend/vif/11/0
2017-11-28T18:05:12.727256+08:00 linux-lv5f root: /etc/xen/scripts/vif-bridge: Successful vif-bridge remove for vif11.0-emu, bridge virbr0.
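(A note on the "exited with error status 127" line above: POSIX shells reserve status 127 for "command not found", as distinct from a script that runs but fails with its own non-zero status. A minimal sketch of the distinction; the commented re-run of the hotplug script is illustrative only and assumes a Xen dom0, with XENBUS values copied from the libxl log above:)

```shell
# Status 127 means the shell could not find the command at all;
# an ordinary failure returns the script's own exit status instead.
sh -c 'this-command-does-not-exist-1234' 2>/dev/null
echo "status for missing command: $?"    # prints 127

sh -c 'exit 3'
echo "status for ordinary failure: $?"   # prints 3

# To find which binary vif-bridge cannot locate, the script could be
# traced by hand with the environment libxl passes (hypothetical line,
# do not run outside a Xen dom0):
#   env XENBUS_TYPE=vif XENBUS_PATH=backend/vif/10/0 \
#       sh -x /etc/xen/scripts/vif-bridge online
```

Tracing with `sh -x` typically reveals which tool (e.g. brctl or ip) is missing from the PATH the script runs with.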
Thank you; we look forward to your reply.
-----Original Message-----
From: Wei Liu [mailto:wei.liu2@citrix.com]
Sent: 27 November 2017 20:40
To: Chenjia (C) <chenjia09@huawei.com>
Cc: Wei Liu <wei.liu2@citrix.com>; xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] Help:Can xen restore several snapshots more faster at same time?
On Mon, Nov 27, 2017 at 12:35:26PM +0000, Chenjia (C) wrote:
> When we use Xen 4.8.7 (compiled from source and installed on SUSE 12) and create a VM, we get the following error:
>
> libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus:
> /etc/xen/scripts/vif-bridge online [8284] exited with error status 127
> libxl: error: libxl_create.c:1461:domcreate_attach_devices: unable to
> add nic devices
>
Use xl -vvv create to get more logs. Fundamentally the script failed, but there is not enough information to tell why it failed.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 7+ messages in thread
Thread overview: 7+ messages
2017-11-06 4:38 Help:Can xen restore several snapshots more faster at same time? HUANG SHENGQIANG
2017-11-06 10:28 ` Wei Liu
2017-11-13  8:25 ` Re: " Chenjia (C)
2017-11-13 9:50 ` Wei Liu
2017-11-27 12:35 ` Re: " Chenjia (C)
2017-11-27 12:39 ` Wei Liu
2017-11-28  1:38 ` Re: " Chenjia (C)