* PV and HVM domains left as zombies with grants [was: Re: AW: Payed Xen Admin]
2016-11-29 13:34 ` IP-Projects - Support
@ 2016-11-29 18:16 ` Dario Faggioli
0 siblings, 0 replies; 6+ messages in thread
From: Dario Faggioli @ 2016-11-29 18:16 UTC (permalink / raw)
To: IP-Projects - Support
Cc: 'xen-devel@lists.xenproject.org', Wei Liu,
Roger Pau Monne
On Tue, 2016-11-29 at 13:34 +0000, IP-Projects - Support wrote:
> Hello,
>
> we see this, I think, when the VMs are stopped or restarted by
> customers (xl destroy vm and then recreating), or I can reproduce
> it when I stop them all by script with a for loop doing xl destroy $i.
>
Ok, that makes sense. What is happening to you is that some of the
domains, although dead, are still around as 'zombies', because they've
got outstanding pages/references/etc.
This is clearly visible in the output of the debug keys you provided.
Something similar has been discussed, e.g., here:
https://lists.xenproject.org/archives/html/xen-devel/2013-11/msg03413.html
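As a side note, spotting these zombies from Dom0 can be scripted. This is a hedged sketch (not something from the thread): a zombie shows up in `xl list` with the name "(null)" and a state containing 'd' (dying), as in the listings below; `find_zombies` is a name I made up.

```shell
# find_zombies reads `xl list` output on stdin and prints the IDs of
# zombie domains: name "(null)" and a state column containing 'd' (dying).
find_zombies() {
  awk '$1 == "(null)" && $5 ~ /d/ { print $2 }'
}

# On a real host you would run: xl list | find_zombies
# Demonstrated here on canned output:
find_zombies <<'EOF'
Name                ID   Mem VCPUs   State   Time(s)
Domain-0             0  2048     2  r-----     420.5
(null)              34     0     1  --p--d       2.3
EOF
# prints: 34
```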
> It happens with hvm and pvm
>
> testcase all vms started:
>
> root@v34:/var# xl list
> Name                 ID   Mem VCPUs      State   Time(s)
> Domain-0              0  2048     2     r-----     398.0
> vmanager2593         34   512     1     -b----       1.8
>
> root@v34:/var# /root/scripts/vps_stop.sh
> root@v34:/var# xl list
> Name                 ID   Mem VCPUs      State   Time(s)
> Domain-0              0  2048     2     r-----     420.5
> (null)               34     0     1     --p--d       2.3
>
Just for the sake of completeness, can we see what's in vps_stop.sh?
> root@v34:/var/log/xen# cat xl-vmanager2593.log
> Waiting for domain vmanager2593 (domid 34) to die [pid 23747]
> Domain 34 has been destroyed.
>
Ok, thanks. Not much indeed. One way to increase the amount of
information would be to start the domains with:
xl -vvv create /etc/xen/vmanager2593.cfg
This will add logs coming from xl and libxl, which may not be where the
problem really is, but I think it's worth a try. Be aware that this
will make your terminal/console/whatever very busy, if you start a lot
of VMs at the same time.
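If you start many VMs, one way to keep that noise off the terminal is to send each domain's verbose output to its own log. This is only a sketch; the `verbose_create` helper and the log path are my assumptions, not an existing xl convention. The function just prints the command line so you can inspect it; drop the leading `echo` to actually run it.

```shell
# Build the verbose create command for a config file, sending the xl/libxl
# debug output to a per-domain log instead of the terminal. This only
# prints the command line; remove "echo" to execute it for real.
verbose_create() {
  cfg=$1
  name=$(basename "$cfg" .cfg)
  echo xl -vvv create "$cfg" "2>&1" "| tee /var/log/xen/xl-create-$name.log"
}

verbose_create /etc/xen/vmanager2593.cfg
# prints: xl -vvv create /etc/xen/vmanager2593.cfg 2>&1 | tee /var/log/xen/xl-create-vmanager2593.log
```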
From the config you posted (and that I removed) I see it's a PV guest,
so I'm not asking for any device model logs, in this case.
> /var/log/xen/xen-hotplug.log does not log anything. Any hint why?
>
I've no idea, but I'm not even sure what kind of log that contains (I
guess stuff related to hotplug scripts).
So, here we are:
> (XEN) 'q' pressed -> dumping domain info (now=0x16B:4C7A5CC3)
> (XEN) General information for domain 34:
> (XEN) refcnt=1 dying=2 pause_count=2
> (XEN) nr_pages=122 xenheap_pages=0 shared_pages=0 paged_pages=0
> dirty_cpus={} max_pages=131328
>
As you see, there are outstanding pages. That's what is keeping the
domain around.
> (XEN) handle=2a991534-312f-465a-9dff-f9a9fb1baadd vm_assist=0000002d
> (XEN) Rangesets belonging to domain 34:
> (XEN) I/O Ports { }
> (XEN) log-dirty { }
> (XEN) Interrupts { }
> (XEN) I/O Memory { }
> (XEN) Memory pages belonging to domain 34:
> (XEN) DomPage 00000000005b9041: caf=00000001, taf=7400000000000001
> (XEN) DomPage 00000000005b9042: caf=00000001, taf=7400000000000001
> (XEN) DomPage 00000000005b9043: caf=00000001, taf=7400000000000001
> (XEN) DomPage 00000000005b9044: caf=00000001, taf=7400000000000001
> (XEN) DomPage 00000000005b9045: caf=00000001, taf=7400000000000001
> (XEN) DomPage 00000000005b9046: caf=00000001, taf=7400000000000001
> (XEN) DomPage 00000000005b9047: caf=00000001, taf=7400000000000001
> (XEN) DomPage 00000000005b9048: caf=00000001, taf=7400000000000001
> (XEN) DomPage 00000000005b9049: caf=00000001, taf=7400000000000001
> (XEN) DomPage 00000000005b904a: caf=00000001, taf=7400000000000001
> (XEN) DomPage 00000000005b904b: caf=00000001, taf=7400000000000001
> (XEN) DomPage 00000000005b904c: caf=00000001, taf=7400000000000001
> (XEN) DomPage 00000000005b904d: caf=00000001, taf=7400000000000001
> (XEN) DomPage 00000000005b904e: caf=00000001, taf=7400000000000001
> (XEN) DomPage 00000000005b904f: caf=00000001, taf=7400000000000001
> (XEN) DomPage 00000000005b9050: caf=00000001, taf=7400000000000001
> (XEN) NODE affinity for domain 34: [0]
> (XEN) VCPU information and callbacks for domain 34:
> (XEN) VCPU0: CPU4 [has=F] poll=0 upcall_pend=00 upcall_mask=01 dirty_cpus={}
> (XEN) cpu_hard_affinity={4-7} cpu_soft_affinity={0-7}
> (XEN) pause_count=0 pause_flags=0
> (XEN) No periodic timer
> (XEN) Notifying guest 0:0 (virq 1, port 5)
> (XEN) Notifying guest 0:1 (virq 1, port 12)
> (XEN) Notifying guest 34:0 (virq 1, port 0)
> (XEN) Shared frames 0 -- Saved frames 0
>
> (XEN) gnttab_usage_print_all [ key 'g' pressed
> (XEN) -------- active -------- -------- shared --------
> (XEN) [ref] localdom mfn pin localdom gmfn flags
> (XEN) grant-table for remote domain: 34 (v1)
> (XEN) [ 8] 0 0x5b8f05 0x00000001 0 0x5b8f05 0x19
> (XEN) [770] 0 0x5b90ba 0x00000001 0 0x5b90ba 0x19
> (XEN) [802] 0 0x5b90b9 0x00000001 0 0x5b90b9 0x19
> (XEN) [803] 0 0x5b90b8 0x00000001 0 0x5b90b8 0x19
> [snip]
>
And here are the grants!
I'm Cc-ing someone who knows more than me about grants... In the
meanwhile, can you state again what it is exactly that you are using,
such as:
- what Xen version?
- what Dom0 kernel version?
- about the DomU kernel version, I see this in the config file:
vmlinuz-4.8.10-xen, so it's Linux 4.8.10, is that right?
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: PV and HVM domains left as zombies with grants [was: Re: AW: Payed Xen Admin]
@ 2016-11-29 19:43 IP-Projects - Support
0 siblings, 0 replies; 6+ messages in thread
From: IP-Projects - Support @ 2016-11-29 19:43 UTC (permalink / raw)
To: 'Xen-devel'; +Cc: 'Dario Faggioli'
Another try:
root@v34:~# xl list
Name ID Mem VCPUs State Time(s)
Domain-0 0 2048 2 r----- 600.6
(null) 37 0 1 --p--d 6.6
(null) 41 0 1 --p--d 2.3
ID 37 was vmanager1866 (HVM)
root@v34:~# cat /var/log/xen/xl-vmanager1866.log
Waiting for domain vmanager1866 (domid 37) to die [pid 27967]
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x564ce2315560 wpath=@releaseDomain token=3/0: register slotnum=3
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x564ce23152a0 wpath=/local/domain/37/device/vbd/51744/eject token=2/1: register slotnum=2
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x564ce2315560 wpath=@releaseDomain token=3/0: event epath=@releaseDomain
libxl: debug: libxl.c:1176:domain_death_xswatch_callback: [evg=0x564ce2314a40:37] nentries=1 rc=1 37..37
libxl: debug: libxl.c:1187:domain_death_xswatch_callback: [evg=0x564ce2314a40:37] got=domaininfos[0] got->domain=37
libxl: debug: libxl.c:1213:domain_death_xswatch_callback: exists shutdown_reported=0 dominf.flags=ffff0202
libxl: debug: libxl.c:1180:domain_death_xswatch_callback: [evg=0] all reported
libxl: debug: libxl.c:1242:domain_death_xswatch_callback: domain death search done
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x564ce23152a0 wpath=/local/domain/37/device/vbd/51744/eject token=2/1: event epath=/local/domain/37/device/vbd/51744/eject
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x564ce2315560 wpath=@releaseDomain token=3/0: event epath=@releaseDomain
libxl: debug: libxl.c:1176:domain_death_xswatch_callback: [evg=0x564ce2314a40:37] nentries=1 rc=1 37..37
libxl: debug: libxl.c:1187:domain_death_xswatch_callback: [evg=0x564ce2314a40:37] got=domaininfos[0] got->domain=37
libxl: debug: libxl.c:1213:domain_death_xswatch_callback: exists shutdown_reported=0 dominf.flags=ffff0212
libxl: debug: libxl.c:1180:domain_death_xswatch_callback: [evg=0] all reported
libxl: debug: libxl.c:1242:domain_death_xswatch_callback: domain death search done
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x564ce2315560 wpath=@releaseDomain token=3/0: event epath=@releaseDomain
libxl: debug: libxl.c:1176:domain_death_xswatch_callback: [evg=0x564ce2314a40:37] nentries=1 rc=1 37..37
libxl: debug: libxl.c:1187:domain_death_xswatch_callback: [evg=0x564ce2314a40:37] got=domaininfos[0] got->domain=37
libxl: debug: libxl.c:1213:domain_death_xswatch_callback: exists shutdown_reported=0 dominf.flags=ffff0212
libxl: debug: libxl.c:1180:domain_death_xswatch_callback: [evg=0] all reported
libxl: debug: libxl.c:1242:domain_death_xswatch_callback: domain death search done
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x564ce23152a0 wpath=/local/domain/37/device/vbd/51744/eject token=2/1: event epath=/local/domain/37/device/vbd/51744/eject
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x564ce23152a0 wpath=/local/domain/37/device/vbd/51744/eject token=2/1: event epath=/local/domain/37/device/vbd/51744/eject
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x564ce23152a0 wpath=/local/domain/37/device/vbd/51744/eject token=2/1: event epath=/local/domain/37/device/vbd/51744/eject
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x564ce2315560 wpath=@releaseDomain token=3/0: event epath=@releaseDomain
libxl: debug: libxl.c:1176:domain_death_xswatch_callback: [evg=0x564ce2314a40:37] nentries=1 rc=1 37..37
libxl: debug: libxl.c:1187:domain_death_xswatch_callback: [evg=0x564ce2314a40:37] got=domaininfos[0] got->domain=37
libxl: debug: libxl.c:1213:domain_death_xswatch_callback: exists shutdown_reported=0 dominf.flags=ffff020b
libxl: debug: libxl.c:1133:domain_death_occurred: dying
libxl: debug: libxl.c:1180:domain_death_xswatch_callback: [evg=0] all reported
libxl: debug: libxl.c:1242:domain_death_xswatch_callback: domain death search done
Domain 37 has been destroyed.
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x564ce2315560 wpath=@releaseDomain token=3/0: deregister slotnum=3
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x564ce23152a0 wpath=/local/domain/37/device/vbd/51744/eject token=2/1: deregister slotnum=2
xencall:buffer: debug: total allocations:8 total releases:8
xencall:buffer: debug: current allocations:0 maximum allocations:2
xencall:buffer: debug: cache current size:2
xencall:buffer: debug: cache hits:6 misses:2 toobig:0
Debug-keys q for ID 37:
(XEN) General information for domain 37:
(XEN) refcnt=1 dying=2 pause_count=2
(XEN) nr_pages=1 xenheap_pages=0 shared_pages=0 paged_pages=0 dirty_cpus={} max_pages=524544
(XEN) handle=ac56f6e7-6e04-44fe-8c26-c3bbc320a30b vm_assist=00000000
(XEN) paging assistance: hap refcounts translate external
(XEN) Rangesets belonging to domain 37:
(XEN) I/O Ports { }
(XEN) log-dirty { }
(XEN) Interrupts { }
(XEN) I/O Memory { }
(XEN) Memory pages belonging to domain 37:
(XEN) DomPage 00000000006e6052: caf=00000001, taf=0000000000000000
(XEN) PoD entries=0 cachesize=0
(XEN) NODE affinity for domain 37: [0]
(XEN) VCPU information and callbacks for domain 37:
(XEN) VCPU0: CPU7 [has=F] poll=0 upcall_pend=01 upcall_mask=00 dirty_cpus={}
(XEN) cpu_hard_affinity={4-7} cpu_soft_affinity={0-7}
(XEN) pause_count=0 pause_flags=0
(XEN) paging assistance: hap, 4 levels
(XEN) No periodic timer
root@v34:/var/log/xen# cat qemu-dm-vmanager1866.log
char device redirected to /dev/pts/3 (label serial0)
xen be: vkbd-0: initialise() failed
xen be: vkbd-0: initialise() failed
xen be: vkbd-0: initialise() failed
qemu: terminating on signal 1 from pid 30417
ID 41 was vmanager2593 (PVM)
root@v34:~# cat /var/log/xen/xl-vmanager2593.log
Waiting for domain vmanager2593 (domid 41) to die [pid 29480]
libxl: debug: libxl_event.c:636:libxl__ev_xswatch_register: watch w=0x55670b4b1260 wpath=@releaseDomain token=3/0: register slotnum=3
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x55670b4b1260 wpath=@releaseDomain token=3/0: event epath=@releaseDomain
libxl: debug: libxl.c:1176:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] nentries=1 rc=1 41..41
libxl: debug: libxl.c:1187:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] got=domaininfos[0] got->domain=41
libxl: debug: libxl.c:1213:domain_death_xswatch_callback: exists shutdown_reported=0 dominf.flags=ffff0000
libxl: debug: libxl.c:1180:domain_death_xswatch_callback: [evg=0] all reported
libxl: debug: libxl.c:1242:domain_death_xswatch_callback: domain death search done
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x55670b4b1260 wpath=@releaseDomain token=3/0: event epath=@releaseDomain
libxl: debug: libxl.c:1176:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] nentries=1 rc=1 41..41
libxl: debug: libxl.c:1187:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] got=domaininfos[0] got->domain=41
libxl: debug: libxl.c:1213:domain_death_xswatch_callback: exists shutdown_reported=0 dominf.flags=ffff0010
libxl: debug: libxl.c:1180:domain_death_xswatch_callback: [evg=0] all reported
libxl: debug: libxl.c:1242:domain_death_xswatch_callback: domain death search done
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x55670b4b1260 wpath=@releaseDomain token=3/0: event epath=@releaseDomain
libxl: debug: libxl.c:1176:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] nentries=1 rc=1 41..41
libxl: debug: libxl.c:1187:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] got=domaininfos[0] got->domain=41
libxl: debug: libxl.c:1213:domain_death_xswatch_callback: exists shutdown_reported=0 dominf.flags=ffff0010
libxl: debug: libxl.c:1180:domain_death_xswatch_callback: [evg=0] all reported
libxl: debug: libxl.c:1242:domain_death_xswatch_callback: domain death search done
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x55670b4b1260 wpath=@releaseDomain token=3/0: event epath=@releaseDomain
libxl: debug: libxl.c:1176:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] nentries=1 rc=1 41..41
libxl: debug: libxl.c:1187:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] got=domaininfos[0] got->domain=41
libxl: debug: libxl.c:1213:domain_death_xswatch_callback: exists shutdown_reported=0 dominf.flags=ffff0010
libxl: debug: libxl.c:1180:domain_death_xswatch_callback: [evg=0] all reported
libxl: debug: libxl.c:1242:domain_death_xswatch_callback: domain death search done
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x55670b4b1260 wpath=@releaseDomain token=3/0: event epath=@releaseDomain
libxl: debug: libxl.c:1176:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] nentries=1 rc=1 41..41
libxl: debug: libxl.c:1187:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] got=domaininfos[0] got->domain=41
libxl: debug: libxl.c:1213:domain_death_xswatch_callback: exists shutdown_reported=0 dominf.flags=ffff0010
libxl: debug: libxl.c:1180:domain_death_xswatch_callback: [evg=0] all reported
libxl: debug: libxl.c:1242:domain_death_xswatch_callback: domain death search done
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x55670b4b1260 wpath=@releaseDomain token=3/0: event epath=@releaseDomain
libxl: debug: libxl.c:1176:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] nentries=1 rc=1 41..41
libxl: debug: libxl.c:1187:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] got=domaininfos[0] got->domain=41
libxl: debug: libxl.c:1213:domain_death_xswatch_callback: exists shutdown_reported=0 dominf.flags=ffff0010
libxl: debug: libxl.c:1180:domain_death_xswatch_callback: [evg=0] all reported
libxl: debug: libxl.c:1242:domain_death_xswatch_callback: domain death search done
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x55670b4b1260 wpath=@releaseDomain token=3/0: event epath=@releaseDomain
libxl: debug: libxl.c:1176:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] nentries=1 rc=1 41..41
libxl: debug: libxl.c:1187:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] got=domaininfos[0] got->domain=41
libxl: debug: libxl.c:1213:domain_death_xswatch_callback: exists shutdown_reported=0 dominf.flags=ffff0010
libxl: debug: libxl.c:1180:domain_death_xswatch_callback: [evg=0] all reported
libxl: debug: libxl.c:1242:domain_death_xswatch_callback: domain death search done
libxl: debug: libxl_event.c:573:watchfd_callback: watch w=0x55670b4b1260 wpath=@releaseDomain token=3/0: event epath=@releaseDomain
libxl: debug: libxl.c:1176:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] nentries=1 rc=1 41..41
libxl: debug: libxl.c:1187:domain_death_xswatch_callback: [evg=0x55670b4aaa40:41] got=domaininfos[0] got->domain=41
libxl: debug: libxl.c:1213:domain_death_xswatch_callback: exists shutdown_reported=0 dominf.flags=ffff0009
libxl: debug: libxl.c:1133:domain_death_occurred: dying
libxl: debug: libxl.c:1180:domain_death_xswatch_callback: [evg=0] all reported
libxl: debug: libxl.c:1242:domain_death_xswatch_callback: domain death search done
Domain 41 has been destroyed.
libxl: debug: libxl_event.c:673:libxl__ev_xswatch_deregister: watch w=0x55670b4b1260 wpath=@releaseDomain token=3/0: deregister slotnum=3
xencall:buffer: debug: total allocations:16 total releases:16
xencall:buffer: debug: current allocations:0 maximum allocations:2
xencall:buffer: debug: cache current size:2
xencall:buffer: debug: cache hits:14 misses:2 toobig:0
Debug-keys q for ID 41:
(XEN) General information for domain 41:
(XEN) refcnt=1 dying=2 pause_count=2
(XEN) nr_pages=147 xenheap_pages=0 shared_pages=0 paged_pages=0 dirty_cpus={} max_pages=131328
(XEN) handle=2af2c99b-f446-4646-bcda-ae77fdb2f2f7 vm_assist=0000002d
(XEN) Rangesets belonging to domain 41:
(XEN) I/O Ports { }
(XEN) log-dirty { }
(XEN) Interrupts { }
(XEN) I/O Memory { }
(XEN) Memory pages belonging to domain 41:
(XEN) DomPage 00000000004790fd: caf=00000001, taf=7400000000000001
(XEN) DomPage 00000000004790fe: caf=00000001, taf=7400000000000001
(XEN) DomPage 00000000004790ff: caf=00000001, taf=7400000000000001
(XEN) DomPage 0000000000479100: caf=00000001, taf=7400000000000001
(XEN) DomPage 0000000000479101: caf=00000001, taf=7400000000000001
(XEN) DomPage 0000000000479102: caf=00000001, taf=7400000000000001
(XEN) DomPage 0000000000479103: caf=00000001, taf=7400000000000001
(XEN) DomPage 0000000000479104: caf=00000001, taf=7400000000000001
(XEN) DomPage 0000000000479105: caf=00000001, taf=7400000000000001
(XEN) DomPage 0000000000479106: caf=00000001, taf=7400000000000001
(XEN) DomPage 0000000000479107: caf=00000001, taf=7400000000000001
(XEN) DomPage 0000000000479108: caf=00000001, taf=7400000000000001
(XEN) DomPage 0000000000479109: caf=00000001, taf=7400000000000001
(XEN) DomPage 000000000047910a: caf=00000001, taf=7400000000000001
(XEN) DomPage 000000000047910b: caf=00000001, taf=7400000000000001
(XEN) DomPage 000000000047910c: caf=00000001, taf=7400000000000001
(XEN) NODE affinity for domain 41: [0]
(XEN) VCPU information and callbacks for domain 41:
(XEN) VCPU0: CPU7 [has=F] poll=0 upcall_pend=00 upcall_mask=01 dirty_cpus={}
(XEN) cpu_hard_affinity={4-7} cpu_soft_affinity={0-7}
(XEN) pause_count=0 pause_flags=0
(XEN) No periodic timer
(XEN) Notifying guest 0:0 (virq 1, port 5)
(XEN) Notifying guest 0:1 (virq 1, port 12)
(XEN) Notifying guest 37:0 (virq 1, port 0)
(XEN) Notifying guest 41:0 (virq 1, port 0)
(XEN) Shared frames 0 -- Saved frames 0
Debug-keys g:
(XEN) gnttab_usage_print_all [ key 'g' pressed
(XEN) -------- active -------- -------- shared --------
(XEN) [ref] localdom mfn pin localdom gmfn flags
(XEN) grant-table for remote domain: 0 ... no active grant table entries
(XEN) -------- active -------- -------- shared --------
(XEN) [ref] localdom mfn pin localdom gmfn flags
(XEN) grant-table for remote domain: 37 (v1)
(XEN) [ 9] 0 0x6e6052 0x00000001 0 0x037052 0x19
(XEN) -------- active -------- -------- shared --------
(XEN) [ref] localdom mfn pin localdom gmfn flags
(XEN) grant-table for remote domain: 41 (v1)
(XEN) [ 8] 0 0x478eff 0x00000001 0 0x478eff 0x19
(XEN) [770] 0 0x47918f 0x00000001 0 0x47918f 0x19
(XEN) [802] 0 0x47918e 0x00000001 0 0x47918e 0x19
(XEN) [803] 0 0x47918d 0x00000001 0 0x47918d 0x19
(XEN) [804] 0 0x47918c 0x00000001 0 0x47918c 0x19
(XEN) [805] 0 0x47918b 0x00000001 0 0x47918b 0x19
(XEN) [806] 0 0x47918a 0x00000001 0 0x47918a 0x19
(XEN) [807] 0 0x479189 0x00000001 0 0x479189 0x19
(XEN) [808] 0 0x479188 0x00000001 0 0x479188 0x19
(XEN) [809] 0 0x479187 0x00000001 0 0x479187 0x19
(XEN) [810] 0 0x479186 0x00000001 0 0x479186 0x19
(XEN) [811] 0 0x479185 0x00000001 0 0x479185 0x19
(XEN) [812] 0 0x479184 0x00000001 0 0x479184 0x19
(XEN) [813] 0 0x479183 0x00000001 0 0x479183 0x19
(XEN) [814] 0 0x479182 0x00000001 0 0x479182 0x19
(XEN) [815] 0 0x479181 0x00000001 0 0x479181 0x19
(XEN) [816] 0 0x479180 0x00000001 0 0x479180 0x19
(XEN) [817] 0 0x47917f 0x00000001 0 0x47917f 0x19
(XEN) [818] 0 0x47917e 0x00000001 0 0x47917e 0x19
(XEN) [819] 0 0x47917d 0x00000001 0 0x47917d 0x19
(XEN) [820] 0 0x47917c 0x00000001 0 0x47917c 0x19
(XEN) [821] 0 0x47917b 0x00000001 0 0x47917b 0x19
(XEN) [822] 0 0x47917a 0x00000001 0 0x47917a 0x19
(XEN) [823] 0 0x479179 0x00000001 0 0x479179 0x19
(XEN) [824] 0 0x479178 0x00000001 0 0x479178 0x19
(XEN) [825] 0 0x479177 0x00000001 0 0x479177 0x19
(XEN) [826] 0 0x479176 0x00000001 0 0x479176 0x19
(XEN) [827] 0 0x479175 0x00000001 0 0x479175 0x19
(XEN) [828] 0 0x479174 0x00000001 0 0x479174 0x19
(XEN) [829] 0 0x479173 0x00000001 0 0x479173 0x19
(XEN) [830] 0 0x479172 0x00000001 0 0x479172 0x19
(XEN) [831] 0 0x479171 0x00000001 0 0x479171 0x19
(XEN) [832] 0 0x479170 0x00000001 0 0x479170 0x19
(XEN) [833] 0 0x47916f 0x00000001 0 0x47916f 0x19
(XEN) [834] 0 0x47916e 0x00000001 0 0x47916e 0x19
(XEN) [835] 0 0x47916d 0x00000001 0 0x47916d 0x19
(XEN) [836] 0 0x47916c 0x00000001 0 0x47916c 0x19
(XEN) [837] 0 0x47916b 0x00000001 0 0x47916b 0x19
(XEN) [838] 0 0x47916a 0x00000001 0 0x47916a 0x19
(XEN) [839] 0 0x479169 0x00000001 0 0x479169 0x19
(XEN) [840] 0 0x479168 0x00000001 0 0x479168 0x19
(XEN) [841] 0 0x479167 0x00000001 0 0x479167 0x19
(XEN) [842] 0 0x479166 0x00000001 0 0x479166 0x19
(XEN) [843] 0 0x479165 0x00000001 0 0x479165 0x19
(XEN) [844] 0 0x479164 0x00000001 0 0x479164 0x19
(XEN) [845] 0 0x479163 0x00000001 0 0x479163 0x19
(XEN) [846] 0 0x479162 0x00000001 0 0x479162 0x19
(XEN) [847] 0 0x479161 0x00000001 0 0x479161 0x19
(XEN) [848] 0 0x479160 0x00000001 0 0x479160 0x19
(XEN) [849] 0 0x47915f 0x00000001 0 0x47915f 0x19
(XEN) [850] 0 0x47915e 0x00000001 0 0x47915e 0x19
(XEN) [851] 0 0x47915d 0x00000001 0 0x47915d 0x19
(XEN) [852] 0 0x47915c 0x00000001 0 0x47915c 0x19
(XEN) [853] 0 0x47915b 0x00000001 0 0x47915b 0x19
(XEN) [854] 0 0x47915a 0x00000001 0 0x47915a 0x19
(XEN) [855] 0 0x479159 0x00000001 0 0x479159 0x19
(XEN) [856] 0 0x479158 0x00000001 0 0x479158 0x19
(XEN) [857] 0 0x479157 0x00000001 0 0x479157 0x19
(XEN) [858] 0 0x479156 0x00000001 0 0x479156 0x19
(XEN) [859] 0 0x479155 0x00000001 0 0x479155 0x19
(XEN) [860] 0 0x479154 0x00000001 0 0x479154 0x19
(XEN) [861] 0 0x479153 0x00000001 0 0x479153 0x19
(XEN) [862] 0 0x479152 0x00000001 0 0x479152 0x19
(XEN) [863] 0 0x479151 0x00000001 0 0x479151 0x19
(XEN) [864] 0 0x479150 0x00000001 0 0x479150 0x19
(XEN) [865] 0 0x47914f 0x00000001 0 0x47914f 0x19
(XEN) [866] 0 0x47914e 0x00000001 0 0x47914e 0x19
(XEN) [867] 0 0x47914d 0x00000001 0 0x47914d 0x19
(XEN) [868] 0 0x47914c 0x00000001 0 0x47914c 0x19
(XEN) [869] 0 0x47914b 0x00000001 0 0x47914b 0x19
(XEN) [870] 0 0x47914a 0x00000001 0 0x47914a 0x19
(XEN) [871] 0 0x479149 0x00000001 0 0x479149 0x19
(XEN) [872] 0 0x479148 0x00000001 0 0x479148 0x19
(XEN) [873] 0 0x479147 0x00000001 0 0x479147 0x19
(XEN) [874] 0 0x479146 0x00000001 0 0x479146 0x19
(XEN) [875] 0 0x479145 0x00000001 0 0x479145 0x19
(XEN) [876] 0 0x479144 0x00000001 0 0x479144 0x19
(XEN) [877] 0 0x479143 0x00000001 0 0x479143 0x19
(XEN) [878] 0 0x479142 0x00000001 0 0x479142 0x19
(XEN) [879] 0 0x479141 0x00000001 0 0x479141 0x19
(XEN) [880] 0 0x479140 0x00000001 0 0x479140 0x19
(XEN) [881] 0 0x47913f 0x00000001 0 0x47913f 0x19
(XEN) [882] 0 0x47913e 0x00000001 0 0x47913e 0x19
(XEN) [883] 0 0x47913d 0x00000001 0 0x47913d 0x19
(XEN) [884] 0 0x47913c 0x00000001 0 0x47913c 0x19
(XEN) [885] 0 0x47913b 0x00000001 0 0x47913b 0x19
(XEN) [886] 0 0x47913a 0x00000001 0 0x47913a 0x19
(XEN) [887] 0 0x479139 0x00000001 0 0x479139 0x19
(XEN) [888] 0 0x479138 0x00000001 0 0x479138 0x19
(XEN) [889] 0 0x479137 0x00000001 0 0x479137 0x19
(XEN) [890] 0 0x479136 0x00000001 0 0x479136 0x19
(XEN) [891] 0 0x479135 0x00000001 0 0x479135 0x19
(XEN) [892] 0 0x479134 0x00000001 0 0x479134 0x19
(XEN) [893] 0 0x479133 0x00000001 0 0x479133 0x19
(XEN) [894] 0 0x479132 0x00000001 0 0x479132 0x19
(XEN) [895] 0 0x479131 0x00000001 0 0x479131 0x19
(XEN) [896] 0 0x479130 0x00000001 0 0x479130 0x19
(XEN) [897] 0 0x47912f 0x00000001 0 0x47912f 0x19
(XEN) [898] 0 0x47912e 0x00000001 0 0x47912e 0x19
(XEN) [899] 0 0x47912d 0x00000001 0 0x47912d 0x19
(XEN) [900] 0 0x47912c 0x00000001 0 0x47912c 0x19
(XEN) [901] 0 0x47912b 0x00000001 0 0x47912b 0x19
(XEN) [902] 0 0x47912a 0x00000001 0 0x47912a 0x19
(XEN) [903] 0 0x479129 0x00000001 0 0x479129 0x19
(XEN) [904] 0 0x479128 0x00000001 0 0x479128 0x19
(XEN) [905] 0 0x479127 0x00000001 0 0x479127 0x19
(XEN) [906] 0 0x479126 0x00000001 0 0x479126 0x19
(XEN) [907] 0 0x479125 0x00000001 0 0x479125 0x19
(XEN) [908] 0 0x479124 0x00000001 0 0x479124 0x19
(XEN) [909] 0 0x479123 0x00000001 0 0x479123 0x19
(XEN) [910] 0 0x479122 0x00000001 0 0x479122 0x19
(XEN) [911] 0 0x479121 0x00000001 0 0x479121 0x19
(XEN) [912] 0 0x47911f 0x00000001 0 0x47911f 0x19
(XEN) [913] 0 0x47911e 0x00000001 0 0x47911e 0x19
(XEN) [914] 0 0x47911d 0x00000001 0 0x47911d 0x19
(XEN) [915] 0 0x47911c 0x00000001 0 0x47911c 0x19
(XEN) [916] 0 0x47911b 0x00000001 0 0x47911b 0x19
(XEN) [917] 0 0x47911a 0x00000001 0 0x47911a 0x19
(XEN) [918] 0 0x479119 0x00000001 0 0x479119 0x19
(XEN) [919] 0 0x479118 0x00000001 0 0x479118 0x19
(XEN) [920] 0 0x479117 0x00000001 0 0x479117 0x19
(XEN) [921] 0 0x479116 0x00000001 0 0x479116 0x19
(XEN) [922] 0 0x479115 0x00000001 0 0x479115 0x19
(XEN) [923] 0 0x479114 0x00000001 0 0x479114 0x19
(XEN) [924] 0 0x479113 0x00000001 0 0x479113 0x19
(XEN) [925] 0 0x479112 0x00000001 0 0x479112 0x19
(XEN) [926] 0 0x479111 0x00000001 0 0x479111 0x19
(XEN) [927] 0 0x479110 0x00000001 0 0x479110 0x19
(XEN) [928] 0 0x47910f 0x00000001 0 0x47910f 0x19
(XEN) [929] 0 0x47910e 0x00000001 0 0x47910e 0x19
(XEN) [930] 0 0x47910d 0x00000001 0 0x47910d 0x19
(XEN) [931] 0 0x47910c 0x00000001 0 0x47910c 0x19
(XEN) [932] 0 0x47910b 0x00000001 0 0x47910b 0x19
(XEN) [933] 0 0x47910a 0x00000001 0 0x47910a 0x19
(XEN) [934] 0 0x479109 0x00000001 0 0x479109 0x19
(XEN) [935] 0 0x479108 0x00000001 0 0x479108 0x19
(XEN) [936] 0 0x479107 0x00000001 0 0x479107 0x19
(XEN) [937] 0 0x479106 0x00000001 0 0x479106 0x19
(XEN) [938] 0 0x479105 0x00000001 0 0x479105 0x19
(XEN) [939] 0 0x479104 0x00000001 0 0x479104 0x19
(XEN) [940] 0 0x479103 0x00000001 0 0x479103 0x19
(XEN) [941] 0 0x479102 0x00000001 0 0x479102 0x19
(XEN) [942] 0 0x479101 0x00000001 0 0x479101 0x19
(XEN) [943] 0 0x479100 0x00000001 0 0x479100 0x19
(XEN) [944] 0 0x4790ff 0x00000001 0 0x4790ff 0x19
(XEN) [945] 0 0x4790fe 0x00000001 0 0x4790fe 0x19
(XEN) [946] 0 0x4790fd 0x00000001 0 0x4790fd 0x19
(XEN) gnttab_usage_print_all ] done
Kind regards
Thomas Toka
- Second Level Support -
IP-Projects GmbH & Co. KG
Am Vogelherd 14
D - 97295 Waldbrunn
Phone: 09306 - 76499-0
Fax: 09306 - 76499-15
E-mail: info@ip-projects.de
Managing Director: Michael Schinzel
Register court Würzburg: HRA 6798
General partner: IP-Projects Verwaltungs GmbH
* PV and HVM domains left as zombies with grants [was: Re: AW: Payed Xen Admin]
@ 2016-11-29 19:43 IP-Projects - Support
0 siblings, 0 replies; 6+ messages in thread
From: IP-Projects - Support @ 2016-11-29 19:43 UTC (permalink / raw)
To: Xen-devel
Sorry for the HTML mail again.. the next one will be plain text..
* PV and HVM domains left as zombies with grants [was: Re: AW: Payed Xen Admin]
@ 2016-11-29 20:01 Thomas Toka
2016-11-29 20:31 ` Juergen Schinker
0 siblings, 1 reply; 6+ messages in thread
From: Thomas Toka @ 2016-11-29 20:01 UTC (permalink / raw)
To: 'Xen-devel'
Hi,
vps_stop.sh does:
- it gets the vserver IDs from the DB and then stops them in a for loop:
for id in $vserver; do
  xl destroy vmanager"$id"
done
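A gentler variant of that loop might be worth trying; this is my sketch, not the original script. It asks each guest to shut down cleanly before destroying it, which gives the Dom0 backends a chance to tear down the vbd/vif devices and release their grant references. The `stop_all` name and the fallback logic are assumptions.

```shell
# stop_all takes vserver IDs (as in the loop above) and tries a clean
# shutdown before resorting to destroy. -w waits for completion; -F falls
# back to an ACPI power event for HVM guests without PV drivers.
stop_all() {
  for id in "$@"; do
    dom="vmanager$id"
    xl shutdown -w -F "$dom" || xl destroy "$dom"
  done
}
```

Destroying a domain while backend devices still hold grants is exactly the situation that leaves "(null)" zombies behind, so shutting down cleanly first may narrow the window, though it would not fix an underlying grant leak.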
xen version: it happened with 4.4.1 from Debian Jessie; then we upgraded the hypervisor to 4.8-rc from Debian Stretch. Symptoms are the same.
So now it's:
(d11) HVM Loader
(d11) Detected Xen v4.8.0-rc
Kernel is our build:
http://mirror.ip-projects.de/kernel/linux-image-4.8.10-xen_4810_amd64.deb (config file inside the .deb, as you know..)
And yes, it's Linux 4.8.10 from kernel.org. We maintain our own .deb packages for all the latest kernels.
I will do another test case tomorrow with VMs started with -vvv and will send the info on the zombied VMs..
* Re: PV and HVM domains left as zombies with grants [was: Re: AW: Payed Xen Admin]
2016-11-29 20:01 PV and HVM domains left as zombies with grants [was: Re: AW: Payed Xen Admin] Thomas Toka
@ 2016-11-29 20:31 ` Juergen Schinker
2016-11-30 7:07 ` Thomas Toka
0 siblings, 1 reply; 6+ messages in thread
From: Juergen Schinker @ 2016-11-29 20:31 UTC (permalink / raw)
To: Thomas Toka, xen-devel
>
>
> xenversion: It happened with 4.4.1 from Debian Jessie, then we upgraded the
> Hypervisor to 4.8-rc from Debian Stretch. Symptoms are the same.
>
> So now its
>
> (d11) HVM Loader
>
> (d11) Detected Xen v4.8.0-rc
>
>
Debian Stretch now has 4.8.0~rc5-1 and I think it's very stable....
>
> Kernel is our build:
> http://mirror.ip-projects.de/kernel/linux-image-4.8.10-xen_4810_amd64.deb
> (config file inside .deb as you know..)
>
> And yes, it's Linux 4.8.10 from kernel.org. We maintain our own .deb
> packages for all the latest kernels.
>
Why do you do this, especially (I assume) on production machines? I have done it myself and consider it very risky...
You don't need the bleeding edge kernel - and what is the advantage?
Compiling the latest kernel requires a lot of experience and knowledge of what the kernel hackers are doing, and
you need to follow the LKML mailing list, which is another beast, mightier than xen-devel.
Have you considered an Orchestration tool etc ?
How do you let customer create or destroy machines?
Do you use PVM ?
How do you plan to pay for XEN-Support - have you considered Bitcoin ?
Regards
Juergen
* Re: PV and HVM domains left as zombies with grants [was: Re: AW: Payed Xen Admin]
2016-11-29 20:31 ` Juergen Schinker
@ 2016-11-30 7:07 ` Thomas Toka
0 siblings, 0 replies; 6+ messages in thread
From: Thomas Toka @ 2016-11-30 7:07 UTC (permalink / raw)
To: 'Juergen Schinker', 'xen-devel'
ii xen-hypervisor-4.8-amd64 4.8.0~rc5-1 amd64 Xen Hypervisor on AMD64
xentop shows: Xen 4.8.0-rc
We could not fix this with the stable packages, so we are now trying to catch it using the latest software and
the help of xen-devel. Symptoms are the same on stable and unstable.
I have been maintaining kernels myself since 2004, so I am quite deep into this. Our advantage is having a custom
kernel that supports the latest hardware. We have hundreds of older hardware combinations and always also the
newest boxes, which need to be installable.
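A minimal sketch of such a kernel packaging step (the version, the function name, and the "-xen" suffix here are illustrative assumptions, not the actual build recipe):

```shell
#!/bin/sh
# Hypothetical sketch: package a vanilla kernel source tree as Debian .debs.
# Assumes an existing .config in the tree; version and suffix are assumptions.
build_kernel_deb() {
    version="$1"                       # e.g. 4.8.10
    cd "linux-$version" || return 1
    make olddefconfig                  # carry over the .config, defaults for new options
    make -j"$(nproc)" bindeb-pkg LOCALVERSION=-xen
}
```

The `bindeb-pkg` target (available since Linux 4.3) produces the linux-image .deb without a source package, which fits the "one .deb per kernel" workflow described above.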
We use our own interface to let customers configure/install/start/restart their servers.
The interface does what you would do in a shell to start/stop servers (create/destroy).
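Such a backend essentially wraps the usual xl commands; a minimal sketch, assuming a conventional /etc/xen config layout (the function names and paths are hypothetical, not the actual interface code):

```shell
#!/bin/sh
# Hypothetical wrapper around xl, as a control-panel backend might invoke it.
# The /etc/xen/<name>.cfg layout is an assumption.
start_vm() {
    cfg="/etc/xen/$1.cfg"
    [ -f "$cfg" ] || { echo "no config for $1" >&2; return 1; }
    xl create "$cfg"
}
stop_vm() {
    # "xl destroy" kills the domain immediately; a gentler variant would try
    # "xl shutdown" first and only fall back to destroy on timeout.
    xl destroy "$1"
}
```

The immediate `xl destroy` path is exactly the operation discussed earlier in the thread as leaving zombie domains with outstanding grant references.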
Kind regards
Thomas Toka
- Second Level Support -
IP-Projects GmbH & Co. KG
Am Vogelherd 14
D - 97295 Waldbrunn
Telefon: 09306 - 76499-0
FAX: 09306 - 76499-15
Thread overview: 6+ messages
2016-11-29 20:01 PV and HVM domains left as zombies with grants [was: Re: AW: Payed Xen Admin] Thomas Toka
2016-11-29 20:31 ` Juergen Schinker
2016-11-30 7:07 ` Thomas Toka
-- strict thread matches above, loose matches on Subject: below --
2016-11-29 19:43 IP-Projects - Support
2016-11-29 19:43 IP-Projects - Support
2016-11-27 8:52 Payed Xen Admin Michael Schinzel
2016-11-29 12:08 ` Dario Faggioli
2016-11-29 13:34 ` IP-Projects - Support
2016-11-29 18:16 ` PV and HVM domains left as zombies with grants [was: Re: AW: Payed Xen Admin] Dario Faggioli