From: Atom2
Subject: Re: [Xen-users] substantial shutdown delay for PV guests with PCI passthrough
Date: Tue, 22 Apr 2014 14:02:46 +0200
Message-ID: <53565A66.7090208@web2web.at>
To: George Dunlap
Cc: Ian Campbell, Ian Jackson, "xen-devel@lists.xen.org", David Vrabel, xen-users@lists.xenproject.org, Boris Ostrovsky, Roger Pau Monne
List-Id: xen-devel@lists.xenproject.org

On 22.04.14 12:44, George Dunlap wrote:
> On Sat, Apr 19, 2014 at 7:59 PM, Atom2 wrote:
>> On 19.04.14 02:12, Konrad Rzeszutek Wilk wrote:
>>> I ran a PV guest with PCI passthrough this week and it had no trouble -
>>> I didn't see a delay of 10 seconds or so. But I did the shutdown from
>>> within the guest (poweroff).
>>
>> For me it makes no difference timewise whether I issue
>>    xl shutdown guest
>> from dom0 or
>>    shutdown -h now
>> from a connection (i.e. ssh, screen or the console) to the guest. The
>> main difference is that for the latter the delay is visible, whereas
>> for the former it is less obvious: 'xl shutdown guest', due to its
>> asynchronous nature, returns to dom0 immediately even while the guest
>> is still alive.
>>
>> One difference I have noticed, however, is that for a shutdown from
>> _within_ the guest (i.e. shutdown -h now) the state of the guest
>> remains 's' in 'xl list' from the time the "system halted" message
>> appears on screen until the prompt returns in dom0, whereas for a
>> shutdown from dom0 with 'xl shutdown guest' the state changes from 's'
>> to 'ps' for a number of seconds before the domain is finally gone.
>
> Does it look anything like this?
>
> marc.info/?i=
>
> (the log in question is /var/log/xen/xl-$DOMAINNAME.log)
>
>  -George

Not really (unless there's a specific command-line option required to get
your output) - at least to my eye the messages in my log file look rather
different:
_______________________________________________________________________
Waiting for domain voip (domid 3) to die [pid 2274]
Domain 3 has shut down, reason code 0 0x0
Action for shutdown reason code 0 is destroy
Domain 3 needs to be cleaned up: destroying the domain
libxl: error: libxl_pci.c:1250:do_pci_remove: xc_domain_irq_permission irq=17
libxl: error: libxl_device.c:1134:libxl__wait_for_backend_deprecated: Backend /local/domain/0/backend/pci/3/0 not ready (state 7)
libxl: error: libxl_device.c:1138:libxl__wait_for_backend_deprecated: FE /local/domain/3/device/pci/0 state 6
libxl: error: libxl_pci.c:1250:do_pci_remove: xc_domain_irq_permission irq=16
libxl: error: libxl_device.c:1134:libxl__wait_for_backend_deprecated: Backend /local/domain/0/backend/pci/3/0 not ready (state 7)
libxl: error: libxl_device.c:1138:libxl__wait_for_backend_deprecated: FE /local/domain/3/device/pci/0 state 6
libxl: error: libxl_pci.c:1250:do_pci_remove: xc_domain_irq_permission irq=23
libxl: error: libxl_device.c:1134:libxl__wait_for_backend_deprecated: Backend /local/domain/0/backend/pci/3/0 not ready (state 7)
libxl: error: libxl_device.c:1138:libxl__wait_for_backend_deprecated: FE /local/domain/3/device/pci/0 state 6
libxl: error: libxl_device.c:894:device_backend_callback: unable to remove device with path /local/domain/0/backend/pci/3/0
libxl: error: libxl.c:1452:devices_destroy_cb: libxl__devices_destroy failed for 3
Done. Exiting now
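
In case it helps with the analysis: if I read xen/include/public/io/xenbus.h
correctly, the state numbers in those messages are xenbus states, so the
backend's "state 7" would be XenbusStateReconfiguring and the frontend's
"state 6" XenbusStateClosed. While the teardown is stalled, the stuck nodes
can be inspected directly with the xenstore tools - a rough sketch using the
paths from my log above (the domid 3 obviously changes from boot to boot):

   # dump the whole PCI backend directory for domain 3
   xenstore-ls /local/domain/0/backend/pci/3/0

   # backend state (7 = Reconfiguring in my log)
   xenstore-read /local/domain/0/backend/pci/3/0/state

   # frontend state (6 = Closed in my log)
   xenstore-read /local/domain/3/device/pci/0/state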

Please note: some of my messages may be the result of Ian Jackson's (debug)
patches, which are still active in my environment.

Thanks Atom2
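
P.S. For anyone who wants to make the delay measurable from dom0 rather
than watching the guest console: 'xl shutdown' also takes -w/--wait, which
blocks until the domain has actually died. A rough sketch with my guest
name "voip" (the polling loop is just one simple way to watch the state
column change from 's' to 'ps'):

   # synchronous variant: returns only once the domain is gone
   time xl shutdown -w voip

   # asynchronous variant: returns immediately, so poll the state instead
   xl shutdown voip
   while xl list voip >/dev/null 2>&1; do
       # the 5th column of 'xl list' output is the state flags
       xl list voip | awk 'NR==2 {print $5}'
       sleep 1
   done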