From: Kevin Wolf
Date: Tue, 8 Sep 2015 13:06:24 +0200
Subject: Re: [Qemu-devel] [PATCH 4/5] disk_deadlines: add control of requests time expiration
To: "Denis V. Lunev"
Cc: Stefan Hajnoczi, qemu-devel@nongnu.org, Raushaniya Maksudova
Message-ID: <20150908110624.GE4230@noname.redhat.com>
In-Reply-To: <1441699228-25767-5-git-send-email-den@openvz.org>
References: <1441699228-25767-1-git-send-email-den@openvz.org> <1441699228-25767-5-git-send-email-den@openvz.org>

On 08.09.2015 at 10:00, Denis V. Lunev wrote:
> From: Raushaniya Maksudova
>
> If the disk-deadlines option is enabled for a drive, the completion time
> of that drive's requests is monitored. The method is as follows (from here
> on, assume that this option is enabled).
>
> Every drive has its own red-black tree for keeping track of its requests.
> The expiration time of a request is the key, and the cookie (the id of the
> request) is the corresponding node. Assume that every request has 8 seconds
> to complete. If a request is not completed in time for some reason (server
> crash or something else), the timer of this drive fires and the
> corresponding callback requests that the Virtual Machine (VM) be stopped.
>
> The VM remains stopped until all requests from the disk that caused the
> stop are completed. Furthermore, if there are other disks whose requests
> are still waiting to be completed, the VM is not started: it waits for the
> completion of all "late" requests from all disks.
>
> Signed-off-by: Raushaniya Maksudova
> Signed-off-by: Denis V. Lunev
> CC: Stefan Hajnoczi
> CC: Kevin Wolf

> +    disk_deadlines->expired_tree = true;
> +    need_vmstop = !atomic_fetch_inc(&num_requests_vmstopped);
> +    pthread_mutex_unlock(&disk_deadlines->mtx_tree);
> +
> +    if (need_vmstop) {
> +        qemu_system_vmstop_request_prepare();
> +        qemu_system_vmstop_request(RUN_STATE_PAUSED);
> +    }
> +}

What behaviour does this result in? If I understand correctly, this is an
indirect call of do_vm_stop(), which involves a bdrv_drain_all(). In that
case, qemu would block completely (including an unresponsive monitor) until
the request can complete.

Is this what you are seeing with this patch, or why doesn't the
bdrv_drain_all() call cause such effects?

Kevin
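
For concreteness, below is a minimal, self-contained sketch of the stop/resume
accounting described in the quoted commit message, written against plain
pthreads and C11 atomics rather than QEMU internals. It is not the patch code:
apart from the qemu_system_vmstop_request*() names quoted above, every
identifier is hypothetical, and vmstop_request()/vmresume() are stubs standing
in for the actual VM stop/resume requests.

/*
 * Minimal, self-contained sketch (NOT the patch code) of the stop/resume
 * accounting the commit message describes: each drive tracks whether it
 * has requests that missed their deadline (8 seconds in the description);
 * a global counter holds the number of such drives; the VM is stopped on
 * the 0 -> 1 transition and resumed only when the counter drops back to 0.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct DiskDeadlines {
    pthread_mutex_t mtx_tree;   /* protects this drive's request bookkeeping */
    bool expired_tree;          /* true while this drive has late requests   */
    unsigned int inflight;      /* requests currently tracked for this drive */
} DiskDeadlines;

/* Number of drives that currently have expired ("late") requests. */
static atomic_uint num_drives_expired;

/* Stubs standing in for qemu_system_vmstop_request_prepare() /
 * qemu_system_vmstop_request(RUN_STATE_PAUSED) and the resume path. */
static void vmstop_request(void) { puts("stop VM: a request missed its deadline"); }
static void vmresume(void)       { puts("resume VM: all late requests completed"); }

/* Per-drive timer callback: the earliest deadline in the tree has passed. */
static void deadline_expired(DiskDeadlines *dd)
{
    bool need_vmstop;

    pthread_mutex_lock(&dd->mtx_tree);
    dd->expired_tree = true;
    /* Request the stop only for the first drive that reports late requests. */
    need_vmstop = (atomic_fetch_add(&num_drives_expired, 1) == 0);
    pthread_mutex_unlock(&dd->mtx_tree);

    if (need_vmstop) {
        vmstop_request();
    }
}

/* Completion path for a tracked request. */
static void request_completed(DiskDeadlines *dd)
{
    bool need_vmstart = false;

    pthread_mutex_lock(&dd->mtx_tree);
    dd->inflight--;
    if (dd->expired_tree && dd->inflight == 0) {
        dd->expired_tree = false;
        /* Resume only once no drive has late requests left. */
        need_vmstart = (atomic_fetch_sub(&num_drives_expired, 1) == 1);
    }
    pthread_mutex_unlock(&dd->mtx_tree);

    if (need_vmstart) {
        vmresume();
    }
}

int main(void)
{
    DiskDeadlines disk = { PTHREAD_MUTEX_INITIALIZER, false, 1 };

    deadline_expired(&disk);    /* timer fired: the one in-flight request is late */
    request_completed(&disk);   /* it finally completes: the VM may run again     */
    return 0;
}

The detail the sketch tries to capture is the counter handover that Kevin's
question is about: the VM is only restarted when the last drive with expired
requests has drained its backlog, so a single hung disk keeps the whole
machine paused.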