Subject: Re: [Qemu-devel] [PATCH 4/5] disk_deadlines: add control of requests time expiration
From: "Denis V. Lunev"
Date: Tue, 8 Sep 2015 14:27:27 +0300
Message-ID: <55EEC61F.9050301@openvz.org>
In-Reply-To: <20150908110624.GE4230@noname.redhat.com>
To: Kevin Wolf
Cc: Stefan Hajnoczi, qemu-devel@nongnu.org, Raushaniya Maksudova

On 09/08/2015 02:06 PM, Kevin Wolf wrote:
> Am 08.09.2015 um 10:00 hat Denis V. Lunev geschrieben:
>> From: Raushaniya Maksudova
>>
>> If the disk-deadlines option is enabled for a drive, the completion
>> time of that drive's requests is monitored. The method is as follows
>> (assume below that this option is enabled).
>>
>> Every drive has its own red-black tree for keeping its requests. The
>> expiration time of a request is the key, and the cookie (the id of
>> the request) is the corresponding node. Assume that every request has
>> 8 seconds to complete. If a request is not completed in time for some
>> reason (server crash or something else), the timer of this drive
>> fires and the callback requests that the Virtual Machine (VM) be
>> stopped.
>>
>> The VM remains stopped until all requests from the disk that caused
>> the VM to stop are completed. Furthermore, if there are other disks
>> whose requests are waiting to be completed, do not start the VM:
>> wait for the completion of all "late" requests from all disks.
>>
>> Signed-off-by: Raushaniya Maksudova
>> Signed-off-by: Denis V. Lunev
>> CC: Stefan Hajnoczi
>> CC: Kevin Wolf
>>
>> +    disk_deadlines->expired_tree = true;
>> +    need_vmstop = !atomic_fetch_inc(&num_requests_vmstopped);
>> +    pthread_mutex_unlock(&disk_deadlines->mtx_tree);
>> +
>> +    if (need_vmstop) {
>> +        qemu_system_vmstop_request_prepare();
>> +        qemu_system_vmstop_request(RUN_STATE_PAUSED);
>> +    }
>> +}
>
> What behaviour does this result in? If I understand correctly, this is
> an indirect call of do_vm_stop(), which involves a bdrv_drain_all(). In
> this case, qemu would completely block (including an unresponsive
> monitor) until the request can complete.
>
> Is this what you are seeing with this patch, or why doesn't the
> bdrv_drain_all() call cause such effects?
>
> Kevin

Interesting point. Yes, it flushes all requests and most likely hangs
inside, waiting for the requests to complete. Fortunately, this happens
after the switch to the paused state, so the guest does become paused;
that is why I missed this fact.

This could be considered a problem, but I have no good solution at the
moment and need to think about it a bit. Nice catch, though!

Den
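
For illustration, here is a minimal standalone sketch of the pause/resume
accounting the commit message describes. It is not the patch code: the names
DiskDeadlines, pending_requests, vm_pause_request() and vm_resume_request()
are hypothetical stand-ins for the real per-drive tree and the QEMU vmstop
machinery, and the red-black tree is reduced to a plain counter.

/*
 * Sketch only: all names are illustrative, not the actual patch/QEMU API.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static void vm_pause_request(void)  { /* placeholder for the vmstop request   */ }
static void vm_resume_request(void) { /* placeholder for restarting the guest */ }

typedef struct DiskDeadlines {
    pthread_mutex_t mtx_tree;   /* protects the per-drive state; initialised
                                 * with pthread_mutex_init() at drive setup   */
    int pending_requests;       /* stands in for the per-drive red-black tree */
    bool expired_tree;          /* a request on this drive missed its deadline */
} DiskDeadlines;

/* number of drives currently holding the VM paused */
static atomic_int num_requests_vmstopped;

/* timer callback: the oldest request of this drive exceeded its deadline */
static void deadline_expired(DiskDeadlines *dd)
{
    bool need_vmstop;

    pthread_mutex_lock(&dd->mtx_tree);
    dd->expired_tree = true;
    /* the first drive to expire is the one that actually pauses the VM */
    need_vmstop = (atomic_fetch_add(&num_requests_vmstopped, 1) == 0);
    pthread_mutex_unlock(&dd->mtx_tree);

    if (need_vmstop) {
        vm_pause_request();
    }
}

/* completion path: a late request on this drive has finally finished */
static void deadline_complete(DiskDeadlines *dd)
{
    bool need_vmstart = false;

    pthread_mutex_lock(&dd->mtx_tree);
    dd->pending_requests--;
    if (dd->expired_tree && dd->pending_requests == 0) {
        dd->expired_tree = false;
        /* resume only when the last overdue drive has drained completely */
        need_vmstart = (atomic_fetch_sub(&num_requests_vmstopped, 1) == 1);
    }
    pthread_mutex_unlock(&dd->mtx_tree);

    if (need_vmstart) {
        vm_resume_request();
    }
}

The sketch only shows the counting. The interaction Kevin points out lives
inside vm_pause_request(): the real qemu_system_vmstop_request() eventually
reaches do_vm_stop() and bdrv_drain_all(), which may itself block on the very
requests that are overdue.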