Date: Mon, 28 Sep 2015 13:42:00 +0100
From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH 4/5] disk_deadlines: add control of requests time expiration
To: "Dr. David Alan Gilbert"
Cc: Kevin Wolf, Stefan Hajnoczi, qemu-devel@nongnu.org, Raushaniya Maksudova, "Denis V. Lunev"

On Fri, Sep 25, 2015 at 01:34:22PM +0100, Dr. David Alan Gilbert wrote:
> * Stefan Hajnoczi (stefanha@gmail.com) wrote:
> > On Tue, Sep 08, 2015 at 04:48:24PM +0200, Kevin Wolf wrote:
> > > Am 08.09.2015 um 16:23 hat Denis V. Lunev geschrieben:
> > > > On 09/08/2015 04:05 PM, Kevin Wolf wrote:
> > > > >Am 08.09.2015 um 13:27 hat Denis V. Lunev geschrieben:
> > > > >>Interesting point. Yes, it flushes all requests and most likely
> > > > >>hangs inside, waiting for requests to complete. But fortunately
> > > > >>this happens after the switch to the paused state, so the guest
> > > > >>becomes paused. That's why I missed this fact.
> > > > >>
> > > > >>This (could) be considered a problem, but I have no (good)
> > > > >>solution at the moment. I should think on it a bit.
> > > > >Let me suggest a radically different design. Note that I don't
> > > > >say this is necessarily how things should be done; I'm just
> > > > >trying to introduce some new ideas and broaden the discussion,
> > > > >so that we have a larger set of ideas from which we can pick the
> > > > >right solution(s).
> > > > >
> > > > >The core of my idea would be a new filter block driver 'timeout'
> > > > >that can be added on top of each BDS that could potentially
> > > > >fail, like a raw-posix BDS pointing to a file on NFS. This way
> > > > >most pieces of the solution are nicely modularised and don't
> > > > >touch the block layer core.
> > > > >
> > > > >During normal operation the driver would just pass requests
> > > > >through to the lower layer. When it detects a timeout, however,
> > > > >it completes the request it received with -ETIMEDOUT. It also
> > > > >completes any new request it receives with -ETIMEDOUT, without
> > > > >passing the request on, until the request that originally timed
> > > > >out returns. This is our safety measure against anyone seeing
> > > > >whether or how the timed-out request modified data.
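
A standalone sketch of the fail-fast semantics described above
(TimeoutFilter, lower_read and the timeout_* helpers are illustrative
names, not the real QEMU BlockDriver API):

    #include <errno.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct TimeoutFilter {
        bool tripped;  /* a request timed out and has not yet returned */
        /* the next layer in the stack, e.g. raw-posix on NFS */
        int (*lower_read)(void *opaque, void *buf, size_t len);
        void *opaque;
    } TimeoutFilter;

    /* The deadline of a pending request fired: complete it with
     * -ETIMEDOUT and trip the filter so that new requests fail
     * without touching the lower layer. */
    static void timeout_fire(TimeoutFilter *f)
    {
        f->tripped = true;
    }

    /* The request that originally timed out finally completed in the
     * lower layer: it is safe to pass requests through again. */
    static void timeout_settle(TimeoutFilter *f)
    {
        f->tripped = false;
    }

    static int timeout_read(TimeoutFilter *f, void *buf, size_t len)
    {
        if (f->tripped) {
            return -ETIMEDOUT;  /* fail fast, do not forward */
        }
        return f->lower_read(f->opaque, buf, len);
    }

The same guard would sit in front of writes and flushes.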
> > > > >We need to make sure that bdrv_drain() doesn't wait for this
> > > > >request. Possibly we need to introduce a .bdrv_drain callback
> > > > >that replaces the default handling, because
> > > > >bdrv_requests_pending() in the default handling considers
> > > > >bs->file, which would still have the timed-out request. We
> > > > >don't want that; bdrv_drain_all() should complete even though
> > > > >the request is still pending internally (externally, we
> > > > >returned -ETIMEDOUT, so we can consider it completed). This way
> > > > >the monitor stays responsive and background jobs can go on if
> > > > >they don't use the failing block device.
> > > > >
> > > > >And then we essentially reuse the rerror/werror mechanism that
> > > > >we already have to stop the VM. The device models would be
> > > > >extended to always stop the VM on -ETIMEDOUT, regardless of the
> > > > >error policy. In this state, the VM would even be migratable if
> > > > >you make sure that the pending request can't modify the image
> > > > >on the destination host any more.
> > > > >
> > > > >Do you think this could work, or did I miss something important?
> > > > >
> > > > >Kevin
> > > > Could I propose an even more radical solution, then?
> > > >
> > > > My original approach was based on the assumption that this code
> > > > should be maintainable out of tree. If the patch is merged, that
> > > > constraint can be dropped.
> > > >
> > > > Why not invent a 'terror' field in BdrvOptions and process
> > > > things in the core block layer, without a filter? The RB tree
> > > > entry would simply not be created if the policy is set to
> > > > 'ignore'.
> > >
> > > 'terror' might not be the most fortunate name... ;-)
> > >
> > > The reason why I would prefer a filter driver is that the code and
> > > the associated data structures stay cleanly modularised and we can
> > > keep the actual block layer core small and clean. The same is true
> > > for some other functions that I would rather move out of the core
> > > into filter drivers than add new cases for (e.g. I/O throttling,
> > > backup notifiers, etc.), but which are a bit harder to actually
> > > move because we already have old interfaces that we can't break
> > > (we'll probably do it anyway eventually, even if it needs a bit
> > > more compatibility code).
> > >
> > > However, it seems that you are mostly touching code that is
> > > maintained by Stefan, and Stefan used to be a bit more open to
> > > adding functionality to the core, so my opinion might not be the
> > > last word.
> >
> > I've been thinking more about the correctness of this feature:
> >
> > QEMU cannot cancel I/O because there is no Linux userspace API for
> > doing so. Linux AIO's io_cancel(2) syscall is a nop since file
> > systems don't implement a kiocb_cancel_fn. Sending a signal to a
> > task blocked in O_DIRECT preadv(2)/pwritev(2) doesn't work either
> > because the task is in uninterruptible sleep.
>
> There are things that work on some devices, but nothing generic.
> For NBD/iSCSI/(ceph?) you should be able to issue a shutdown(2) on
> the socket that connects to the server, and that should cause all
> existing I/O to fail quickly. Then you could do a drain and be done.
> This would be very useful for the fault-tolerant uses (e.g. Wen
> Congyang's block replication).
>
> There are even ways of killing hard NFS mounts; for example, adding
> an unreachable route to the NFS server (ip route add unreachable
> hostname) and then umount -f seems to cause I/O errors to tasks. (I
> can't find a way to do a remount to change the hard flag.) This isn't
> pretty, but it's a reasonable way of getting your host back to usable
> if one NFS server has died.
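
shutdown(2) does make blocked socket I/O return promptly. A
self-contained sketch of the effect (the socketpair stands in for the
NBD/iSCSI connection; illustrative code, not QEMU's):

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Stands in for a guest I/O request blocked on a dead server. */
    static void *reader(void *arg)
    {
        int fd = *(int *)arg;
        char buf[64];
        ssize_t n = recv(fd, buf, sizeof(buf), 0);  /* blocks: no data */
        printf("recv returned %zd after shutdown\n", n);  /* 0 = EOF */
        return NULL;
    }

    int main(void)
    {
        int sv[2];
        pthread_t t;

        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
        pthread_create(&t, NULL, reader, &sv[0]);
        sleep(1);                    /* let the reader block in recv() */
        shutdown(sv[0], SHUT_RDWR);  /* pending I/O fails immediately */
        pthread_join(t, NULL);
        close(sv[0]);
        close(sv[1]);
        return 0;
    }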

If you just throw away the socket, you don't know the state of the
disk: some requests may have been handled by the server and others
not. So I doubt these approaches work, because cleanly closing a
connection requires communication between client and server to
determine that the connection is closed and which pending requests
actually completed. The trade-off is that the client no longer has
DMA buffers that might get written to, but now you no longer know the
state of the disk!

Stefan