Message-ID: <5086742E.1030702@redhat.com>
Date: Tue, 23 Oct 2012 12:40:46 +0200
From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH 2/3] nbd: Explicitly disconnect and fail inflight I/O requests on error, then reconnect next I/O request.
To: nick@bytemark.co.uk
Cc: pbonzini@redhat.com, qemu-devel@nongnu.org

On 22.10.2012 13:09, nick@bytemark.co.uk wrote:
>
> The previous behaviour when an NBD connection broke was to fail just
> the broken I/O request and (sometimes) never unlock send_mutex. Now we
> explicitly call nbd_teardown_connection and fail all NBD requests in
> the "inflight" state - this allows werror/rerror settings to be
> applied to those requests all at once.
>
> When a new request (or a request that was pending, but not yet
> inflight) is issued against the NBD driver, if we're not connected to
> the NBD server, we attempt to connect (and fail the new request
> immediately if that doesn't work).

Doesn't this block the vcpu while qemu is trying to establish a new
connection? When the network is down, I think this can take quite a
while. I would actually expect that this is one of the cases where
qemu as a whole would seem to hang.

Kevin
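
PS: To make the concern concrete, here is a minimal sketch of the
reconnect-in-the-request-path pattern the patch describes. All names
(nbd_conn, nbd_reconnect, nbd_issue_request) are made up for
illustration; this is not the actual patch code or QEMU's nbd API:

    #include <errno.h>
    #include <stdbool.h>

    struct nbd_conn {
        bool connected;
    };

    /* Stand-in for socket(2)/connect(2).  With the network down, a
     * blocking connect() does not fail fast; it stalls until the TCP
     * handshake times out. */
    static int nbd_reconnect(struct nbd_conn *c)
    {
        /* ... socket(); connect(); ... */
        c->connected = true;
        return 0;
    }

    static int nbd_issue_request(struct nbd_conn *c)
    {
        if (!c->connected) {
            /* This runs in the request path, i.e. while the guest is
             * waiting for its I/O, so everything behind this request
             * stalls while connect() is timing out. */
            if (nbd_reconnect(c) < 0) {
                return -EIO;    /* fail the new request immediately */
            }
        }
        /* ... send the request over the (re)established socket ... */
        return 0;
    }

With Linux defaults the unanswered handshake only gives up after the
SYN retransmissions are exhausted, which can be on the order of
minutes, and the guest sees the VM hang for that whole time.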