From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paolo Bonzini
To: Kevin Wolf, Stefan Hajnoczi
Cc: qemu-devel@nongnu.org, famz@redhat.com, qemu-block@nongnu.org, stefanha@redhat.com
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH v4 8/8] linux-aio: share one LinuxAioState within an AioContext
Date: Tue, 10 May 2016 12:32:27 +0200
Message-ID: <5731B8BB.3020600@redhat.com>
In-Reply-To: <20160510094033.GI4921@noname.str.redhat.com>
References: <1460046816-102846-1-git-send-email-pbonzini@redhat.com> <1460046816-102846-9-git-send-email-pbonzini@redhat.com> <20160419090953.GA16312@stefanha-x1.localdomain> <5730BB70.8010600@redhat.com> <20160510093040.GB11408@stefanha-x1.localdomain> <20160510094033.GI4921@noname.str.redhat.com>

On 10/05/2016 11:40, Kevin Wolf wrote:
> > Regarding performance, I'm thinking about a guest with 8 disks (queue
> > depth 32). The worst case is when the guest submits 32 requests at once
> > but the Linux AIO event limit has already been reached. Then the disk
> > is starved until other disks' requests complete.
>
> Sounds like a valid concern.

Oh, so you're concerned about the non-dataplane case. My suspicion is
that with that many outstanding I/O requests you probably get bad
performance anyway unless you use dataplane. Also, aio=threads has had
a 64-request limit for years and we've never heard of problems with
stuck I/O. But I agree that it's one area to keep an eye on.

Paolo
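
P.S. For readers following along, Kevin's starvation scenario can be
sketched as a toy model. The constants and the `submit` helper below are
purely illustrative (they are not QEMU code, and the limit is a made-up
stand-in for the shared Linux AIO event limit):

```python
# Toy model: several virtual disks share one AIO context with a fixed
# in-flight event limit. Numbers are illustrative, not QEMU's constants.

MAX_EVENTS = 64          # hypothetical shared in-flight limit
NUM_DISKS = 8
QUEUE_DEPTH = 32

in_flight = 0
waiting = {d: 0 for d in range(NUM_DISKS)}

def submit(disk, n):
    """Submit n requests for a disk; overflow sits in a wait queue."""
    global in_flight
    accepted = min(n, MAX_EVENTS - in_flight)
    in_flight += accepted
    waiting[disk] += n - accepted
    return accepted

# Disks 0 and 1 fill the shared context first...
submit(0, QUEUE_DEPTH)
submit(1, QUEUE_DEPTH)

# ...so disk 7's burst of 32 requests is accepted 0 deep: it is starved
# until completions on the other disks free slots in the shared context.
got = submit(7, QUEUE_DEPTH)
print(got, waiting[7])   # prints: 0 32
```

With one LinuxAioState per BlockDriverState each disk had its own limit;
sharing one state per AioContext is what makes this cross-disk
starvation possible in the first place, which is the concern above.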