Subject: Re: [Qemu-devel] 4k seq read splitting for virtio-blk - possible workarounds?
From: Paolo Bonzini
Date: Mon, 26 Oct 2015 17:37:12 +0100
Message-ID: <562E56B8.2030109@redhat.com>
References: <562E48B9.6090600@redhat.com>
To: Andrey Korolyov
Cc: Sergey Fionov, Jens Axboe, Jeff Moyer, Peter Lieven, "qemu-devel@nongnu.org"

On 26/10/2015 17:31, Andrey Korolyov wrote:
>> the virtio block device is always splitting a single read
>> range request to 4k ones, bringing the overall performance of the
>> sequential reads far below virtio-scsi.
>>
>> How does the blktrace look like in the guest?
>
> Yep, thanks for suggestion. It looks now like a pure driver issue:
>
> Reads Queued:      11008,    44032KiB   Writes Queued:          0,        0KiB
> Read Dispatches:   11008,    44032KiB   Write Dispatches:       0,        0KiB
>
> vs
>
> Reads Queued:     185728,   742912KiB   Writes Queued:          0,        0KiB
> Read Dispatches:    2902,   742912KiB   Write Dispatches:       0,        0KiB
>
> Because guest virtio-blk driver lacks *any* blk scheduler management,
> this is kinda logical. Requests for scsi backend are dispatched in
                                                       ^^^^^^^^^^ queued you mean?
> single block-sized chunks as well, but they are mostly merged by a
> scheduler before being passed to the device layer. Could be there any
> improvements over the situation except writing an underlay b/w virtio
> emulator backend and the real storage?

This is probably the fall-out of converting virtio-blk to use blk-mq,
which was premature to say the least.  Jeff Moyer was working on it, but
I'm not sure whether that work has been merged.

Andrey, what kernel are you using?

Paolo
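
As a sanity check on the blktrace summaries quoted above, here is a tiny
Python sketch (purely illustrative, not part of blktrace or any tool; the
numbers are copied from Andrey's output) that works out the average
dispatch size and the merge factor for each path:

    # Average request sizes derived from the blktrace summary lines above.
    def merge_stats(queued, queued_kib, dispatches):
        """Return (KiB per queued request, KiB per dispatch, requests merged per dispatch)."""
        return queued_kib / queued, queued_kib / dispatches, queued / dispatches

    # virtio-blk guest: 4 KiB queued, 4 KiB dispatched -- no merging at all.
    print(merge_stats(11008, 44032, 11008))     # -> (4.0, 4.0, 1.0)

    # virtio-scsi guest: ~64 adjacent 4 KiB reads merged into one ~256 KiB dispatch.
    print(merge_stats(185728, 742912, 2902))    # -> (4.0, 256.0, 64.0)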
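
On the kernel question, a quick guest-side peek at sysfs shows whether the
virtio disk is on blk-mq and whether any I/O scheduler is attached. This is
only a rough sketch under the assumption that the disk shows up as vda; a
legacy request queue reports something like "noop deadline [cfq]", while a
blk-mq queue from this era reports "none" and exposes an mq/ directory:

    # Minimal guest-side check: is virtio-blk on blk-mq, and is an I/O
    # scheduler attached?  The device name "vda" is an assumption.
    import os

    dev = "vda"

    with open("/sys/block/" + dev + "/queue/scheduler") as f:
        # Legacy queues print e.g. "noop deadline [cfq]"; blk-mq queues
        # of this era print just "none".
        print("scheduler:", f.read().strip())

    # The mq/ directory is only created for blk-mq request queues.
    print("blk-mq:", os.path.isdir("/sys/block/" + dev + "/mq"))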