Date: Wed, 25 Nov 2015 11:08:21 +0100 (CET)
From: Alexandre DERUMIER
Message-ID: <1726113162.173817902.1448446101295.JavaMail.zimbra@oxygem.tv>
Subject: Re: [Qemu-devel] poor virtio-scsi performance (fio testing)
To: Vasiliy Tolstov
Cc: qemu-devel

Maybe you could try creating 2 disks in your VM, each with 1 dedicated iothread, then run fio on both disks at the same time and see whether performance improves.

But there may be some write overhead with lvmthin (because of copy-on-write) and with sheepdog. Have you tried with classic LVM or a raw file?

----- Original Message -----
From: "Vasiliy Tolstov"
To: "qemu-devel"
Sent: Thursday, 19 November 2015 09:16:22
Subject: [Qemu-devel] poor virtio-scsi performance (fio testing)

I'm testing virtio-scsi on various kernels (with and without scsi-mq)
with the deadline I/O scheduler (best performance). I'm testing with an
LVM thin volume and with sheepdog storage.
The data goes to an SSD that on the host system does about 30K iops.
When I test via fio:

[randrw]
blocksize=4k
filename=/dev/sdb
rw=randrw
direct=1
buffered=0
ioengine=libaio
iodepth=32
group_reporting
numjobs=10
runtime=600

I'm always stuck at 11K-12K iops with sheepdog or with lvm.
When I switch to virtio-blk and enable data-plane I get around 16K iops.
I tried to enable virtio-scsi-data-plane but may be missing something
(I get around 13K iops).
I use libvirt 1.2.16 and qemu 2.4.1.
What can I do to get near 20K-25K iops?
(the qemu testing drive has cache=none io=native)

--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
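For the two-disks-with-dedicated-iothreads experiment suggested above, a minimal libvirt domain XML sketch could look like the following. This uses virtio-blk (bus='virtio'), where the per-disk iothread attribute is supported in libvirt 1.2.16; the volume paths and iothread count are hypothetical placeholders, not values from the original thread.

```
<domain>
  ...
  <!-- allocate two iothreads for the guest -->
  <iothreads>2</iothreads>
  <devices>
    <!-- first data disk, pinned to iothread 1 -->
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' iothread='1'/>
      <source dev='/dev/vg0/vm-disk1'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    <!-- second data disk, pinned to iothread 2 -->
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' iothread='2'/>
      <source dev='/dev/vg0/vm-disk2'/>
      <target dev='vdc' bus='virtio'/>
    </disk>
  </devices>
</domain>
```

Running the same fio job against /dev/vdb and /dev/vdc simultaneously would then show whether a single iothread is the bottleneck.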
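Since virtio-scsi-data-plane only gave ~13K iops, it may be worth double-checking that the iothread is actually attached to the virtio-scsi controller. A bare QEMU command-line sketch of the relevant options (device IDs and the backing path are made up for illustration) would be:

```
qemu-system-x86_64 \
  -object iothread,id=iothread0 \
  -device virtio-scsi-pci,id=scsi0,iothread=iothread0 \
  -drive if=none,id=drive0,file=/dev/vg0/vm-disk1,format=raw,cache=none,aio=native \
  -device scsi-hd,drive=drive0,bus=scsi0.0
```

If libvirt does not emit the iothread= property on the virtio-scsi-pci device (visible in the generated command line in /var/log/libvirt/qemu/), the controller is still running in the main event loop, which could explain the small gain over plain virtio-scsi.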