From: Anthony Liguori
Date: Thu, 16 Oct 2008 08:43:45 -0500
Subject: Re: [Qemu-devel] Re: [RFC] Disk integrity in QEMU
To: qemu-devel@nongnu.org
Cc: Chris Wright, Mark McLoughlin, Ryan Harper

Laurent Vivier wrote:
> Hi,
>
> I've made a benchmark using a database:
> mysql and sysbench in OLTP mode.
>
> cache=off seems to be the best choice in this case...
>

It would be interesting for you to run the same workload under KVM.

> mysql database
> http://sysbench.sourceforge.net
>
> sysbench --test=oltp
>
> 200,000 requests on a 2,000,000-row table.
>
>                  | total time | per-request stat (ms) |
>                  | (seconds)  |  min  |  avg  |  max  |
> -----------------+------------+-------+-------+-------+
> baremetal        |   208.6237 |   2.5 |  16.7 | 942.6 |
> -----------------+------------+-------+-------+-------+
> cache=on         |   642.2962 |   2.5 |  51.4 | 326.9 |
> -----------------+------------+-------+-------+-------+
> cache=on,O_DSYNC |   646.6570 |   2.7 |  51.7 | 347.0 |
> -----------------+------------+-------+-------+-------+
> cache=off        |   635.4424 |   2.9 |  50.8 | 399.5 |
> -----------------+------------+-------+-------+-------+
>

Because you're talking about roughly 1/3 of native performance, you may be dominated by things like CPU overhead versus actual I/O throughput.

Regards,

Anthony Liguori

> Laurent
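
As background for the table above: in QEMU's block layer, cache=off opens the disk image with O_DIRECT, bypassing the host page cache entirely, while the cache=on,O_DSYNC row keeps the page cache but forces every write to reach the disk before completing. A minimal C sketch of how the three settings map onto open(2) flags on Linux follows; the open_image() helper, its mode numbering, and the disk.img path are made up for illustration, and only the open(2) flags themselves are real:

/* Sketch only, not QEMU source: how the cache= settings in the
 * benchmark table roughly map onto open(2) flags on Linux. */
#define _GNU_SOURCE             /* needed for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical helper; mode values chosen arbitrarily for the example. */
static int open_image(const char *path, int cache_mode)
{
    int flags = O_RDWR;

    switch (cache_mode) {
    case 0: /* cache=on: host page cache; writes complete before hitting disk */
        break;
    case 1: /* cache=on,O_DSYNC: page cache kept, but every write is synchronous */
        flags |= O_DSYNC;
        break;
    case 2: /* cache=off: O_DIRECT bypasses the host page cache */
        flags |= O_DIRECT;
        break;
    }
    return open(path, flags);
}

int main(void)
{
    int fd = open_image("disk.img", 2);   /* "disk.img" is a placeholder */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Note: with O_DIRECT, read()/write() buffers and offsets must be
     * suitably aligned (typically to the block/sector size). */
    close(fd);
    return 0;
}

The alignment note is the practical cost of cache=off: O_DIRECT trades page-cache copies for strict buffer alignment requirements, which is part of why its throughput can differ from the O_DSYNC approach even though both aim to keep guest writes from being silently absorbed by the host cache.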