Message-ID: <48F4CAB8.2090404@redhat.com>
Date: Tue, 14 Oct 2008 18:37:12 +0200
From: Avi Kivity
Subject: Re: [Qemu-devel] Re: [RFC] Disk integrity in QEMU
References: <48EE38B9.2050106@codemonkey.ws>
 <20081013170610.GF21410@us.ibm.com>
 <6A99DBA5-D422-447D-BF9D-019FB394E6C6@lvivier.info>
 <20081013194328.GJ21410@us.ibm.com>
In-Reply-To: <20081013194328.GJ21410@us.ibm.com>
To: qemu-devel@nongnu.org
Cc: Chris Wright, Mark McLoughlin, kvm-devel, Laurent Vivier,
 Ryan Harper, Laurent Vivier

Ryan Harper wrote:
> fio --name=guestrun --filename=/dev/vda --rw=write --bs=${SIZE}
> --ioengine=libaio --direct=1 --norandommap --numjobs=1 --group_reporting
> --thread --size=1g --write_lat_log --write_bw_log --iodepth=74
>

How large is /dev/vda?  Also, I think you're doing sequential access,
which means successive runs will improve as data is brought into the
cache.  I suggest random access over a very large /dev/vda.

>> OK, but in this case the size of the cache for "cache=off" is the
>> size of the guest cache, whereas in the other cases it is the size
>> of the guest cache plus the size of the host cache; this is not
>> fair...
>>
>
> It isn't supposed to be fair: cache=off is O_DIRECT, so we're reading
> from the device.  We *want* to be able to lean on the host cache to
> read the data, pay once, and benefit in other guests if possible.
>

My assumption is that the memory would be better utilized in the guest
(which makes better eviction choices, and which is a lot closer to the
application).  We'd need to run fio in non-direct mode to show this.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick
to panic.
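
For concreteness, a random-access run along the lines suggested above
could keep Ryan's parameters and only switch the access pattern (the
randwrite mode shown here is illustrative and not taken from the
thread; ${SIZE} is left as the same placeholder used in the original
command):

  fio --name=guestrun --filename=/dev/vda --rw=randwrite --bs=${SIZE} \
      --ioengine=libaio --direct=1 --norandommap --numjobs=1 \
      --group_reporting --thread --size=1g --write_lat_log \
      --write_bw_log --iodepth=74

The non-direct comparison Avi mentions would use the same line with
--direct=0, so that I/O goes through the guest page cache rather than
bypassing it with O_DIRECT.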