Subject: Re: [Qemu-devel] [PATCH v4] block/vdi: Use bdrv_flush after metadata updates
From: Stefan Weil
Date: Sat, 09 May 2015 08:39:33 +0200
Message-ID: <554DABA5.3020405@weilnetz.de>
To: phoeagon, Kevin Wolf, Max Reitz
Cc: qemu-devel@nongnu.org, qemu-block@nongnu.org

On 2015-05-09 05:59, phoeagon wrote:
> BTW, how do you usually measure the time it takes to install a Linux
> distro? Most distro ISOs do NOT have unattended installation in place.
> (True, I can bake my own ISOs for this...) But do you have any ISOs
> made ready for this purpose?
>
> On Sat, May 9, 2015 at 11:54 AM phoeagon <phoeagon@gmail.com> wrote:
>
> Thanks. Dbench does not logically allocate new disk space all the
> time, because it is an FS-level benchmark that creates files and
> deletes them. Therefore it also depends on the guest FS: a btrfs
> guest FS allocates about 1.8x the space that ext4 does, due to its
> COW nature. It does cause the FS to allocate some space during about
> 1/3 of the test duration, I think. But this does not change the
> picture much, because an FS often writes in strides rather than
> consecutively, which causes write amplification at allocation time.
>
> So I tested it with qemu-img convert from a 400M raw file:
>
> zheq-PC sdb # time ~/qemu-sync-test/bin/qemu-img convert -f raw -t unsafe -O vdi /run/shm/rand 1.vdi
>
> real    0m0.402s
> user    0m0.206s
> sys     0m0.202s
>
> zheq-PC sdb # time ~/qemu-sync-test/bin/qemu-img convert -f raw -t writeback -O vdi /run/shm/rand 1.vdi

I assume that the target file /run/shm/rand 1.vdi is not on a physical
disk; flushing data will then be fast. For real hard disks (not SSDs)
the situation is different: the r/w heads of the disk have to move
between the data location and the beginning of the written file, where
the metadata is written, so I expect a larger effect there.

For measuring the installation time of an OS, I'd take a reproducible
installation source (hard disk or DVD, no network connection) and take
the time for those parts of the installation where many packages are
installed without any user interaction. For Linux you won't need a
stopwatch, because the package directories in /usr/share/doc have nice
timestamps.

Stefan
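The comparison above can be reproduced as a small script; this is only a sketch, and the image path, size, and output locations under /tmp are placeholders, not the paths from the test above:

```shell
#!/bin/sh
# Create a throwaway raw image filled with random data (size is arbitrary).
dd if=/dev/urandom of=/tmp/rand.raw bs=1M count=64 2>/dev/null

# Time the same raw->vdi conversion once per cache mode: "unsafe" skips
# all flushes, while "writeback" lets the patched vdi driver issue
# bdrv_flush after metadata updates.
for mode in unsafe writeback; do
    echo "cache mode: $mode"
    time qemu-img convert -f raw -t "$mode" -O vdi /tmp/rand.raw /tmp/test.vdi
    rm -f /tmp/test.vdi
done
```

On tmpfs-backed paths the two modes should differ little, which is the point Stefan makes below; a rotational disk as the target should show a larger gap.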
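The /usr/share/doc timestamp trick can be scripted rather than read off by hand. A sketch, assuming GNU find (for `-printf`) and that each installed package left one directory there; the demo path is hypothetical:

```shell
#!/bin/sh
# The oldest and newest mtimes directly under /usr/share/doc bracket the
# bulk package-installation phase; their difference approximates how long
# that phase took, with no stopwatch needed.
docdir=${1:-/usr/share/doc}
first=$(find "$docdir" -maxdepth 1 -mindepth 1 -printf '%T@\n' | sort -n | head -1 | cut -d. -f1)
last=$(find "$docdir" -maxdepth 1 -mindepth 1 -printf '%T@\n' | sort -n | tail -1 | cut -d. -f1)
echo "bulk install window: $((last - first)) seconds"
```

This only measures the unattended package-copy phase, which is exactly the part Stefan suggests timing; interactive steps before and after it are not captured.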