From: phoeagon <phoeagon@gmail.com>
Date: Sat, 09 May 2015 07:41:10 +0000
Subject: Re: [Qemu-devel] [PATCH v4] block/vdi: Use bdrv_flush after metadata updates
To: Stefan Weil <sw@weilnetz.de>, Kevin Wolf, Max Reitz
Cc: qemu-devel@nongnu.org, qemu-block@nongnu.org

Full Linux Mint (17.1) installation with writeback:
With VDI extra sync: 4min35s
Vanilla: 3min17s

which is consistent with 'qemu-img convert' (slightly less overhead, since some phases of the installation are actually CPU-bound).
Still much faster than other "sync-after-metadata" formats like VPC (vanilla VPC: 7min43s).
The thing is, anyone who needs to set up a new Linux system every day probably has pre-installed images to start with, and everyone else just doesn't install an OS every day.
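For reference, a writeback install run of this kind might be launched as below (a minimal sketch; the image name, memory size, and ISO filename are placeholders, not the exact setup used):

  # create a fresh VDI target and boot the installer with writeback caching
  qemu-img create -f vdi mint-test.vdi 20G
  qemu-system-x86_64 -enable-kvm -m 2048 \
      -drive file=mint-test.vdi,format=vdi,cache=writeback \
      -cdrom linuxmint-17.1-cinnamon-64bit.iso -boot d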


On Sat, May 9, 2015 at 2:39 PM Stefan Weil <sw@weilnetz.de> wrote:
On 09.05.2015 at 05:59, phoeagon wrote:
BTW, how do you usually measure the time it takes to install a Linux distro? Most distro ISOs do NOT have unattended installation in place. (True, I could bake my own ISOs for this...) But do you have any ISOs made ready for this purpose?

On Sat, May 9, 2015 at 11:54 AM phoeagon <phoeagon@gmail.com> wrote:
Thanks. Dbench does not logically allocate new disk space all the time, because it's an FS-level benchmark that creates files and deletes them. Therefore it also depends on the guest FS: say, a btrfs guest FS allocates about 1.8x the space that ext4 does, due to its COW nature. It does cause the FS to allocate some space during about 1/3 of the test duration, I think. But this does not mitigate it much, because a FS often writes in strides rather than consecutively, which causes write amplification at allocation time.
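To make that concrete, an in-guest dbench run of this sort might look like the following (a minimal sketch; the device, mount point, client count, and runtime are illustrative, not the exact parameters used):

  # compare allocation behavior by running dbench on different guest filesystems
  mkfs.btrfs /dev/vdb              # or mkfs.ext4 /dev/vdb for the ext4 case
  mount /dev/vdb /mnt/test
  dbench -D /mnt/test -t 60 4      # 4 clients for 60s, creating and deleting files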

So I tested it with qemu-img convert from a 400M raw file:
zheq-PC sdb # time ~/qemu-sync-test/bin/qemu-img convert -f raw -t unsafe -O vdi /run/shm/rand 1.vdi

real    0m0.402s
user    0m0.206s
sys     0m0.202s
zheq-PC sdb # time ~/qemu-sync-test/bin/qemu-img convert -f raw -t writeback -O vdi /run/shm/rand 1.vdi
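In other words, the same micro-benchmark can be reproduced as below (assuming the 400M source file is generated fresh in tmpfs; paths follow the transcript above but are otherwise illustrative):

  # 400M of random data in tmpfs so the source side is not the bottleneck
  dd if=/dev/urandom of=/run/shm/rand bs=1M count=400
  for mode in unsafe writeback; do
      time qemu-img convert -f raw -t $mode -O vdi /run/shm/rand 1.vdi
  done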


I assume that the target file /run/shm/rand 1.vdi is not on a physical disk.
Then flushing data will be fast. For real hard disks (not SSDs) the situation is
different: the r/w heads of the hard disk have to move between the data
location and the beginning of the written file where the metadata is written, so
I expect a larger effect there.

For measuring installation time of an OS, I'd take a reproducible installation
source (hard disk or DVD, no network connection) and take the time for
those parts of the installation where many packages are installed without
any user interaction. For Linux you won't need a stopwatch, because the
package directories in /usr/share/doc have nice timestamps.
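For example, something along these lines brackets the package-install phase (assuming the doc directories were created during the install; GNU ls is assumed for --time-style):

  # newest and oldest directory timestamps under /usr/share/doc
  ls -dlt --time-style=full-iso /usr/share/doc/*/ | head -n 1
  ls -dlt --time-style=full-iso /usr/share/doc/*/ | tail -n 1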


Stefan
