Date: Mon, 9 Sep 2013 10:35:22 +0200
From: Kevin Wolf
Message-ID: <20130909083522.GA3110@dhcp-200-207.str.redhat.com>
References: <2013090609312026502832@163.com> <20130906103837.GH2588@dhcp-200-207.str.redhat.com> <201309090957395153945@163.com>
In-Reply-To: <201309090957395153945@163.com>
Subject: Re: [Qemu-devel] savevm too slow
To: xuanmao_001
Cc: mreitz, quintela, qemu-devel, stefanha, qemu-discuss

On 09.09.2013 at 03:57, xuanmao_001 wrote:
> >> the other question: when I change the buffer size #define IO_BUF_SIZE 32768
> >> to #define IO_BUF_SIZE (1 * 1024 * 1024), savevm is much quicker.
>
> > Is this for cache=unsafe as well?
>
> > Juan, any specific reason for using 32k? I think it would be better to
> > have a multiple of the qcow2 cluster size, otherwise we get COW for the
> > empty part of newly allocated clusters. If we can't make it dynamic,
> > using at least fixed 64k to match the qcow2 default would probably
> > improve things a bit.
>
> with cache=writeback. Is there any risk in setting cache=writeback with
> IO_BUF_SIZE 1M?

No. Using a larger buffer size should be safe.
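Just to make the buffer-size idea concrete, a sketch of the kind of change
being discussed; take the exact location of the constant and the final value
as assumptions rather than a tested patch:

    /* Sketch only: keep the QEMUFile buffer a multiple of the qcow2
     * cluster size (64k by default), so a buffered write never stops in
     * the middle of a freshly allocated cluster and forces copy-on-write
     * of the untouched remainder. */
    #define QCOW2_DEFAULT_CLUSTER_SIZE  (64 * 1024)

    /* 32768 is the current value; 1 MiB (a clean multiple of 64k) is the
     * size that was reported to make savevm noticeably faster. */
    #define IO_BUF_SIZE  (16 * QCOW2_DEFAULT_CLUSTER_SIZE)

Whatever value ends up being used, keeping it a multiple of the 64k qcow2
default cluster size is the part that matters for avoiding the COW of
partially written clusters.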
Kevin

> ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
> xuanmao_001
>
> From: Kevin Wolf
> Date: 2013-09-06 18:38
> To: xuanmao_001
> CC: qemu-discuss; qemu-devel; quintela; stefanha; mreitz
> Subject: Re: savevm too slow
> On 06.09.2013 at 03:31, xuanmao_001 wrote:
> > Hi, qemuers:
> >
> > I found that the guest disk file cache mode affects how long savevm takes.
> >
> > The 'writeback' cache is too slow, but 'unsafe' is as fast as can be,
> > less than 10 seconds.
> >
> > Here is an example using virsh:
> > @cache with writeback:
> > # the first snapshot
> > real 0m21.904s
> > user 0m0.006s
> > sys 0m0.008s
> >
> > # the second snapshot
> > real 2m11.624s
> > user 0m0.013s
> > sys 0m0.008s
> >
> > @cache with unsafe:
> > # the first snapshot
> > real 0m0.730s
> > user 0m0.006s
> > sys 0m0.005s
> >
> > # the second snapshot
> > real 0m1.296s
> > user 0m0.002s
> > sys 0m0.008s
>
> I sent patches that should eliminate the difference between the first
> and second snapshot at least.
>
> > So, what is the difference between them when using different cache modes?
>
> cache=unsafe ignores any flush requests. It's possible that there is
> potential for optimisation with cache=writeback, i.e. that it sends flush
> requests that aren't actually necessary. This is something that I haven't
> checked yet.
>
> > the other question: when I change the buffer size #define IO_BUF_SIZE 32768
> > to #define IO_BUF_SIZE (1 * 1024 * 1024), savevm is much quicker.
>
> Is this for cache=unsafe as well?
>
> Juan, any specific reason for using 32k? I think it would be better to
> have a multiple of the qcow2 cluster size, otherwise we get COW for the
> empty part of newly allocated clusters. If we can't make it dynamic,
> using at least fixed 64k to match the qcow2 default would probably
> improve things a bit.
>
> Kevin
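To spell out "ignores any flush requests" above: with cache=unsafe the image
is opened in a no-flush mode, so every flush completes immediately without
touching the disk, while with cache=writeback each flush really has to reach
stable storage; that is where the timing difference in the numbers above
comes from. A minimal self-contained sketch of the idea, using made-up names
and flag values rather than the literal block.c code:

    /* Illustrative stand-in for the "don't flush" open mode that
     * cache=unsafe selects in the block layer. */
    #define OPEN_NO_FLUSH  (1 << 0)

    struct disk { int open_flags; };

    /* cache=writeback: the flush is really issued and waited for.
     * cache=unsafe: it is acknowledged immediately and costs nothing. */
    static int disk_flush(struct disk *d, int (*real_flush)(struct disk *))
    {
        if (d->open_flags & OPEN_NO_FLUSH) {
            return 0;           /* report success without flushing anything */
        }
        return real_flush(d);   /* really hit the disk and wait for it */
    }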