Date: Mon, 9 Sep 2013 11:16:10 +0200
From: Kevin Wolf
To: xuanmao_001
Cc: mreitz, quintela, qemu-devel, stefanha, qemu-discuss
Subject: Re: [Qemu-devel] savevm too slow
Message-ID: <20130909091610.GC3110@dhcp-200-207.str.redhat.com>
In-Reply-To: <2013090916472909397914@163.com>

On 09.09.2013 10:47, xuanmao_001 wrote:
> > I sent patches that should eliminate the difference between the first
> > and second snapshot at least.
>
> Where can I find the patches that eliminate the difference between the
> first and second snapshot? Do they fit qemu-kvm-1.0.1?

I sent them to you on Friday; the first email has the following subject
line:

[PATCH 0/2] qcow2: Discard VM state in active L1 after creating snapshot

This patch series is for current git master, and chances are that it
would work for qemu 1.6 as well. It will most likely not apply to qemu
1.0, which is almost two years old.
Kevin

> ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
> xuanmao_001
>
> From: Kevin Wolf
> Date: 2013-09-09 16:35
> To: xuanmao_001
> CC: qemu-discuss; qemu-devel; quintela; stefanha; mreitz
> Subject: Re: Re: savevm too slow
>
> On 09.09.2013 03:57, xuanmao_001 wrote:
> > >> the other question: when I change the buffer size
> > >> #define IO_BUF_SIZE 32768 to #define IO_BUF_SIZE (1 * 1024 * 1024),
> > >> savevm is much quicker.
> >
> > > Is this for cache=unsafe as well?
> >
> > > Juan, any specific reason for using 32k? I think it would be better to
> > > have a multiple of the qcow2 cluster size, otherwise we get COW for the
> > > empty part of newly allocated clusters. If we can't make it dynamic,
> > > using at least fixed 64k to match the qcow2 default would probably
> > > improve things a bit.
> >
> > with cache=writeback. Is there any risk in setting cache=writeback with
> > IO_BUF_SIZE 1M?
>
> No. Using a larger buffer size should be safe.
>
> Kevin
>
> > ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
> > xuanmao_001
> >
> > From: Kevin Wolf
> > Date: 2013-09-06 18:38
> > To: xuanmao_001
> > CC: qemu-discuss; qemu-devel; quintela; stefanha; mreitz
> > Subject: Re: savevm too slow
> >
> > On 06.09.2013 03:31, xuanmao_001 wrote:
> > > Hi, qemuers:
> > >
> > > I found that the guest disk file's cache mode affects how long savevm
> > > takes.
> > >
> > > With cache 'writeback' it is too slow, but with cache 'unsafe' it is
> > > as fast as can be, less than 10 seconds.
> > >
> > > Here is an example using virsh:
> > > @cache with writeback:
> > > # the first snapshot
> > > real 0m21.904s
> > > user 0m0.006s
> > > sys 0m0.008s
> > >
> > > # the second snapshot
> > > real 2m11.624s
> > > user 0m0.013s
> > > sys 0m0.008s
> > >
> > > @cache with unsafe:
> > > # the first snapshot
> > > real 0m0.730s
> > > user 0m0.006s
> > > sys 0m0.005s
> > >
> > > # the second snapshot
> > > real 0m1.296s
> > > user 0m0.002s
> > > sys 0m0.008s
> >
> > I sent patches that should eliminate the difference between the first
> > and second snapshot at least.
> >
> > > So, what is the difference between them when using different cache
> > > modes?
> >
> > cache=unsafe ignores any flush requests.
> > It's possible that there is potential for optimisation with
> > cache=writeback, i.e. it sends flush requests that aren't actually
> > necessary. This is something that I haven't checked yet.
> >
> > > the other question: when I change the buffer size
> > > #define IO_BUF_SIZE 32768 to #define IO_BUF_SIZE (1 * 1024 * 1024),
> > > savevm is much quicker.
> >
> > Is this for cache=unsafe as well?
> >
> > Juan, any specific reason for using 32k? I think it would be better to
> > have a multiple of the qcow2 cluster size, otherwise we get COW for the
> > empty part of newly allocated clusters. If we can't make it dynamic,
> > using at least fixed 64k to match the qcow2 default would probably
> > improve things a bit.
> >
> > Kevin