From: xuanmao_001
Date: Mon, 9 Sep 2013 16:47:29 +0800
To: Kevin Wolf
Cc: mreitz, quintela, qemu-devel, stefanha, qemu-discuss
Subject: Re: [Qemu-devel] savevm too slow
Message-ID: <2013090916472909397914@163.com>
References: <2013090609312026502832@163.com> <20130906103837.GH2588@dhcp-200-207.str.redhat.com> <201309090957395153945@163.com> <20130909083522.GA3110@dhcp-200-207.str.redhat.com>
> I sent patches that should eliminate the difference between the first
> and second snapshot at least.

Where can I find the patches that eliminate the difference between the first
and second snapshot? Do they fit qemu-kvm-1.0,1?
 

xuanmao_001
 
From: Kevin Wolf
Date: 2013-09-09 16:35
To: xuanmao_001
CC: qemu-discuss; qemu-devel; quintela; stefanha; mreitz
Subject: Re: Re: savevm too slow
On 09.09.2013 at 03:57, xuanmao_001 wrote:
> >> the other question: when I change the buffer size #define IO_BUF_SIZE 32768
> >> to #define IO_BUF_SIZE (1 * 1024 * 1024), savevm is much quicker.
>  
> > Is this for cache=unsafe as well?
>  
> > Juan, any specific reason for using 32k? I think it would be better to
> > have a multiple of the qcow2 cluster size, otherwise we get COW for the
> > empty part of newly allocated clusters. If we can't make it dynamic,
> > using at least a fixed 64k to match the qcow2 default would probably
> > improve things a bit.
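The alignment point quoted above can be sketched with a little arithmetic. This is a toy model, not QEMU code: `cow_fills` is a hypothetical helper, 64 KiB is the qcow2 default cluster size, and the 16 MiB total is illustrative. It counts newly allocated clusters whose first write does not cover the whole cluster, so the allocation path has to fill (copy-on-write) the untouched remainder:

```python
CLUSTER = 64 * 1024  # qcow2 default cluster size

def cow_fills(buf_size, total, cluster=CLUSTER):
    """Count newly allocated clusters whose first write is partial,
    forcing a COW fill of the empty remainder of the cluster."""
    cows = 0
    seen = set()
    for off in range(0, total, buf_size):
        end = min(off + buf_size, total)
        c = off // cluster
        if c not in seen:
            seen.add(c)
            # full coverage only if the write starts on the cluster
            # boundary and reaches (or passes) the cluster's end
            if off % cluster or end < (c + 1) * cluster:
                cows += 1
    return cows

total = 16 * 1024 * 1024  # 16 MiB of snapshot data
print(cow_fills(32 * 1024, total))    # → 256: every cluster's first 32k write is partial
print(cow_fills(64 * 1024, total))    # → 0: writes line up with cluster boundaries
```

With a 32k buffer every 64k cluster gets a partial first write and a COW fill; a 64k (or any cluster-multiple) buffer avoids all of them.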
>  
> with cache=writeback. Is there any risk in setting cache=writeback with
> IO_BUF_SIZE 1M?
 
No. Using a larger buffer size should be safe.
 
Kevin
 
> ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
> xuanmao_001
>  
> From: Kevin Wolf
> Date: 2013-09-06 18:38
> To: xuanmao_001
> CC: qemu-discuss; qemu-devel; quintela; stefanha; mreitz
> Subject: Re: savevm too slow
> On 06.09.2013 at 03:31, xuanmao_001 wrote:
> > Hi, qemuers:
> >  
> > I found that the guest disk file cache mode affects the time of savevm.
> >  
> > the cache 'writeback' is too slow, but the cache 'unsafe' is as fast as it can be,
> > less than 10 seconds.
> >  
> > here is an example using virsh:
> > @cache with writeback:
> > #the first snapshot
> > real    0m21.904s
> > user    0m0.006s
> > sys     0m0.008s
> >  
> > #the second snapshot
> > real    2m11.624s
> > user    0m0.013s
> > sys     0m0.008s
> >  
> > @cache with unsafe:
> > #the first snapshot
> > real    0m0.730s
> > user    0m0.006s
> > sys     0m0.005s
> >  
> > #the second snapshot
> > real    0m1.296s
> > user    0m0.002s
> > sys     0m0.008s
>  
> I sent patches that should eliminate the difference between the first
> and second snapshot at least.
>  
> > so, what is the difference between them when using different cache modes?
>  
> cache=unsafe ignores any flush requests. It's possible that there is
> potential for optimisation with cache=writeback, i.e. it sends flush
> requests that aren't in fact necessary. This is something that I haven't
> checked yet.
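A toy model of the flush behaviour described above (hypothetical names and a made-up per-flush cost, nothing taken from the QEMU source): under writeback every flush request pays the price of a real fsync, while cache=unsafe simply ignores them, which is why savevm's per-chunk flushes cost time in one mode and nothing in the other.

```python
class ToyBlockDev:
    """Toy model of two cache modes: 'writeback' pays a fixed cost per
    flush, 'unsafe' ignores flush requests entirely."""
    FLUSH_COST_MS = 20  # assumed cost of one fsync, purely illustrative

    def __init__(self, cache):
        self.cache = cache
        self.flush_time_ms = 0

    def write(self, data):
        pass  # both modes buffer writes in the host page cache

    def flush(self):
        if self.cache == "unsafe":
            return  # flush requests are silently dropped
        self.flush_time_ms += self.FLUSH_COST_MS

wb, un = ToyBlockDev("writeback"), ToyBlockDev("unsafe")
for dev in (wb, un):
    for _ in range(100):   # 100 buffer-sized chunks of VM state
        dev.write(b"...")
        dev.flush()        # a possibly unnecessary flush per chunk
print(wb.flush_time_ms, un.flush_time_ms)  # → 2000 0
```

The model also shows why unsafe is unsafe: dropped flushes mean nothing is guaranteed to be on disk if the host crashes.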
>  
> > the other question: when I change the buffer size #define IO_BUF_SIZE 32768
> > to #define IO_BUF_SIZE (1 * 1024 * 1024), savevm is much quicker.
>  
> Is this for cache=unsafe as well?
>  
> Juan, any specific reason for using 32k? I think it would be better to
> have a multiple of the qcow2 cluster size, otherwise we get COW for the
> empty part of newly allocated clusters. If we can't make it dynamic,
> using at least a fixed 64k to match the qcow2 default would probably
> improve things a bit.
>  
> Kevin