Date: Mon, 3 Aug 2015 10:38:49 +0100
From: "Daniel P. Berrange"
Message-ID: <20150803093849.GH22485@redhat.com>
In-Reply-To: <87r3nklo5z.fsf@linaro.org>
Subject: Re: [Qemu-devel] Modularizing QEMU RFC
To: Alex Bennée
Cc: Marc Marí, Fam Zheng, qemu-devel

On Mon, Aug 03, 2015 at 10:24:56AM +0100, Alex Bennée wrote:
>
> Marc Marí writes:
>
> > On Mon, 3 Aug 2015 16:22:34 +0800
> > Fam Zheng wrote:
> >
> >> On Mon, 08/03 09:52, Marc Marí wrote:
> >> > So any other ideas to reduce the library overhead are appreciated.
> >>
> >> It would be interesting to see your profiling of the library loading
> >> overhead. For example, how much does it help to reduce the library
> >> size, and how much does it help to reduce the # of libraries?
> >
> > Some profiling:
> >
> > A QEMU with this configuration:
> > ./configure --enable-sparse --enable-sdl --enable-gtk --enable-vte \
> >   --enable-curses --enable-vnc --enable-vnc-{jpeg,tls,sasl,png,ws} \
> >   --enable-virtfs --enable-brlapi --enable-curl --enable-fdt \
> >   --enable-bluez --enable-kvm --enable-rdma --enable-uuid --enable-vde \
> >   --enable-linux-aio --enable-cap-ng --enable-attr --enable-vhost-net \
> >   --enable-vhost-scsi --enable-spice --enable-rbd --enable-libiscsi \
> >   --enable-smartcard-nss --enable-guest-agent --enable-libusb \
> >   --enable-usb-redir --enable-lzo --enable-snappy --enable-bzip2 \
> >   --enable-seccomp --enable-coroutine-pool --enable-glusterfs \
> >   --enable-tpm --enable-libssh2 --enable-vhdx --enable-quorum \
> >   --enable-numa --enable-tcmalloc --target-list=x86_64-softmmu
> >
> > has dependencies on 142 libraries. It takes 60 ms between exec and the
> > jump to the main function, and 80 ms between exec and the first
> > kvm_entry.
> >
> > A QEMU with the same configuration and --enable-modules has
> > dependencies on 125 libraries. It takes 20 ms between exec and the
> > jump to the main function, and 100 ms between exec and the first
> > kvm_entry.
> >
> > The libraries that are not loaded are: libiscsi, libcurl, librbd,
> > librados, libgfapi, libglusterfs, libgfrpc, libgfxdr, libssh2,
> > libcrypt, libidn, libgssapi, liblber, libldap, libboost_thread,
> > libboost_system and libatomic_ops.
> >
> > As I already explained, the current implementation of modules always
> > loads the modules at startup. That's why the QEMU setup takes longer,
> > even though it uses G_MODULE_BIND_LAZY. And that's why I was proposing
> > hotplugging.
> >
> > I don't know if loading one big library is more efficient than a lot
> > of small ones, but it would make sense.
>
> What's the actual use-case here where start-up latency is so important?
> If it is an ephemeral cloudy thing then you might just have a base QEMU
> with VIRT drivers and one big .so called "the-rest.so"?
>
> I don't wish to disparage the idea, but certainly in the emulation world
> a difference of 100 ms or so is neither here nor there.

If you are running a full OS install w/ TCG that 100 ms may not be
relevant, but if you are using QEMU w/ KVM as the basis of a more secure
environment for application containers it can be important. e.g. if it
takes 2 secs from the point of exec'ing QEMU to running your app, then
100 ms is 5% of the total time, which is well worth optimizing. This is
the kind of scenario seen by libguestfs and libvirt-sandbox.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org  -o-       http://live.gnome.org/gtk-vnc :|