From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 8 Jan 2014 21:53:02 +0530
From: Amit Shah
Message-ID: <20140108162302.GA14396@grmbl.mre>
References: <1386598213-8156-1-git-send-email-akong@redhat.com>
 <1386598213-8156-2-git-send-email-akong@redhat.com>
 <87txe7zzop.fsf@blackfin.pond.sub.org>
 <87d2ku325h.fsf@redhat.com>
 <20140108091441.GA19507@amosk.info>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20140108091441.GA19507@amosk.info>
Subject: Re: [Qemu-devel] [PATCH RFC 1/2] rng-egd: improve egd backend performance
To: Amos Kong
Cc: Giuseppe Scrivano, varadgautam@gmail.com, Markus Armbruster,
 anthony@codemonkey.ws, qemu-devel@nongnu.org

On (Wed) 08 Jan 2014 [17:14:41], Amos Kong wrote:
> On Wed, Dec 18, 2013 at 11:05:14AM +0100, Giuseppe Scrivano wrote:
> > Markus Armbruster writes:
> >
> > > Amos Kong writes:
> > >
> > >> Bugzilla: https://bugs.launchpad.net/qemu/+bug/1253563
> > >>
> > >> We have a request queue to cache the random data, but the second
> > >> request only comes in when the first one is returned, so the queue
> > >> only ever holds one item.  That hurts performance.
> > >>
> > >> This patch changes the IOThread to fill a fixed buffer with
> > >> random data from the egd socket; request_entropy() returns
> > >> data to the virtio queue whenever the buffer has available data.
> > >>
> > >> (tested with a fast source disguised as an egd socket)
> > >> # cat /dev/urandom | nc -l localhost 8003
> > >> # qemu .. -chardev socket,host=localhost,port=8003,id=chr0 \
> > >>          -object rng-egd,chardev=chr0,id=rng0,buf_size=1024 \
> > >>          -device virtio-rng-pci,rng=rng0
> > >>
> > >>  bytes     kB/s
> > >>  ------    ----
> > >>  131072 ->  835
> > >>   65536 ->  652
> > >>   32768 ->  356
> > >>   16384 ->  182
> > >>    8192 ->   99
> > >>    4096 ->   52
> > >>    2048 ->   30
> > >>    1024 ->   15
> > >>     512 ->    8
> > >>     256 ->    4
> > >>     128 ->    3
> > >>      64 ->    2
> > >
> > > I'm not familiar with the rng-egd code, but perhaps my question has
> > > value anyway: could aggressive read-ahead on a source of randomness
> > > cause trouble by depleting the source?
> > >
> > > Consider a server restarting a few dozen guests after reboot, where
> > > each guest's QEMU then tries to slurp in a couple of KiB of
> > > randomness.  How does this behave?
>
> Hi Giuseppe,
>
> > I hit this performance problem while I was working on RNG device
> > support in virt-manager, and I also noticed that the bottleneck is
> > the egd backend, which responds slowly to requests.
>
> o Current situation:
>   The rng-random backend reads data from non-blocking character
>   devices.  A new entropy request is sent from the guest only when the
>   last request has been processed, so the request queue only ever
>   caches one request.  Almost all requests are 64 bytes in size, and
>   the egd socket responds to them slowly.
>
> o Solution 1: pre-reading; perf is improved, but costs a lot of memory
>   In my v1 patch I added a configurable buffer to pre-read data from
>   the egd socket.  Performance improved, but it used a large amount of
>   memory for the buffer.

I really dislike buffering random numbers or entropy from the host;
let's rule these options out.
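(For concreteness: the pre-read buffer in solution 1 amounts to a
producer/consumer ring buffer that the IOThread keeps topped up from the
egd chardev while request_entropy() drains it.  A minimal sketch in
plain C follows; the names rng_buf_fill and rng_buf_take are
illustrative only, not QEMU's actual API.)

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define RNG_BUF_SIZE 1024   /* corresponds to the buf_size=1024 option */

typedef struct {
    unsigned char data[RNG_BUF_SIZE];
    size_t head, len;       /* ring buffer: read position and fill level */
} RngBuf;

/* Producer side: called when the egd socket has bytes ready; copies as
 * much as fits and reports how much of src was consumed. */
static size_t rng_buf_fill(RngBuf *b, const unsigned char *src, size_t n)
{
    size_t copied = 0;
    while (copied < n && b->len < RNG_BUF_SIZE) {
        size_t tail = (b->head + b->len) % RNG_BUF_SIZE;
        b->data[tail] = src[copied++];
        b->len++;
    }
    return copied;
}

/* Consumer side: called from request_entropy(); hands out up to n
 * buffered bytes immediately instead of waiting on the egd socket. */
static size_t rng_buf_take(RngBuf *b, unsigned char *dst, size_t n)
{
    size_t copied = 0;
    while (copied < n && b->len > 0) {
        dst[copied++] = b->data[b->head];
        b->head = (b->head + 1) % RNG_BUF_SIZE;
        b->len--;
    }
    return copied;
}
```

(The memory cost Amit objects to is exactly RNG_BUF_SIZE bytes of
buffered host entropy per device, sitting full most of the time.)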
> o Solution 2: pre-send requests to the egd socket; improvement is trivial
>   In another test I only pre-sent entropy requests to the egd socket,
>   without actually reading the data into a buffer.
>
> o Solution 3: blind poll; not good
>   Always return a positive value from rng_egd_chr_can_read(); perf can
>   be improved to 120 kB/s, since it reduces the delay caused by the
>   poll mechanism.
>
> o Solution 4:
>   Use a new message type to improve the response speed of the egd
>   socket.
>
> o Solution 5:
>   Non-blocking read?

I'd just say let the "problem" be.

I don't really get the point of egd.  The egd backend was something
Anthony wanted, but I can't remember if there has been enough
justification for it.  Certainly the protocol isn't documented, and not
using the backend has no drawbacks.

Moreover, reasonable guests won't request a whole lot of random numbers
in a short interval, so the theoretical performance problem we're seeing
will remain theoretical for well-behaved guests.

We have enough documentation by now about this issue; I say let's just
drop this patch and worry about it only if there's a proven need to
improve things here.

		Amit
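(Editor's sketch, since the thread notes the protocol isn't formally
documented: the de-facto EGD wire protocol, as implemented by egd.pl and
compatible daemons, frames a blocking entropy read as two bytes — the
command 0x02 followed by a one-byte count, which caps each request at
255 bytes.  The helper name below is hypothetical, not QEMU code.)

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* De-facto EGD commands (from egd.pl; no formal specification exists) */
enum {
    EGD_CMD_QUERY_POOL    = 0x00,  /* ask how much entropy is available */
    EGD_CMD_READ_NONBLOCK = 0x01,  /* read entropy, may return fewer bytes */
    EGD_CMD_READ_BLOCK    = 0x02,  /* read entropy, block until satisfied */
};

/* Build one blocking-read request into out[2].  The length field is a
 * single byte, so larger wants must be chunked into <= 255-byte
 * requests; returns how many bytes this request will yield. */
static size_t egd_build_read_request(uint8_t out[2], size_t want)
{
    size_t chunk = want > 255 ? 255 : want;
    out[0] = EGD_CMD_READ_BLOCK;
    out[1] = (uint8_t)chunk;
    return chunk;
}
```

(That 255-byte cap is one reason per-request latency dominates: a guest
asking for a few KiB needs a round trip per chunk unless requests are
pipelined, which is what solution 2 above experimented with.)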