qemu-devel.nongnu.org archive mirror
From: Amos Kong <akong@redhat.com>
To: Amit Shah <amit.shah@redhat.com>
Cc: Giuseppe Scrivano <gscrivan@redhat.com>,
	varadgautam@gmail.com, Markus Armbruster <armbru@redhat.com>,
	anthony@codemonkey.ws, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH RFC 1/2] rng-egd: improve egd backend performance
Date: Fri, 10 Jan 2014 10:30:07 +0800
Message-ID: <20140110023007.GA3385@amosk.info>
In-Reply-To: <20140108162302.GA14396@grmbl.mre>

On Wed, Jan 08, 2014 at 09:53:02PM +0530, Amit Shah wrote:
> On (Wed) 08 Jan 2014 [17:14:41], Amos Kong wrote:
> > On Wed, Dec 18, 2013 at 11:05:14AM +0100, Giuseppe Scrivano wrote:
> > > Markus Armbruster <armbru@redhat.com> writes:
> > > 
> > > > Amos Kong <akong@redhat.com> writes:
> > > >
> > > >> Bugzilla: https://bugs.launchpad.net/qemu/+bug/1253563
> > > >>
> > > >> We have a request queue to cache the random data, but the second
> > > >> request only comes in when the first one is returned, so we always
> > > >> have only one item in the queue. This hurts performance.
> > > >>
> > > >> This patch changes the IOThread to fill a fixed buffer with
> > > >> random data from the egd socket; request_entropy() returns data
> > > >> to the virtio queue whenever the buffer has data available.
> > > >>
> > > >> (test with a fast source, disguised egd socket)
> > > >>  # cat /dev/urandom | nc -l localhost 8003
> > > >>  # qemu .. -chardev socket,host=localhost,port=8003,id=chr0 \
> > > >>         -object rng-egd,chardev=chr0,id=rng0,buf_size=1024 \
> > > >>         -device virtio-rng-pci,rng=rng0
> > > >>
> > > >>   bytes     kb/s
> > > >>   ------    ----
> > > >>   131072 ->  835
> > > >>    65536 ->  652
> > > >>    32768 ->  356
> > > >>    16384 ->  182
> > > >>     8192 ->   99
> > > >>     4096 ->   52
> > > >>     2048 ->   30
> > > >>     1024 ->   15
> > > >>      512 ->    8
> > > >>      256 ->    4
> > > >>      128 ->    3
> > > >>       64 ->    2
> > > >
> > > > I'm not familiar with the rng-egd code, but perhaps my question has
> > > > value anyway: could aggressive reading ahead on a source of randomness
> > > > cause trouble by depleting the source?
> > > >
> > > > Consider a server restarting a few dozen guests after reboot, where each
> > > > guest's QEMU then tries to slurp in a couple of KiB of randomness.  How
> > > > does this behave?
> > 
> > Hi Giuseppe,
> >  
> > > I hit this performance problem while I was working on RNG device
> > > support in virt-manager, and I also noticed that the bottleneck is the
> > > egd backend, which responds slowly to requests.
> > 
> > o Current situation:
> >   The rng-random backend reads data from non-blocking character devices.
> >   A new entropy request is only sent from the guest when the last request
> >   has been processed, so the request queue can only cache one request.
> >   Almost all requests are 64 bytes in size.
> >   The egd socket responds to requests slowly.
> > 
> > o Solution 1: pre-reading; perf is improved, but it costs much memory
> >   In my V1 patch, I tried to add a configurable buffer to pre-read data
> >   from the egd socket. The performance improved, but it used a large
> >   amount of memory for the buffer.
> 
> I really dislike buffering random numbers or entropy from the host,
> let's rule these options out.

Agree.
The main reason is the slow source; using buffers and pre-reading just
abuses resources and the sync read API in QEMU.
 
> > o Solution 2: pre-sending requests to the egd socket; the improvement is trivial
> >   In another test, we just pre-sent entropy requests to the egd socket
> >   without actually reading the data into a buffer.
> > 
> > o Solution 3: blind polling; not good
> >   Always return a nonzero value from rng_egd_chr_can_read(); perf can be
> >   improved to 120 kB/s, since this reduces the delay caused by the poll
> >   mechanism.
> > 
> > o Solution 4:
> >   Try using a new message type to improve the response speed of the egd socket.
> > 
> > o Solution 5:
> >   non-blocking read?
> 
> I'd just say let the "problem" be.  I don't really get the point of
> egd.  The egd backend was something Anthony wanted, but I can't
> remember if there has been enough justification for it.  Certainly the
> protocol isn't documented, and not using the backend doesn't cost us
> anything.
 
http://miketeo.net/wp/index.php/2009/06/09/egd-entropy-gathering-daemon-client-protocol.html

> Moreover, reasonable guests won't request a whole lot of random
> numbers in a short interval, so the theoretical performance problem
> we're seeing is just going to remain theoretical for well-behaved
> guests.
> 
> We have enough documentation by now about this issue, so I say let's
> just drop this patch and worry about it only if there's a proven need
> to improve things here.

We always recommend users to use the rng-random backend; the rng-egd
backend is only needed when the host uses a USB entropy device.

Let's see whether any problems show up when the rng backend speed is
around 5 kB/s.
 
> 		Amit

-- 
			Amos.


Thread overview: 16+ messages
2013-12-09 14:10 [Qemu-devel] [PATCH RFC 0/2] improve rng-egd perf Amos Kong
2013-12-09 14:10 ` [Qemu-devel] [PATCH RFC 1/2] rng-egd: improve egd backend performance Amos Kong
2013-12-16 16:36   ` Amit Shah
2013-12-16 23:19     ` Anthony Liguori
2013-12-17  5:52       ` Amit Shah
2013-12-17  7:03         ` Amos Kong
2013-12-17  7:47   ` Markus Armbruster
2013-12-17 10:32     ` Amit Shah
2013-12-18 10:05     ` Giuseppe Scrivano
2013-12-24  9:58       ` Varad Gautam
2014-01-08  9:14       ` Amos Kong
2014-01-08 16:23         ` Amit Shah
2014-01-10  2:30           ` Amos Kong [this message]
2013-12-09 14:10 ` [Qemu-devel] [PATCH RFC 2/2] rng-egd: introduce a parameter to set buffer size Amos Kong
2013-12-10 16:58   ` Eric Blake
2013-12-12  2:55     ` Amos Kong
