qemu-devel.nongnu.org archive mirror
From: Amos Kong <akong@redhat.com>
To: Giuseppe Scrivano <gscrivan@redhat.com>
Cc: amit.shah@redhat.com, varadgautam@gmail.com,
	Markus Armbruster <armbru@redhat.com>,
	anthony@codemonkey.ws, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH RFC 1/2] rng-egd: improve egd backend performance
Date: Wed, 8 Jan 2014 17:14:41 +0800	[thread overview]
Message-ID: <20140108091441.GA19507@amosk.info> (raw)
In-Reply-To: <87d2ku325h.fsf@redhat.com>

On Wed, Dec 18, 2013 at 11:05:14AM +0100, Giuseppe Scrivano wrote:
> Markus Armbruster <armbru@redhat.com> writes:
> 
> > Amos Kong <akong@redhat.com> writes:
> >
> >> Bugzilla: https://bugs.launchpad.net/qemu/+bug/1253563
> >>
> >> We have a request queue to cache the random data, but the second
> >> request only comes in once the first one has been returned, so the
> >> queue always holds just one item. This hurts performance.
> >>
> >> This patch changes the IOThread to fill a fixed buffer with
> >> random data from the egd socket; request_entropy() returns the
> >> data to the virtio queue whenever the buffer has data available.
> >>
> >> (test with a fast source, disguised egd socket)
> >>  # cat /dev/urandom | nc -l localhost 8003
> >>  # qemu .. -chardev socket,host=localhost,port=8003,id=chr0 \
> >>         -object rng-egd,chardev=chr0,id=rng0,buf_size=1024 \
> >>         -device virtio-rng-pci,rng=rng0
> >>
> >>   bytes     kb/s
> >>   ------    ----
> >>   131072 ->  835
> >>    65536 ->  652
> >>    32768 ->  356
> >>    16384 ->  182
> >>     8192 ->   99
> >>     4096 ->   52
> >>     2048 ->   30
> >>     1024 ->   15
> >>      512 ->    8
> >>      256 ->    4
> >>      128 ->    3
> >>       64 ->    2
> >
> > I'm not familiar with the rng-egd code, but perhaps my question has
> > value anyway: could agressive reading ahead on a source of randomness
> > cause trouble by depleting the source?
> >
> > Consider a server restarting a few dozen guests after reboot, where each
> > guest's QEMU then tries to slurp in a couple of KiB of randomness.  How
> > does this behave?

Hi Giuseppe,
 
> I hit this performance problem while I was working on RNG device
> support in virt-manager, and I also noticed that the bottleneck is the
> egd backend, which responds to requests slowly.

o Current situation:
  The rng-random backend reads data from non-blocking character devices.
  A new entropy request is only sent from the guest once the last request
  has been processed, so the request queue caches just one request at a
  time. Almost every request is 64 bytes, and the egd socket responds to
  requests slowly.

o Solution 1: pre-reading; performance improves, but memory cost is high
  In my V1 patch, I added a configurable buffer to pre-read data from the
  egd socket. Performance improved, but the buffer consumed a lot of
  memory.

o Solution 2: pre-sending requests to the egd socket; improvement is trivial
  In another test, we only pre-sent entropy requests to the egd socket
  without actually reading the data into a buffer.

o Solution 3: blind polling; not good
  Always returning a non-zero value from rng_egd_chr_can_read() improves
  performance to about 120 kB/s by reducing the delay caused by the poll
  mechanism.

o Solution 4:
  Use the new message type to improve the response speed of the egd
  socket.

o Solution 5:
  Non-blocking read?

> I thought as well about
> adding a buffer but to handle it trough a new message type in the EGD
> protocol.  The new message type informs the EGD daemon of the buffer
> size and that the buffer data has a lower priority that the daemon

Lower priority or higher priority? We need the daemon to respond to our
requests quickly.

> should fill when there are no other queued requests.  Could such
> approach solve the scenario you've described?

I will try. Do you know the name of the new message type? Can you show
me an example?

QEMU code:
  uint8_t header[2];
  header[0] = 0x02;  /* 0x01: returns len + data, 0x02: only returns data */
  header[1] = len;   /* number of entropy bytes requested */
  qemu_chr_fe_write(s->chr, header, sizeof(header));
 
> Cheers,
> Giuseppe

-- 
			Amos.

Thread overview: 16+ messages
2013-12-09 14:10 [Qemu-devel] [PATCH RFC 0/2] improve rng-egd perf Amos Kong
2013-12-09 14:10 ` [Qemu-devel] [PATCH RFC 1/2] rng-egd: improve egd backend performance Amos Kong
2013-12-16 16:36   ` Amit Shah
2013-12-16 23:19     ` Anthony Liguori
2013-12-17  5:52       ` Amit Shah
2013-12-17  7:03         ` Amos Kong
2013-12-17  7:47   ` Markus Armbruster
2013-12-17 10:32     ` Amit Shah
2013-12-18 10:05     ` Giuseppe Scrivano
2013-12-24  9:58       ` Varad Gautam
2014-01-08  9:14       ` Amos Kong [this message]
2014-01-08 16:23         ` Amit Shah
2014-01-10  2:30           ` Amos Kong
2013-12-09 14:10 ` [Qemu-devel] [PATCH RFC 2/2] rng-egd: introduce a parameter to set buffer size Amos Kong
2013-12-10 16:58   ` Eric Blake
2013-12-12  2:55     ` Amos Kong
