Date: Mon, 13 Jul 2015 10:55:55 +0300
From: "Michael S. Tsirkin"
Message-ID: <20150713075555.GA12377@redhat.com>
References: <1436520840-28742-1-git-send-email-pagupta@redhat.com> <20150713060938.GA17241@grmbl.mre> <2021183425.31524217.1436770416516.JavaMail.zimbra@redhat.com> <20150713073445.GB17241@grmbl.mre>
In-Reply-To: <20150713073445.GB17241@grmbl.mre>
Subject: Re: [Qemu-devel] [PATCH] virtio-rng: Bump up quota value only when guest requests entropy
To: Amit Shah
Cc: Pankaj Gupta, qemu-devel@nongnu.org

On Mon, Jul 13, 2015 at 01:04:45PM +0530, Amit Shah wrote:
> On (Mon) 13 Jul 2015 [02:53:36], Pankaj Gupta wrote:
> > Hi Amit,
> >
> > Thanks for the review.
> >
> > > On (Fri) 10 Jul 2015 [15:04:00], Pankaj Gupta wrote:
> > > >    A timer was added in virtio-rng to rate-limit the
> > > > entropy. It used to trigger at regular intervals to
> > > > bump up the quota value.
> > > > The value of the quota and timer
> > > > slice is decided based on the entropy source rate in the host.
> > >
> > > It doesn't necessarily depend on the source rate in the host - all we
> > > want the quota+timer to do is to limit the amount of data a guest can
> > > take from the host - to ensure one (potentially rogue) guest does not
> > > use up all the entropy from the host.
> >
> > Sorry for not being clear on this. By rate limit I meant the same;
> > I used a broader term.
>
> My comment was on 'the value of quota and timer slice is decided based
> on entropy source rate in host' - admins will usually not decide based
> on what sources the host has - they will typically decide how much a
> guest is supposed to consume.
>
> > > > This resulted in the timer triggering even when the quota
> > > > was not exhausted at all, resulting in extra processing.
> > > >
> > > > This patch triggers the timer only when the guest requests
> > > > entropy. As soon as the first request for entropy comes in
> > > > from the guest, we set the timer. The timer bumps up the
> > > > quota value when it gets triggered.
> > >
> > > Can you say how you tested this?
> > >
> > > Mainly interested in seeing the results in these cases:
> > >
> > > * No quota/timer specified on command line
> >
> >     Tested this scenario. I am setting the timer when the first request
> >     comes, so the timer gets fired after (1 << 16) ms.
>
> But in case a quota or a timer isn't specified, the timer shouldn't be
> armed in the first place.
>
> > > * Quota+timer specified on command line, and guest keeps asking host
> > >   for unlimited entropy, e.g. by doing 'dd if=/dev/hwrng of=/dev/null'
> > >   in the guest.
> >
> >     I did not do 'dd if=/dev/hwrng of=/dev/null'.
> >     Did 'cat /dev/hwrng' and 'cat /dev/random'.
>
> OK - that's similar.
> I like dd because when dd is stopped, it reports
> the rate at which data was received, so it helps in seeing whether we're
> getting a higher rate than what was specified on the cmdline.
>
> > > * Ensure quota restrictions are maintained, and we're not giving more
> > >   data than configured.
> >
> >     Ensured. We give <= the requested data.
> >
> > > For these tests, it's helpful to use the host's /dev/urandom as the
> > > source, since that can give data faster to the guest than the default
> > > /dev/random.  (Otherwise, if the host itself blocks on /dev/random,
> > > the guest may not get entropy due to that reason vs. it not getting
> > > entropy due to rate-limiting.)
> >
> >     Agree.
> >     Will test this as well.
> >
> > > I tested one scenario using the trace events.  With some quota and a
> > > timer value specified on the cmdline, before the patch I get tons of
> > > trace events before the guest is even up.  After applying the patch, I
> > > don't get any trace events.  So that's progress!
> >
> > Thanks.
> > > I have one question:
> > >
> > > > Signed-off-by: Pankaj Gupta
> > > > ---
> > > >  hw/virtio/virtio-rng.c         | 15 ++++++++-------
> > > >  include/hw/virtio/virtio-rng.h |  1 +
> > > >  2 files changed, 9 insertions(+), 7 deletions(-)
> > > >
> > > > diff --git a/hw/virtio/virtio-rng.c b/hw/virtio/virtio-rng.c
> > > > index 22b1d87..8774a0c 100644
> > > > --- a/hw/virtio/virtio-rng.c
> > > > +++ b/hw/virtio/virtio-rng.c
> > > > @@ -78,6 +78,12 @@ static void virtio_rng_process(VirtIORNG *vrng)
> > > >          return;
> > > >      }
> > > >
> > > > +    if (vrng->activate_timer) {
> > > > +        timer_mod(vrng->rate_limit_timer,
> > > > +                  qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + vrng->conf.period_ms);
> > > > +        vrng->activate_timer = false;
> > > > +    }
> > > > +
> > > >      if (vrng->quota_remaining < 0) {
> > > >          quota = 0;
> > > >      } else {
> > > > @@ -139,8 +145,7 @@ static void check_rate_limit(void *opaque)
> > > >
> > > >      vrng->quota_remaining = vrng->conf.max_bytes;
> > > >      virtio_rng_process(vrng);
> > > > -    timer_mod(vrng->rate_limit_timer,
> > > > -              qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + vrng->conf.period_ms);
> > > > +    vrng->activate_timer = true;
> > > >  }
> > >
> > > We're processing an older request first, and then firing the timer.
> > > What's the use of doing it this way?  Why even do this?
> >
> > I also had this query. If we don't call this after resetting
> > 'vrng->quota_remaining', further requests do not work. It looks to me
> > like a limitation in the earlier code when 'vrng->quota_remaining'
> > goes to <= 0. A self-request is needed to reset things.
> >
> > I will try to find the answer.
>
> OK, so I actually read through the thing, and this is useful for such a
> scenario:
>
> Assume our rate limit is 4KB/s.
>
> * The guest queues up multiple requests, say 4KB, 8KB, 12KB.
> * We serve the first request in the queue, which is 4KB.
> * Then check_rate_limit() is triggered, and we serve the 2nd request.
> * Since the 2nd request is for 8KB, but we can only give 4KB in 1
>   second, we only give the guest 4KB. We then re-arm the timer so
>   that we can get to the next request in the list. Without this
>   re-arming, the already-queued request will not get attention (till
>   the next time the guest writes something to the queue).
> * We then serve the 3rd request, for 12KB, again with 4KB of data.
>
> One thing to observe here is that we service just the minimum data we
> can without waiting to service the entire request (i.e. we give the
> guest 4KB of data, and not 8 or 12 KB). We could do that by waiting
> for the timer to expire and then servicing the entire request.
> The current way is simpler, and better: if the guest isn't happy to
> receive just 4KB when it asked for 12, it can ask again.
>
> That also saves us trouble with live migration: if we hold onto a
> buffer, that becomes state, and we'll have to migrate it as well.
> The current model saves us from doing that.

Timers hold state and have to be migrated; this seems to be missing here.

> And now that I type all this, I recall having thought about these
> things initially. I don't know if I wrote it up somewhere, or if there
> are email conversations on this subject...
>
> So now, with your changes, here is what we can do: instead of just
> activating the timer in check_rate_limit(), we can issue a
> get_request_size() call. If the return value is > 0, it means we have
> a buffer queued up by the guest, and we can then call
> virtio_rng_process() and set activate_timer to true.
> Otherwise, there is no need to call
> virtio_rng_process() at all, and the timer's job is done, since
> there are no more outstanding requests from the guest.
>
> Makes sense?
>
>		Amit