From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 2 Sep 2015 15:32:01 +0530
From: Amit Shah
Message-ID: <20150902100201.GG13778@grmbl.mre>
References: <1441046762-5788-1-git-send-email-thuth@redhat.com>
 <1441046762-5788-3-git-send-email-thuth@redhat.com>
 <20150902053412.GE13778@grmbl.mre>
 <20150902074801.GA6537@voom.redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20150902074801.GA6537@voom.redhat.com>
Subject: Re: [Qemu-devel] [PATCH v2 2/2] ppc/spapr_hcall: Implement H_RANDOM hypercall in QEMU
To: David Gibson
Cc: Thomas Huth, qemu-devel@nongnu.org, armbru@redhat.com, agraf@suse.de, michael@ellerman.id.au, qemu-ppc@nongnu.org

On (Wed) 02 Sep 2015 [17:48:01], David Gibson wrote:
> On Wed, Sep 02, 2015 at 11:04:12AM +0530, Amit Shah wrote:
> > On (Mon) 31 Aug 2015 [20:46:02], Thomas Huth wrote:
> > > The PAPR interface provides a hypercall to pass high-quality
> > > hardware generated random numbers to guests. So let's provide
> > > this call in QEMU, too, so that guests that do not support
> > > virtio-rnd yet can get good random numbers, too.
> >
> > virtio-rng, not rnd.
> >
> > Can you elaborate what you mean by 'guests that do not support
> > virtio-rng yet'?  The Linux kernel has had the virtio-rng driver
> > since 2.6.26, so I'm assuming that's not the thing you're alluding
> > to.
> >
> > Not saying this hypercall isn't a good idea, just asking why.  I
> > think there are valid reasons, like the driver failing to load,
> > being compiled out, or simply being loaded too late in the boot
> > cycle.
>
> Yeah, I think we'd be talking about guests that just don't have it
> configured, although I suppose it's possible someone out there is
> using something earlier than 2.6.26 as well.  Note that H_RANDOM has
> been supported under PowerVM for a long time, and PowerVM doesn't
> have any virtio support.  So it is plausible that there are guests
> out there with H_RANDOM support but no virtio-rng support, although
> I don't know of any examples specifically.  RHEL6 had virtio
> support, including virtio-rng, more or less by accident (since it
> was only supported under PowerVM).  SLES may not have made the same
> fortunate error - I don't have a system handy to check.

RHEL6 also used 2.6.32, which means it inherited the driver from
upstream.  But you're right that x86 didn't have a device for
virtio-rng then.

> > > Please note that this hypercall should provide "good" random data
> > > instead of pseudo-random, so the function uses the RngBackend to
> > > retrieve the values instead of using a "simple" library function
> > > like rand() or g_random_int(). Since there are multiple
> > > RngBackends available, the user must select an appropriate
> > > backend via the "h-random" property of the machine state to
> > > enable it, e.g.
> > >
> > >     qemu-system-ppc64 -M pseries,h-random=rng-random ...
> > >
> > > to use the /dev/random backend, or "h-random=rng-egd" to use the
> > > Entropy Gathering Daemon instead.
> >
> > I was going to suggest using -object here, but I see you and David
> > have already reached an agreement on that.
> >
> > Out of curiosity: what does the host kernel use for its source when
> > going the hypercall route?
>
> I believe it draws from the same entropy pool as /dev/random.

OK - I'll take a look there as well.
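[For concreteness, the -object-based interface the discussion points at
pairs an rng backend object with a device that consumes it.  The
spapr-rng device name below is what this work eventually became known
as; treat the exact spelling as an assumption rather than part of this
patch:]

```
qemu-system-ppc64 -M pseries \
    -object rng-random,filename=/dev/random,id=rng0 \
    -device spapr-rng,rng=rng0 ...
```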
> > > +static void random_recv(void *dest, const void *src, size_t size)
> > > +{
> > > +    HRandomData *hrcrdp = dest;
> > > +
> > > +    if (src && size > 0) {
> > > +        memcpy(&hrcrdp->val.v8[hrcrdp->received], src, size);
> > > +        hrcrdp->received += size;
> > > +    }
> > > +    qemu_sem_post(&hrcrdp->sem);
> > > +}
> > > +
> > > +static target_ulong h_random(PowerPCCPU *cpu, sPAPRMachineState *spapr,
> > > +                             target_ulong opcode, target_ulong *args)
> > > +{
> > > +    HRandomData hrcrd;
> > > +
> > > +    if (!hrandom_rng) {
> > > +        return H_HARDWARE;
> > > +    }
> > > +
> > > +    qemu_sem_init(&hrcrd.sem, 0);
> > > +    hrcrd.val.v64 = 0;
> > > +    hrcrd.received = 0;
> > > +
> > > +    qemu_mutex_unlock_iothread();
> > > +    while (hrcrd.received < 8) {
> > > +        rng_backend_request_entropy((RngBackend *)hrandom_rng,
> > > +                                    8 - hrcrd.received, random_recv, &hrcrd);
> > > +        qemu_sem_wait(&hrcrd.sem);
> > > +    }
> >
> > Is it possible for a second hypercall to arrive while the first is
> > waiting for the backend to provide data?
>
> Yes it is.  The hypercall itself is synchronous, but you could get
> concurrent calls from different guest CPUs.  Hence the need for
> iothread unlocking.

OK, thanks!

		Amit