Date: Tue, 21 Feb 2012 14:18:11 +0200
From: "Michael S. Tsirkin"
To: Stefan Berger
Cc: qemu-devel@nongnu.org, andreas.niederl@iaik.tugraz.at
Subject: Re: [Qemu-devel] [PATCH V14 2/7] Add TPM (frontend) hardware interface (TPM TIS) to Qemu
Message-ID: <20120221121810.GA6975@redhat.com>
In-Reply-To: <4F437DBE.90901@linux.vnet.ibm.com>
References: <1323870202-25742-1-git-send-email-stefanb@linux.vnet.ibm.com>
 <1323870202-25742-3-git-send-email-stefanb@linux.vnet.ibm.com>
 <20120220220201.GD19278@redhat.com>
 <4F42E899.3010907@linux.vnet.ibm.com>
 <20120221031854.GA2502@redhat.com>
 <4F437DBE.90901@linux.vnet.ibm.com>

On Tue, Feb 21, 2012 at 06:19:26AM -0500, Stefan Berger wrote:
> On 02/20/2012 10:18 PM, Michael S. Tsirkin wrote:
> >On Mon, Feb 20, 2012 at 07:43:05PM -0500, Stefan Berger wrote:
> >>On 02/20/2012 05:02 PM, Michael S. Tsirkin wrote:
> >>>On Wed, Dec 14, 2011 at 08:43:17AM -0500, Stefan Berger wrote:
> >>>>+/*
> >>>>+ * Send a TPM request.
> >>>>+ * Call this with the state_lock held so we can sync with the
> >>>>+ * receive callback.
> >>>>+ */
> >>>>+static void tpm_tis_tpm_send(TPMState *s, uint8_t locty)
> >>>>+{
> >>>>+    TPMTISState *tis = &s->s.tis;
> >>>>+
> >>>>+    tpm_tis_show_buffer(&tis->loc[locty].w_buffer, "tpm_tis: To TPM");
> >>>>+
> >>>>+    s->command_locty = locty;
> >>>>+    s->cmd_locty = &tis->loc[locty];
> >>>>+
> >>>>+    /* w_offset serves as the length indicator for the data;
> >>>>+       it's reset when the response comes back */
> >>>>+    tis->loc[locty].status = TPM_TIS_STATUS_EXECUTION;
> >>>>+    tis->loc[locty].sts &= ~TPM_TIS_STS_EXPECT;
> >>>>+
> >>>>+    s->to_tpm_execute = true;
> >>>>+    qemu_cond_signal(&s->to_tpm_cond);
> >>>>+}
> >>>
> >>>What happens IIUC is that the frontend sets to_tpm_execute
> >>>and signals a condition, and the backend clears it
> >>>and waits on the condition.
> >>>
> >>>So how about moving all the signalling
> >>>and locking out to the backend, and having the frontend
> >>>invoke a callback to signal it?
> >>>
> >>>The whole threading thing then becomes a work-around
> >>>for a backend that does not support select(),
> >>>instead of spilling out into the frontend.
> >>>
> >>How do I get the lock calls (qemu_mutex_lock(&s->state_lock)) out
> >>of the frontend? Do you want me to add callbacks to the backend
> >>interface, one for locking (s->be_driver->ops->state_lock(s)) and
> >>one for unlocking (s->be_driver->ops->state_unlock(tpm_be)) the
> >>state that really belongs to the frontend (the state is 's'), and
> >>invoke them as shown in parentheses while still keeping
> >>s->state_lock around? Ideally the locks would end up being no-ops
> >>if select() were available, but in the end every backend will need
> >>to support that lock.
> >>
> >>[The lock protects the common structure so that the thread in the
> >>backend can deliver the response to a request while the OS, for
> >>example, polls the hardware interface for its current state.]
> >>
> >>
> >>   Stefan
> >
> >Well, this is just an idea, please do not take this as
> >a request or anything like that. Maybe it is a dumb one.
> >
> >Maybe something like what you describe.
>
> I am starting to wonder what we're trying to achieve here. We have
> a producer-consumer problem with different threads. Both threads
> need some locking constructs along with the signalling (condition).
> The backend needs to be written in a certain way to work with the
> frontend; locking and signalling are part of this. So I don't see
> that it makes much sense to move all that code around, especially
> since there is only one backend right now. Maybe something really
> great can be done once there is a second backend.

There are three reasons why I think the code could be improved:

1. Your backend does not expose a reentrant asynchronous API, but
   another backend might. So it might be a better idea to hide this
   detail and build a reentrant asynchronous API on top of what the
   OS supplies (see the sketch below).

2. Your backend looks into the frontend's data structures. This will
   make it impossible to implement another frontend.

3. I personally find it very hard to follow inter-thread
   communication based on shared memory and conditions if it is
   spread across two different patches and different files. This can
   alternatively be addressed by documenting the
   synchronization/locking strategy.
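Here is a rough sketch of what I mean for points 1 and 2. Completely
untested, and all the names (TPMBackendCmd, tpm_backend_deliver_request,
the TPMBackend fields) are invented for illustration; only the
qemu-thread primitives are real. The point is just that the mutex and
condition variable never appear in the frontend:

/* invented types, for illustration only */
typedef struct TPMBackendCmd {
    uint8_t locty;          /* locality the request came in on */
    const uint8_t *in;      /* request bytes */
    uint32_t in_len;
    uint8_t *out;           /* response buffer */
    uint32_t out_len;
} TPMBackendCmd;

typedef struct TPMBackend {
    QemuMutex state_lock;   /* now private to the backend */
    QemuCond to_tpm_cond;
    bool to_tpm_execute;
    TPMBackendCmd *cmd;
} TPMBackend;

/* what the frontend calls; queues the command, wakes the worker
 * and returns immediately */
static void tpm_backend_deliver_request(TPMBackend *tb,
                                        TPMBackendCmd *cmd)
{
    qemu_mutex_lock(&tb->state_lock);
    tb->cmd = cmd;
    tb->to_tpm_execute = true;
    qemu_cond_signal(&tb->to_tpm_cond);
    qemu_mutex_unlock(&tb->state_lock);
}

The frontend side then shrinks to something like:

static void tpm_tis_tpm_send(TPMState *s, uint8_t locty)
{
    TPMTISState *tis = &s->s.tis;

    tpm_tis_show_buffer(&tis->loc[locty].w_buffer, "tpm_tis: To TPM");

    tis->loc[locty].status = TPM_TIS_STATUS_EXECUTION;
    tis->loc[locty].sts &= ~TPM_TIS_STS_EXPECT;

    tpm_backend_deliver_request(s->be_driver, &s->cmd);
}

and a backend that has a real asynchronous API underneath can
implement deliver_request without any worker thread at all.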
> >Alternatively, I imagined that you can pass a copy
> >or pointer of the necessary state to the backend,
> >which queues the command and wakes the worker.
> >In the reverse direction, the backend queues a response,
> >and when the OS polls, you dequeue it and update the state.
> >
>
> The OS doesn't necessarily need to poll. Polling is just one mode
> of operation of the OS; the other is interrupt-driven, where the
> backend raises the interrupt once it has delivered the response to
> the frontend.
>
>
>    Stefan

So you will also need to signal the frontend when it must interrupt
the guest. This is not a problem; for example, you can use a
qemu_eventfd object for this.

> >Can this work?
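To make the response path concrete -- the queued response from the
quoted paragraph above plus the eventfd-style kick -- here is another
completely untested sketch. TPMResponse, resp_lock, resp_queue,
kick_fds, recv_cb/recv_opaque and both function names are invented;
qemu_set_fd_handler(), the QTAILQ macros and the qemu-thread calls are
real. Assume the made-up TPMBackend above additionally holds a QemuMutex
resp_lock, a QTAILQ_HEAD(, TPMResponse) resp_queue, a pipe (or eventfd)
pair kick_fds[] whose non-blocking read end is registered via
qemu_set_fd_handler(tb->kick_fds[0], tpm_backend_read, NULL, tb), and a
recv_cb the frontend registered at init time:

typedef struct TPMResponse {
    QTAILQ_ENTRY(TPMResponse) next;
    uint8_t *buf;
    uint32_t len;
} TPMResponse;

/* runs in the backend worker thread */
static void tpm_worker_deliver_response(TPMBackend *tb,
                                        TPMResponse *resp)
{
    uint8_t byte = 0;

    qemu_mutex_lock(&tb->resp_lock);
    QTAILQ_INSERT_TAIL(&tb->resp_queue, resp, next);
    qemu_mutex_unlock(&tb->resp_lock);

    if (write(tb->kick_fds[1], &byte, 1) != 1) {
        /* a failed/short write means the iothread is already kicked
           or the pipe is full -- the wakeup still happens */
    }
}

/* runs in the main loop via qemu_set_fd_handler() */
static void tpm_backend_read(void *opaque)
{
    TPMBackend *tb = opaque;
    TPMResponse *resp;
    uint8_t byte;

    while (read(tb->kick_fds[0], &byte, 1) == 1) {
        /* drain the kick; the fd is non-blocking */
    }

    qemu_mutex_lock(&tb->resp_lock);
    while ((resp = QTAILQ_FIRST(&tb->resp_queue)) != NULL) {
        QTAILQ_REMOVE(&tb->resp_queue, resp, next);
        qemu_mutex_unlock(&tb->resp_lock);
        /* copy into the locality buffer, update sts and raise the
         * interrupt -- all in the iothread, so the frontend needs
         * no lock of its own; recv_cb consumes resp */
        tb->recv_cb(tb->recv_opaque, resp);
        qemu_mutex_lock(&tb->resp_lock);
    }
    qemu_mutex_unlock(&tb->resp_lock);
}

The worker never touches TPMTISState, and the guest-visible state
only changes in the iothread. That also covers the polling mode: mmio
reads simply see whatever recv_cb last committed.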