From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4F50B6CD.8000002@linux.vnet.ibm.com>
Date: Fri, 02 Mar 2012 07:02:21 -0500
From: Stefan Berger
MIME-Version: 1.0
References: <1323870202-25742-1-git-send-email-stefanb@linux.vnet.ibm.com> <1323870202-25742-3-git-send-email-stefanb@linux.vnet.ibm.com> <20120220220201.GD19278@redhat.com> <4F42E899.3010907@linux.vnet.ibm.com> <20120221031854.GA2502@redhat.com> <4F437DBE.90901@linux.vnet.ibm.com> <20120221121810.GA6975@redhat.com> <4F43B2B6.1040109@linux.vnet.ibm.com> <20120221195812.GB9062@redhat.com> <4F441B08.6000306@linux.vnet.ibm.com> <20120221230845.GD9062@redhat.com>
In-Reply-To: <20120221230845.GD9062@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [PATCH V14 2/7] Add TPM (frontend) hardware interface (TPM TIS) to Qemu
To: "Michael S. Tsirkin"
Cc: qemu-devel@nongnu.org, andreas.niederl@iaik.tugraz.at

On 02/21/2012 06:08 PM, Michael S. Tsirkin wrote:
> On Tue, Feb 21, 2012 at 05:30:32PM -0500, Stefan Berger wrote:
>> On 02/21/2012 02:58 PM, Michael S. Tsirkin wrote:
>>> On Tue, Feb 21, 2012 at 10:05:26AM -0500, Stefan Berger wrote:
>>>> On 02/21/2012 07:18 AM, Michael S. Tsirkin wrote:
>>>> When the backend delivers the response it checks whether the
>>>> interface is used in interrupt mode and raises the interrupt.
>>> IMO it's the frontend that should send interrupts.
>>> Yes it kind of works for isa anyway, but e.g. pci
>>> needs to update configuration etc.
>>>
>> The code that causes the interrupt to be raised is in the frontend.
>> The function doing that is invoked via callback from the backend.
>> This should be ok?
> Absolutely.
>
>>>> The backend enters the frontend code with a callback. In this function
>>>> a signal is also sent that may wake up the main thread which, upon
>>>> suspend, may be waiting for the last command to be processed and be
>>>> sleeping on a condition variable.
>>>>
>>>> I now added a function to the backend interface that is invoked by
>>>> the frontend to notify the backend of a TPM request. The backend
>>>> code can then either notify a thread (passthrough and libtpms
>>>> drivers) or create a response right away and invoke the callback into
>>>> the frontend to deliver the response (null driver). How frontend
>>>> and backend handle notifications is isolated to the frontend and
>>>> backend, with some backends (libtpms, passthrough) sharing the code
>>>> for how the notification is done.
>>>>
>>>> Stefan
>>> Right. And all the locking/threading can then be internal to the backend.
>>>
>> Do you really want to replace code like this in the frontend:
>>
>> qemu_mutex_lock(&s->state_lock)
>> [...]
>> qemu_mutex_unlock(&s->state_lock)
>>
>> with
>>
>> s->be_driver->ops->state_lock(s->be_driver)
>> [...]
>> s->be_driver->ops->state_unlock(s->be_driver)
>>
>> where the backend starts protecting frontend data structures?
> It's ugly, I hope you can do something saner:
> ops->send_command(....)
> with command encapsulating the relevant info.
>
>> At the moment there are two backends that need threading: the
>> libtpms and passthrough backends. Both will require locking of
>> data structures that belong to the frontend. Only the null driver
>> doesn't need a thread; there the main thread can call into the backend,
>> create the response, and call via callback into the frontend to
>> deliver the response. If structures are protected via mutexes, then
>> only the null driver (which we don't want anyway) may end up
>> grabbing mutexes that really aren't necessary, while the two other
>> backends need them. I don't see the mutexes as problematic. The
>> frontend at least protects its data structures for the callbacks and
>> other API calls it offers, and they simply are thread-safe.
>>
>> Stefan
> Worst case, you can take a qemu mutex. Is tpm very
> performance-sensitive to make contention on that
> lock a problem?
>
Having instrumented the code with qemu_mutex_trylock() and a counter for
failed attempts followed by qemu_mutex_lock(), I see practically no lock
contention at all, whether the Linux driver uses polling or IRQ mode:

IRQ mode:     4 failed lock attempts out of 1726208 locks -> 0.00%
polling mode: 2 failed lock attempts out of 1545216 locks -> 0.00%

I used the libtpms-based backend and ran IMA and a test suite in the VM.

   Stefan