From: Stefan Berger
Date: Mon, 05 Mar 2012 10:46:07 -0500
Subject: Re: [Qemu-devel] [PATCH V14 2/7] Add TPM (frontend) hardware interface (TPM TIS) to Qemu
To: qemu-devel@nongnu.org, "Michael S. Tsirkin"
Cc: Andreas Niederl
Message-ID: <4F54DFBF.5040204@linux.vnet.ibm.com>
In-Reply-To: <20120304225901.GA22098@redhat.com>
References: <20120220220201.GD19278@redhat.com> <4F42E899.3010907@linux.vnet.ibm.com>
 <20120221031854.GA2502@redhat.com> <4F437DBE.90901@linux.vnet.ibm.com>
 <20120221121810.GA6975@redhat.com> <4F43B2B6.1040109@linux.vnet.ibm.com>
 <20120221195812.GB9062@redhat.com> <4F441B08.6000306@linux.vnet.ibm.com>
 <20120221230845.GD9062@redhat.com> <4F50B6CD.8000002@linux.vnet.ibm.com>
 <20120304225901.GA22098@redhat.com>

Resending with people properly cc'ed.

On 03/04/2012 05:59 PM, Michael S. Tsirkin wrote:
> On Fri, Mar 02, 2012 at 07:02:21AM -0500, Stefan Berger wrote:
>> Having instrumented the code with qemu_mutex_trylock() and a counter
>> for failed attempts with subsequent qemu_mutex_lock(), I see
>> practically no lock contention at all for either polling or IRQ mode
>> being used by the Linux driver.
>>
>> IRQ mode:     4 failed lock attempts out of 1726208 locks -> 0.00%
>> polling mode: 2 failed lock attempts out of 1545216 locks -> 0.00%
>>
>> I used the libtpms based backend and ran IMA and a test suite
>> in the VM.
>>
>>
>>    Stefan
> so maybe you can just do qemu_mutex_lock_iothread similar
> to what kvm does.
>

I have tried that now. I ended up having to remove the locking from the
TIS code, except for the one place where the result is delivered from
the backend to the frontend via the thread. I had to remove the locks
because the pthread library detects a deadlock, likely due to an
attempted double-locking of the global lock, which must not be locked
more than once.
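For reference, the failure mode boils down to something like the
following standalone sketch. This is plain pthreads, not QEMU code, and
it assumes the mutex is of the error-checking type, which the
error_exit()/abort() in the backtrace below suggests is what
qemu_mutex_lock() sits on top of:

/* Standalone illustration: a non-recursive, error-checking pthread
 * mutex refuses a second lock from the same thread.  This is the
 * double-locking pattern that taking the iothread lock again from the
 * device code runs into when the vcpu thread already holds it. */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

int main(void)
{
    pthread_mutex_t lock;
    pthread_mutexattr_t attr;
    int rc;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&lock, &attr);

    pthread_mutex_lock(&lock);        /* first lock, e.g. in kvm_cpu_exec() */
    rc = pthread_mutex_lock(&lock);   /* second lock, e.g. in the device */

    /* POSIX guarantees EDEADLK here; QEMU treats any error as fatal. */
    printf("second lock: %s\n", rc == EDEADLK ? "EDEADLK" : "ok");
    return 0;
}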
The deadlock detection by itself is not the scary part, but here's the
stack trace from while I still had the lock in there:

#0  0x00007ffff37e2215 in __GI_raise (sig=6)
    at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1  0x00007ffff37e3b2b in __GI_abort () at abort.c:93
#2  0x00005555555c1069 in error_exit (err=<optimized out>,
    msg=0x55555583f350 "qemu_mutex_lock") at qemu-thread-posix.c:25
#3  0x00005555556daf10 in qemu_mutex_lock (mutex=<optimized out>)
    at qemu-thread-posix.c:56
#4  0x00005555557a6d43 in tpm_tis_mmio_read (opaque=0x555556e06a40,
    addr=3840, size=<optimized out>)
    at /home/stefanb/qemu/qemu-git.pt/hw/tpm_tis.c:448
#5  0x000055555575e157 in memory_region_read_accessor (opaque=<optimized out>,
    addr=<optimized out>, value=0x7fffef596c60, size=<optimized out>,
    shift=0, mask=4294967295) at /home/stefanb/qemu/qemu-git.pt/memory.c:259
#6  0x000055555575e270 in access_with_adjusted_size (addr=3840,
    value=0x7fffef596c60, size=4, access_size_min=<optimized out>,
    access_size_max=<optimized out>,
    access=0x55555575e120 <memory_region_read_accessor>,
    opaque=0x555556e06af0) at /home/stefanb/qemu/qemu-git.pt/memory.c:304
#7  0x0000555555762b80 in memory_region_dispatch_read1 (size=4, addr=3840,
    mr=0x555556e06af0) at /home/stefanb/qemu/qemu-git.pt/memory.c:928
#8  memory_region_dispatch_read (size=4, addr=3840, mr=0x555556e06af0)
    at /home/stefanb/qemu/qemu-git.pt/memory.c:960
#9  io_mem_read (io_index=<optimized out>, addr=3840, size=4)
    at /home/stefanb/qemu/qemu-git.pt/memory.c:1558
#10 0x000055555573a876 in cpu_physical_memory_rw (addr=4275310336,
    buf=0x7ffff7ff4028 "", len=4, is_write=0)
    at /home/stefanb/qemu/qemu-git.pt/exec.c:3620
#11 0x0000555555755225 in kvm_cpu_exec (env=0x555556561100)
    at /home/stefanb/qemu/qemu-git.pt/kvm-all.c:1173
#12 0x0000555555731201 in qemu_kvm_cpu_thread_fn (arg=0x555556561100)
    at /home/stefanb/qemu/qemu-git.pt/cpus.c:732
#13 0x00007ffff64a0b41 in start_thread (arg=0x7fffef597700)
    at pthread_create.c:305
#14 0x00007ffff388c49d in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

It may say qemu_mutex_lock, but it is really calling
qemu_mutex_lock_iothread. The initial lock happens in kvm_cpu_exec
(#11); the subsequent lock then occurs in tpm_tis_mmio_read (#4), far
away from the first one. So if something changes in the code in
between, we may end up with a race in the TPM TIS driver once the lock
is factored out of the device. Personally, I would therefore prefer
local locking, which can withstand changes in other parts of the code,
unless one can be sure that the global lock is never going to go away
and that all code paths reaching the TIS driver functions that need the
locking hold that global lock.

   Stefan
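P.S.: To make the local-locking idea a bit more concrete, below is a
minimal sketch of what I mean. The struct, field and function names are
illustrative only (they are not taken from the actual patch); QemuMutex
and qemu_mutex_lock()/qemu_mutex_unlock() are the primitives it relies
on:

#include <stdint.h>
#include "qemu-thread.h"     /* QemuMutex, qemu_mutex_lock()/unlock() */

/* Hypothetical device state with its own lock: every entry point
 * (MMIO handlers, backend-thread callbacks) takes this local lock,
 * independently of whether the caller happens to hold the global
 * iothread lock. */
typedef struct TPMTISLocalState {
    QemuMutex state_lock;    /* guards the state shared with the backend */
    uint32_t sts;            /* example of a guarded register */
} TPMTISLocalState;

static uint32_t tis_read_sts(TPMTISLocalState *s)
{
    uint32_t val;

    qemu_mutex_lock(&s->state_lock);   /* local lock, not the iothread lock */
    val = s->sts;
    qemu_mutex_unlock(&s->state_lock);

    return val;
}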