Date: Thu, 16 Jun 2016 20:24:39 +0100
From: "Dr. David Alan Gilbert"
Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
Message-ID: <20160616192439.GE19710@work-vm>
In-Reply-To: <5762F348.5050105@linux.vnet.ibm.com>
To: Stefan Berger
Cc: Stefan Berger, "mst@redhat.com", "qemu-devel@nongnu.org",
    "hagen.lauer@huawei.com", "Xu, Quan", "silviu.vlasceanu@gmail.com",
    "SERBAN, CRISTINA", "SHIH, CHING C", berrange@redhat.com

* Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> On 06/16/2016 01:54 PM, Dr. David Alan Gilbert wrote:
> > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > > On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:
> > > > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > > > > On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
> > > > > > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > > > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > > > > >
> > > > > > > > So what was the multi-instance vTPM proxy driver patch set about?
> > > > > > > That's for containers.
> > > > > > Why have the two mechanisms? Can you explain how the multi-instance
> > > > > > proxy works; my brief reading when I saw your patch series seemed
> > > > > > to suggest it could be used instead of CUSE for the non-container case.
> > > > > The multi-instance vtpm proxy driver basically works through an
> > > > > ioctl() on /dev/vtpmx that is used to spawn a new front- and backend
> > > > > pair. The front-end is a new /dev/tpm%d device that can then be moved
> > > > > into the container (mknod + device cgroup setup). The backend is an
> > > > > anonymous file descriptor that is passed to a TPM emulator, which
> > > > > reads the TPM requests coming in from that /dev/tpm%d and returns the
> > > > > responses. Since it is implemented as a kernel driver, we can hook it
> > > > > into the Linux Integrity Measurement Architecture (IMA) and have IMA
> > > > > use it in place of a hardware TPM driver. There is ongoing work on
> > > > > namespacing support for IMA to have an independent IMA instance per
> > > > > container, so that this can be used.
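(A rough sketch only, not taken from the patch set: roughly what driving
that /dev/vtpmx ioctl might look like from a container management stack.
It assumes the uapi header linux/vtpm_proxy.h from the proxy driver
patches, with VTPM_PROXY_IOC_NEW_DEV and struct vtpm_proxy_new_dev; field
names and flags may differ from the final driver.)

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vtpm_proxy.h>   /* assumed uapi header from the proxy driver */

int main(void)
{
    struct vtpm_proxy_new_dev new_dev = { .flags = 0 };
    int vtpmx = open("/dev/vtpmx", O_RDWR);

    /* Ask the driver to spawn one front-/backend pair. */
    if (vtpmx < 0 || ioctl(vtpmx, VTPM_PROXY_IOC_NEW_DEV, &new_dev) < 0) {
        perror("vtpmx");
        return 1;
    }

    /*
     * new_dev.tpm_num names the /dev/tpm%d front-end that gets moved into
     * the container (mknod + device cgroup setup); new_dev.fd is the
     * anonymous backend fd handed to the TPM emulator, which reads TPM
     * requests from it and writes the responses back.
     */
    printf("front-end: /dev/tpm%u  backend fd: %u\n",
           new_dev.tpm_num, new_dev.fd);
    close(vtpmx);
    return 0;
}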
> > > > >
> > > > > A TPM does not only have a data channel (/dev/tpm%d) but also a
> > > > > control channel, which is primarily implemented in its hardware
> > > > > interface and is typically not fully accessible to user space. The
> > > > > vtpm proxy driver _only_ supports the data channel, through which it
> > > > > basically relays TPM commands and responses from user space to the
> > > > > TPM emulator. The control channel is provided by the software
> > > > > emulator through an additional TCP or UnixIO socket, or in the case
> > > > > of CUSE through ioctls. The control channel allows resetting the TPM
> > > > > when the container/VM is reset, setting the locality of a command,
> > > > > and retrieving the state of the vTPM (for suspend) and setting it
> > > > > again (for resume), among several other things. The commands for the
> > > > > control channel are defined here:
> > > > >
> > > > > https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
> > > > >
> > > > > For a container we would require that its management stack
> > > > > initializes and resets the vTPM when the container is rebooted.
> > > > > (These are typically operations that are done through pulses on the
> > > > > motherboard.)
> > > > >
> > > > > In the case of QEMU we would need more access to the control
> > > > > channel, which includes initialization and reset of the vTPM,
> > > > > getting and setting its state for suspend/resume/migration, setting
> > > > > the locality of commands, etc., so that all low-level functionality
> > > > > is accessible to the emulator (QEMU). The proxy driver does not help
> > > > > with this, but we should use the swtpm implementation that either
> > > > > has the CUSE interface with a control channel (through ioctls) or
> > > > > provides UnixIO and TCP sockets for the control channel.
> > > > OK, that makes sense; does the control interface need to be handled
> > > > by QEMU or by libvirt or both?
> > > The control interface needs to be handled primarily by QEMU.
> > >
> > > In the case of the libvirt implementation I am running an external
> > > program, swtpm_ioctl, that uses the control channel to gracefully shut
> > > down any existing running TPM emulator whose device name happens to be
> > > the same as the device of the TPM emulator that is to be created. So it
> > > cleans up before starting a new TPM emulator, just to make sure that
> > > the new TPM instance can be started. Detail...
> > >
> > > > Either way, I think you're saying that with your kernel interface + a
> > > > UnixIO socket you can avoid the CUSE stuff?
> > > So in the case of QEMU you don't need that new kernel device driver --
> > > it's primarily meant for containers. For QEMU one would start the TPM
> > > emulator and make sure that QEMU has access to the data and control
> > > channels, which are now offered as
> > >
> > > - CUSE interface with ioctl
> > > - TCP + TCP
> > > - UnixIO + TCP
> > > - TCP + UnixIO
> > > - UnixIO + UnixIO
> > > - file descriptors passed from invoker
> > OK, I'm trying to remember back; I'll admit to not having liked using
> > CUSE, but didn't using TCP/Unix/fd for the actual TPM side require a lot
> > of code to add a qemu interface that wasn't ioctl?
>
> Adding these additional interfaces to the TPM was a bigger effort, yes.

Right, so that code isn't in upstream qemu, is it?

> > Doesn't using the kernel driver give you the benefit of both worlds,
> > i.e. the non-control side in QEMU is unchanged.
>
> Yes. I am not sure what you are asking, though. A control channel is
> necessary no matter what. The kernel driver talks to /dev/vtpm- via a
> file descriptor and uses commands sent through ioctl for the control
> channel. Whether QEMU now uses an fd that is a UnixIO or TCP socket to
> send the commands to the TPM, or an fd that uses CUSE, doesn't matter
> much on the side of QEMU. The control channel may be a bit different
> when using ioctl versus an fd (for UnixIO or TCP). I am not sure why we
> would send commands through that vTPM proxy driver in the case of QEMU
> rather than talking to the TPM emulator directly.
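(For concreteness, a rough sketch of issuing one control-channel command
to the emulator over a UnixIO socket. The command set and payload layouts
are the ones defined in the tpm_ioctl.h linked above, but the CMD_INIT
value and the socket path used below are placeholders that I haven't
checked against that header.)

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>     /* htonl/ntohl: control messages are big-endian */
#include <sys/socket.h>
#include <sys/un.h>

#define CMD_INIT 2u        /* placeholder -- take the real value from tpm_ioctl.h */

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    uint32_t msg[2] = { htonl(CMD_INIT), htonl(0) };   /* command + init flags */
    uint32_t res;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    /* "/tmp/swtpm-ctrl" stands in for wherever the emulator's control
     * socket was created. */
    strncpy(addr.sun_path, "/tmp/swtpm-ctrl", sizeof(addr.sun_path) - 1);

    if (fd < 0 ||
        connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        write(fd, msg, sizeof(msg)) != sizeof(msg) ||
        read(fd, &res, sizeof(res)) != sizeof(res)) {
        perror("control channel");
        return 1;
    }

    printf("CMD_INIT returned 0x%x\n", ntohl(res));    /* 0 == success */
    close(fd);
    return 0;
}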
Right, so what I'm thinking is:
  a) QEMU talks to /dev/vtpm-whatever for the normal TPM stuff;
     no/little code needs to be added to qemu upstream for that
  b) Then you talk to the control side via an fd/socket;
     you need to add your existing code for that.

So that doesn't depend on CUSE, and it doesn't depend on your particular
vTPM implementation (except for the control socket data, but then
hopefully that's pretty abstract); all good?

Dave

-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK