From: Jike Song
Date: Wed, 26 Oct 2016 21:44:28 +0800
Subject: Re: [Qemu-devel] [PATCH 1/2] KVM: page track: add a new notifier type: track_flush_slot
To: Paolo Bonzini
Cc: Xiao Guangrong, Xiao Guangrong, Alex Williamson, "Tian, Kevin", Neo Jia, kvm@vger.kernel.org, qemu-devel, Xiaoguang Chen, Kirti Wankhede

On 10/21/2016 01:06 AM, Paolo Bonzini wrote:
> On 20/10/2016 03:48, Xiao Guangrong wrote:
>> I understood that the KVM side is safe; however, the vfio side is
>> independent of kvm, and the user of usrdata can fetch the kvm struct
>> at any time. Consider this scenario:
>>
>>          CPU 0                          CPU 1
>>   KVM:                            VFIO/userdata user:
>>
>>   kvm_ioctl_create_device
>>     get_kvm()
>>                                   vfio_group_get_usrdata(vfio_group)
>>   kvm_device_release
>>     put_kvm()
>>     !!! kvm refcount has gone
>>                                   use KVM struct
>>
>> Then, the user of userdata has fetched the kvm struct, but the
>> refcount has already gone.
>
> vfio_group_set_usrdata (actually kvm_vfio_group_set_kvm) has called
> kvm_get_kvm too, however. What you need is a mutex that is taken by
> vfio_group_set_usrdata and by the callers of vfio_group_get_usrdata.

Hi Paolo & Guangrong,

I walked the whole thread and became a little nervous: I don't want to
introduce a global mutex. The problem, as I understand it, is that
vfio_group_get_usrdata() returns a KVM pointer which may be stale. To
keep the pointer valid, it can call kvm_get_kvm() *before* returning
the pointer. I would apologize in advance if this idea turns out to be
total nonsense, but hey, please kindly help fix my whim :-)

[vfio.h]

struct vfio_usrdata {
    void *data;
    void (*get)(void *data);
    void (*put)(void *data);
};

struct vfio_group {
    ...
    struct vfio_usrdata *usrdata;
    ...
};

[kvm.ko]

struct vfio_usrdata kvmdata = {
    .data = kvm,
    .get  = kvm_get_kvm,
    .put  = kvm_put_kvm,
};

fn = symbol_get(vfio_group_set_usrdata);
fn(vfio_group, &kvmdata);

[vfio.ko]

void vfio_group_set_usrdata(vfio_group, usrdata)
    lock
    vfio_group->usrdata = usrdata;
    unlock

void *vfio_group_get_usrdata(vfio_group)
    lock
    struct vfio_usrdata *d = vfio_group->usrdata;
    d->get(d->data);
    unlock
    return d->data;

void vfio_group_put_usrdata(vfio_group)
    lock
    struct vfio_usrdata *d = vfio_group->usrdata;
    d->put(d->data);
    unlock

[kvmgt.ko]

Call vfio_group_get_usrdata to get kvm and vfio_group_put_usrdata to
release it; *never* call kvm_get_kvm/kvm_put_kvm directly.

--
Thanks,
Jike
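P.S. In case a concrete illustration helps: below is a minimal standalone
mock of the scheme above, written as plain userspace C with a pthread
mutex so it actually compiles and runs; it is not kernel code. The names
mock_kvm, mock_kvm_get/put and the "users" field are stand-ins I made up
for the sketch; only the vfio_usrdata / vfio_group_*_usrdata shapes follow
the pseudocode. The point is the per-group lock plus the get/put
callbacks, so the consumer never sees the pointer without a reference
already taken.

/* usrdata_mock.c -- build with: gcc -pthread usrdata_mock.c */
#include <pthread.h>
#include <stdio.h>

/* Proposed callback container; mirrors the struct vfio_usrdata sketch. */
struct vfio_usrdata {
        void *data;
        void (*get)(void *data);        /* e.g. kvm_get_kvm */
        void (*put)(void *data);        /* e.g. kvm_put_kvm */
};

struct vfio_group {
        pthread_mutex_t lock;           /* per-group lock, not a global mutex */
        struct vfio_usrdata *usrdata;
};

/* --- stand-in for kvm.ko: a refcounted object with get/put callbacks --- */

struct mock_kvm {
        int users;                      /* stands in for kvm->users_count */
};

static void mock_kvm_get(void *data)
{
        struct mock_kvm *kvm = data;
        __sync_fetch_and_add(&kvm->users, 1);
}

static void mock_kvm_put(void *data)
{
        struct mock_kvm *kvm = data;
        if (__sync_sub_and_fetch(&kvm->users, 1) == 0)
                printf("last user gone: kvm would be destroyed here\n");
}

/* --- stand-in for vfio.ko: set/get/put under the group lock ------------ */

static void vfio_group_set_usrdata(struct vfio_group *group,
                                   struct vfio_usrdata *usrdata)
{
        pthread_mutex_lock(&group->lock);
        group->usrdata = usrdata;
        pthread_mutex_unlock(&group->lock);
}

/* Returns the data with a reference already taken, or NULL if unset. */
static void *vfio_group_get_usrdata(struct vfio_group *group)
{
        void *data = NULL;

        pthread_mutex_lock(&group->lock);
        if (group->usrdata) {
                group->usrdata->get(group->usrdata->data);
                data = group->usrdata->data;
        }
        pthread_mutex_unlock(&group->lock);
        return data;
}

static void vfio_group_put_usrdata(struct vfio_group *group)
{
        pthread_mutex_lock(&group->lock);
        if (group->usrdata)
                group->usrdata->put(group->usrdata->data);
        pthread_mutex_unlock(&group->lock);
}

/* --- stand-in for kvmgt.ko: only ever uses the usrdata helpers --------- */

int main(void)
{
        struct mock_kvm kvm = { .users = 1 };
        struct vfio_usrdata kvmdata = {
                .data = &kvm,
                .get  = mock_kvm_get,
                .put  = mock_kvm_put,
        };
        struct vfio_group group = { .usrdata = NULL };

        pthread_mutex_init(&group.lock, NULL);
        vfio_group_set_usrdata(&group, &kvmdata);

        struct mock_kvm *k = vfio_group_get_usrdata(&group);
        printf("fetched kvm, users = %d\n", k->users);  /* prints 2 */
        vfio_group_put_usrdata(&group);

        mock_kvm_put(&kvm);             /* drop the creator's reference */
        pthread_mutex_destroy(&group.lock);
        return 0;
}

With something along these lines, kvmgt.ko would never need the
kvm_get_kvm/kvm_put_kvm symbols at all, and the lock lives inside the
vfio_group rather than being global.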