From: Jike Song <jike.song@intel.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: pbonzini@redhat.com, guangrong.xiao@linux.intel.com,
	kwankhede@nvidia.com, cjia@nvidia.com, kevin.tian@intel.com,
	kvm@vger.kernel.org
Subject: Re: [v6 2/3] vfio: support notifier chain in vfio_group
Date: Sat, 19 Nov 2016 12:14:29 +0800
Message-ID: <582FD1A5.1030207@intel.com>
In-Reply-To: <20161118105557.70851cd5@t450s.home>

On 11/19/2016 01:55 AM, Alex Williamson wrote:
> On Fri, 18 Nov 2016 19:01:48 +0800
> Jike Song <jike.song@intel.com> wrote:
>> Beyond vfio_iommu events, users might also be interested in
>> vfio_group events. For example, if a vfio_group is used along
>> with QEMU/KVM, users can be notified whenever the kvm pointer
>> is set to or cleared from the vfio_group.
>>
>> Currently only VFIO_GROUP_NOTIFY_SET_KVM is supported.
>>
>> Cc: Alex Williamson <alex.williamson@redhat.com>
>> Cc: Kirti Wankhede <kwankhede@nvidia.com>
>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> Cc: Xiao Guangrong <guangrong.xiao@linux.intel.com>
>> Signed-off-by: Jike Song <jike.song@intel.com>
>> ---
>>  drivers/vfio/vfio.c  | 69 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>>  include/linux/vfio.h |  6 +++++
>>  2 files changed, 75 insertions(+)
>>
>> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
>> index ec62bec..e2bb197 100644
>> --- a/drivers/vfio/vfio.c
>> +++ b/drivers/vfio/vfio.c
>> @@ -86,6 +86,8 @@ struct vfio_group {
>>  	struct mutex			unbound_lock;
>>  	atomic_t			opened;
>>  	bool				noiommu;
>> +	struct kvm			*kvm;
>> +	struct blocking_notifier_head	notifier;
>>  };
>>  
>>  struct vfio_device {
>> @@ -339,6 +341,7 @@ static struct vfio_group *vfio_create_group(struct iommu_group *iommu_group)
>>  #ifdef CONFIG_VFIO_NOIOMMU
>>  	group->noiommu = (iommu_group_get_iommudata(iommu_group) == &noiommu);
>>  #endif
>> +	BLOCKING_INIT_NOTIFIER_HEAD(&group->notifier);
>>  
>>  	group->nb.notifier_call = vfio_iommu_group_notifier;
>>  
>> @@ -1015,6 +1018,63 @@ static long vfio_ioctl_check_extension(struct vfio_container *container,
>>  	return ret;
>>  }
>>  
>> +void vfio_group_set_kvm(struct vfio_group *group, struct kvm *kvm)
>> +{
>> +	group->kvm = kvm;
>> +	blocking_notifier_call_chain(&group->notifier,
>> +				VFIO_GROUP_NOTIFY_SET_KVM, kvm);
>> +}
>> +EXPORT_SYMBOL_GPL(vfio_group_set_kvm);
>> +
>> +static int vfio_register_group_notifier(struct vfio_group *group,
>> +					unsigned long *events,
>> +					struct notifier_block *nb)
>> +{
>> +	int ret;
>> +	bool set_kvm = false;
>> +
>> +	if (*events & VFIO_GROUP_NOTIFY_SET_KVM)
>> +		set_kvm = true;
>> +
>> +	/* clear known events */
>> +	*events &= ~VFIO_GROUP_NOTIFY_SET_KVM;
>> +
>> +	/* refuse to continue if still events remaining */
>> +	if (*events)
>> +		return -EINVAL;
>> +
>> +	if (!atomic_inc_not_zero(&group->opened))
>> +		return -EINVAL;
> 
> vfio_group.opened is only used to make sure we don't allow multiple
> opens of the group, incrementing it doesn't currently assure the group
> remains opened.  What happens if the user process releases the group in
> the midst of this?

Thanks for pointing this out.
Checking group->opened seems good enough to tell that the group is
currently open, but beyond that I couldn't find an existing mechanism to
prevent a concurrent vfio_group_fops_release(). Could you enlighten me
a bit?
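
Just to make sure I read the race correctly: is it roughly the
interleaving sketched below? This is only an illustration against this
patch; the two-column layout and the exact ordering are hypothetical.

    /*
     * Hypothetical interleaving between a vendor driver registering a
     * group notifier and the user process closing the group fd:
     *
     *   vendor driver                          user process
     *
     *   vfio_register_group_notifier()
     *     atomic_inc_not_zero(&group->opened)  // opened: 1 -> 2
     *                                          vfio_group_fops_release()
     *                                            vfio_group_try_dissolve_container()
     *                                            atomic_dec(&group->opened)
     *     blocking_notifier_chain_register()
     *     // replay VFIO_GROUP_NOTIFY_SET_KVM with group->kvm
     *     atomic_dec(&group->opened)
     *
     * The increment keeps "opened" non-zero, but nothing prevents the
     * release path from dissolving the container underneath the
     * registration and replay.
     */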

>> +
>> +	ret = blocking_notifier_chain_register(&group->notifier, nb);
>> +
>> +	/*
>> +	 * The attaching of kvm and vfio_group might already happen, so
>> +	 * here we replay once upon registration.
>> +	 */
>> +	if (!ret && set_kvm && group->kvm)
>> +		blocking_notifier_call_chain(&group->notifier,
>> +					VFIO_GROUP_NOTIFY_SET_KVM, group->kvm);
>> +
>> +	atomic_dec(&group->opened);
>> +
>> +	return ret;
>> +}
>> +
>> +static int vfio_unregister_group_notifier(struct vfio_group *group,
>> +					 struct notifier_block *nb)
>> +{
>> +	int ret;
>> +
>> +	if (!atomic_inc_not_zero(&group->opened))
>> +		return -EINVAL;
>> +
>> +	ret = blocking_notifier_chain_unregister(&group->notifier, nb);
>> +
>> +	atomic_dec(&group->opened);
>> +	return ret;
>> +}
>> +
>>  /* hold write lock on container->group_lock */
>>  static int __vfio_container_attach_groups(struct vfio_container *container,
>>  					  struct vfio_iommu_driver *driver,
>> @@ -1581,6 +1641,9 @@ static int vfio_group_fops_release(struct inode *inode, struct file *filep)
>>  
>>  	filep->private_data = NULL;
>>  
>> +	/* Any user didn't unregister? */
>> +	WARN_ON(group->notifier.head);
> 
> Have you tested whether the ordering is correct that the vendor driver
> sees the device release first and can therefore unregister notifiers
> before the group is released when the user process is killed?
>

Yes, the mdev device close happens before the vfio_group fops close.
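
In other words, a vendor driver is expected to unregister from its own
release path, roughly like the sketch below. This is illustrative only:
the my_* names and callbacks are made up; only vfio_register_notifier(),
vfio_unregister_notifier() and the VFIO_GROUP_NOTIFY bits come from this
series.

    #include <linux/notifier.h>
    #include <linux/vfio.h>

    /* Illustrative consumer of VFIO_GROUP_NOTIFY_SET_KVM. */
    static int my_group_notifier(struct notifier_block *nb,
                                 unsigned long action, void *data)
    {
            if (action == VFIO_GROUP_NOTIFY_SET_KVM) {
                    /* data is the struct kvm pointer, NULL when cleared */
                    /* stash or drop it as the vendor driver needs */
            }
            return NOTIFY_OK;
    }

    static struct notifier_block my_group_nb = {
            .notifier_call = my_group_notifier,
    };

    static int my_device_open(struct device *dev)
    {
            unsigned long events = VFIO_GROUP_NOTIFY_SET_KVM;

            /* replays SET_KVM if kvm was attached before registration */
            return vfio_register_notifier(dev, VFIO_GROUP_NOTIFY,
                                          &events, &my_group_nb);
    }

    static void my_device_release(struct device *dev)
    {
            /* runs before vfio_group_fops_release(), hence before the WARN_ON */
            vfio_unregister_notifier(dev, VFIO_GROUP_NOTIFY, &my_group_nb);
    }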

--
Thanks,
Jike
>> +
>>  	vfio_group_try_dissolve_container(group);
>>  
>>  	atomic_dec(&group->opened);
>> @@ -2088,6 +2151,9 @@ int vfio_register_notifier(struct device *dev, vfio_notify_type_t type,
>>  	case VFIO_IOMMU_NOTIFY:
>>  		ret = vfio_register_iommu_notifier(group, events, nb);
>>  		break;
>> +	case VFIO_GROUP_NOTIFY:
>> +		ret = vfio_register_group_notifier(group, events, nb);
>> +		break;
>>  	default:
>>  		ret = -EINVAL;
>>  	}
>> @@ -2114,6 +2180,9 @@ int vfio_unregister_notifier(struct device *dev, vfio_notify_type_t type,
>>  	case VFIO_IOMMU_NOTIFY:
>>  		ret = vfio_unregister_iommu_notifier(group, nb);
>>  		break;
>> +	case VFIO_GROUP_NOTIFY:
>> +		ret = vfio_unregister_group_notifier(group, nb);
>> +		break;
>>  	default:
>>  		ret = -EINVAL;
>>  	}
>> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
>> index 6f3ff31..5d46e3c 100644
>> --- a/include/linux/vfio.h
>> +++ b/include/linux/vfio.h
>> @@ -119,11 +119,17 @@ extern int vfio_unregister_notifier(struct device *dev,
>>  /* each type has independent events */
>>  enum vfio_notify_type {
>>  	VFIO_IOMMU_NOTIFY = (__force vfio_notify_type_t)0,
>> +	VFIO_GROUP_NOTIFY = (__force vfio_notify_type_t)1,
>>  };
>>  
>>  /* events for VFIO_IOMMU_NOTIFY */
>>  #define VFIO_IOMMU_NOTIFY_DMA_UNMAP	BIT(0)
>>  
>> +/* events for VFIO_GROUP_NOTIFY */
>> +#define VFIO_GROUP_NOTIFY_SET_KVM	BIT(0)
>> +
>> +struct kvm;
>> +extern void vfio_group_set_kvm(struct vfio_group *group, struct kvm *kvm);
>>  
>>  /*
>>   * Sub-module helpers


Thread overview: 10+ messages
2016-11-18 11:01 [v6 0/3] plumb kvm/vfio to notify kvm:group attach/detach Jike Song
2016-11-18 11:01 ` [v6 1/3] vfio: vfio_register_notifier: classify iommu notifier Jike Song
2016-11-18 17:55   ` Alex Williamson
2016-11-19  3:32     ` Jike Song
2016-11-18 11:01 ` [v6 2/3] vfio: support notifier chain in vfio_group Jike Song
2016-11-18 17:55   ` Alex Williamson
2016-11-19  4:14     ` Jike Song [this message]
2016-11-21 16:56       ` Alex Williamson
2016-11-22  6:21         ` Jike Song
2016-11-18 11:01 ` [v6 3/3] kvm: set/clear kvm to/from vfio_group when group add/delete Jike Song
