public inbox for kvm@vger.kernel.org
* [PATCH] KVM: Synchronize accesses to the kvm->devices list
@ 2016-08-03 10:15 Christoffer Dall
  2016-08-03 10:39 ` [PATCH v2] " Christoffer Dall
  0 siblings, 1 reply; 6+ messages in thread
From: Christoffer Dall @ 2016-08-03 10:15 UTC (permalink / raw)
  To: kvm
  Cc: Andre Przywara, Paolo Bonzini, Radim Krčmář,
	Alexander Graf, Christoffer Dall

The list of KVM devices is not currently protected with any lock.  As
far as I can see nothing prevents multiple threads from creating devices
on the KVM fd simultaneously, potentially leading to corruption.
Further, we already have VFIO looping over this list without any form of
synchronization, and we are about to add similar functionality for the
ITS on arm64.

Note that the traversal in kvm_destroy_devices does not appear to be a
problem, because it only runs once the last reference to the KVM struct
is dropped, so nothing else can access the list concurrently.

Finally, I change the list_add to a list_add_rcu and change the vfio
code to use list_for_each_entry_rcu to safely iterate the list.  This
could also be done holding the devices lock, but we don't know if all
callers can hold a spinlock while iterating the list, so RCU seems to
be an elegant solution.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 include/linux/kvm_host.h |  1 +
 virt/kvm/kvm_main.c      | 14 +++++++++++++-
 virt/kvm/vfio.c          |  2 +-
 3 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index aafd702..dd20af9 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -421,6 +421,7 @@ struct kvm {
 	long mmu_notifier_count;
 #endif
 	long tlbs_dirty;
+	spinlock_t devices_lock; /* protects the devices list */
 	struct list_head devices;
 	struct dentry *debugfs_dentry;
 	struct kvm_stat_data **debugfs_stat_data;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 61b31a5..7cfee41 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -616,6 +616,7 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	mutex_init(&kvm->irq_lock);
 	mutex_init(&kvm->slots_lock);
 	atomic_set(&kvm->users_count, 1);
+	spin_lock_init(&kvm->devices_lock);
 	INIT_LIST_HEAD(&kvm->devices);
 
 	r = kvm_arch_init_vm(kvm, type);
@@ -694,6 +695,11 @@ static void kvm_destroy_devices(struct kvm *kvm)
 {
 	struct kvm_device *dev, *tmp;
 
+	/*
+	 * We do not need to take the devices_lock here, because nobody else
+	 * has a reference to the struct kvm at this point and therefore
+	 * cannot access the devices list anyhow.
+	 */
 	list_for_each_entry_safe(dev, tmp, &kvm->devices, vm_node) {
 		list_del(&dev->vm_node);
 		dev->ops->destroy(dev);
@@ -2842,7 +2848,13 @@ static int kvm_ioctl_create_device(struct kvm *kvm,
 		return ret;
 	}
 
-	list_add(&dev->vm_node, &kvm->devices);
+	/*
+	 * Take the devices_lock to synchronize multiple adders.
+	 * Use list_add_rcu to allow lockless traversal of this list.
+	 */
+	spin_lock(&kvm->devices_lock);
+	list_add_rcu(&dev->vm_node, &kvm->devices);
+	spin_unlock(&kvm->devices_lock);
 	kvm_get_kvm(kvm);
 	cd->fd = ret;
 	return 0;
diff --git a/virt/kvm/vfio.c b/virt/kvm/vfio.c
index 1dd087d..4470fa1 100644
--- a/virt/kvm/vfio.c
+++ b/virt/kvm/vfio.c
@@ -268,7 +268,7 @@ static int kvm_vfio_create(struct kvm_device *dev, u32 type)
 	struct kvm_vfio *kv;
 
 	/* Only one VFIO "device" per VM */
-	list_for_each_entry(tmp, &dev->kvm->devices, vm_node)
+	list_for_each_entry_rcu(tmp, &dev->kvm->devices, vm_node)
 		if (tmp->ops == &kvm_vfio_ops)
 			return -EBUSY;
 
-- 
2.9.0



* [PATCH v2] KVM: Synchronize accesses to the kvm->devices list
  2016-08-03 10:15 [PATCH] KVM: Synchronize accesses to the kvm->devices list Christoffer Dall
@ 2016-08-03 10:39 ` Christoffer Dall
  2016-08-03 11:24   ` Paolo Bonzini
  0 siblings, 1 reply; 6+ messages in thread
From: Christoffer Dall @ 2016-08-03 10:39 UTC (permalink / raw)
  To: kvm
  Cc: Andre Przywara, Paolo Bonzini, Radim Krčmář,
	Alexander Graf, Christoffer Dall

The list of KVM devices is not currently protected with any lock.  As
far as I can see nothing prevents multiple threads from creating devices
on the KVM fd simultaneously, potentially leading to corruption.
Further, we already have VFIO looping over this list without any form of
synchronization, and we are about to add similar functionality for the
ITS on arm64.

Note that the traversal in kvm_destroy_devices does not appear to be a
problem, because it only runs once the last reference to the KVM struct
is dropped, so nothing else can access the list concurrently.

I use a mutex to protect the list, since the arm64 ITS code is going to
need to iterate over the devices (to find all ITSes) and register KVM IO
devices for each one, which in turn grabs another mutex and kmallocs
stuff.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
Changes since v1:
  - My RCU knowledge was too rusty and was both incorrectly implemented
    and would not address the arm64 need.  Just use a mutex instead.

 include/linux/kvm_host.h |  1 +
 virt/kvm/kvm_main.c      |  8 ++++++++
 virt/kvm/vfio.c          | 11 +++++++++--
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index aafd702..de32196 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -421,6 +421,7 @@ struct kvm {
 	long mmu_notifier_count;
 #endif
 	long tlbs_dirty;
+	struct mutex devices_lock; /* protects the devices list */
 	struct list_head devices;
 	struct dentry *debugfs_dentry;
 	struct kvm_stat_data **debugfs_stat_data;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 61b31a5..33e0579 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -616,6 +616,7 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	mutex_init(&kvm->irq_lock);
 	mutex_init(&kvm->slots_lock);
 	atomic_set(&kvm->users_count, 1);
+	mutex_init(&kvm->devices_lock);
 	INIT_LIST_HEAD(&kvm->devices);
 
 	r = kvm_arch_init_vm(kvm, type);
@@ -694,6 +695,11 @@ static void kvm_destroy_devices(struct kvm *kvm)
 {
 	struct kvm_device *dev, *tmp;
 
+	/*
+	 * We do not need to take the devices_lock here, because nobody else
+	 * has a reference to the struct kvm at this point and therefore
+	 * cannot access the devices list anyhow.
+	 */
 	list_for_each_entry_safe(dev, tmp, &kvm->devices, vm_node) {
 		list_del(&dev->vm_node);
 		dev->ops->destroy(dev);
@@ -2842,7 +2848,9 @@ static int kvm_ioctl_create_device(struct kvm *kvm,
 		return ret;
 	}
 
+	mutex_lock(&kvm->devices_lock);
 	list_add(&dev->vm_node, &kvm->devices);
+	mutex_unlock(&kvm->devices_lock);
 	kvm_get_kvm(kvm);
 	cd->fd = ret;
 	return 0;
diff --git a/virt/kvm/vfio.c b/virt/kvm/vfio.c
index 1dd087d..ce5f859 100644
--- a/virt/kvm/vfio.c
+++ b/virt/kvm/vfio.c
@@ -266,11 +266,18 @@ static int kvm_vfio_create(struct kvm_device *dev, u32 type)
 {
 	struct kvm_device *tmp;
 	struct kvm_vfio *kv;
+	int err = 0;
 
 	/* Only one VFIO "device" per VM */
+	mutex_lock(&dev->kvm->devices_lock);
 	list_for_each_entry(tmp, &dev->kvm->devices, vm_node)
-		if (tmp->ops == &kvm_vfio_ops)
-			return -EBUSY;
+		if (tmp->ops == &kvm_vfio_ops) {
+			err = -EBUSY;
+			break;
+		}
+	mutex_unlock(&dev->kvm->devices_lock);
+	if (err)
+		return err;
 
 	kv = kzalloc(sizeof(*kv), GFP_KERNEL);
 	if (!kv)
-- 
2.9.0



* Re: [PATCH v2] KVM: Synchronize accesses to the kvm->devices list
  2016-08-03 10:39 ` [PATCH v2] " Christoffer Dall
@ 2016-08-03 11:24   ` Paolo Bonzini
  2016-08-03 11:51     ` Christoffer Dall
  2016-08-08 19:40     ` Christoffer Dall
  0 siblings, 2 replies; 6+ messages in thread
From: Paolo Bonzini @ 2016-08-03 11:24 UTC (permalink / raw)
  To: Christoffer Dall, kvm
  Cc: Andre Przywara, Radim Krčmář, Alexander Graf



On 03/08/2016 12:39, Christoffer Dall wrote:
> The list of KVM devices is not currently protected with any lock.  As
> far as I can see nothing prevents multiple threads from creating devices
> on the KVM fd simultaneously, potentially leading to corruption.
> Further, we already have VFIO looping over this list without any form of
> synchronization, and we are about to add similar functionality for the
> ITS on arm64.
> 
> Note that the traversal in kvm_destroy_devices does not appear to be a
> problem because only the last reference to the KVM struct can actually
> perform this action.
> 
> I use a mutex to protect the list, since the arm64 ITS code is going to
> need to iterate over the devices (to find all ITSes) and register KVM IO
> devices for each one, which in turn grabs another mutex and kmallocs
> stuff.
> 
> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> ---
> Changes since v1:
>   - My RCU knowledge was too rusty and was both incorrectly implemented
>     and would not address the arm64 need.  Just use a mutex instead.
> 
>  include/linux/kvm_host.h |  1 +
>  virt/kvm/kvm_main.c      |  8 ++++++++
>  virt/kvm/vfio.c          | 11 +++++++++--
>  3 files changed, 18 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index aafd702..de32196 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -421,6 +421,7 @@ struct kvm {
>  	long mmu_notifier_count;
>  #endif
>  	long tlbs_dirty;
> +	struct mutex devices_lock; /* protects the devices list */
>  	struct list_head devices;
>  	struct dentry *debugfs_dentry;
>  	struct kvm_stat_data **debugfs_stat_data;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 61b31a5..33e0579 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -616,6 +616,7 @@ static struct kvm *kvm_create_vm(unsigned long type)
>  	mutex_init(&kvm->irq_lock);
>  	mutex_init(&kvm->slots_lock);
>  	atomic_set(&kvm->users_count, 1);
> +	mutex_init(&kvm->devices_lock);
>  	INIT_LIST_HEAD(&kvm->devices);
>  
>  	r = kvm_arch_init_vm(kvm, type);
> @@ -694,6 +695,11 @@ static void kvm_destroy_devices(struct kvm *kvm)
>  {
>  	struct kvm_device *dev, *tmp;
>  
> +	/*
> +	 * We do not need to take the devices_lock here, because nobody else
> +	 * has a reference to the struct kvm at this point and therefore
> +	 * cannot access the devices list anyhow.
> +	 */
>  	list_for_each_entry_safe(dev, tmp, &kvm->devices, vm_node) {
>  		list_del(&dev->vm_node);
>  		dev->ops->destroy(dev);
> @@ -2842,7 +2848,9 @@ static int kvm_ioctl_create_device(struct kvm *kvm,
>  		return ret;
>  	}
>  
> +	mutex_lock(&kvm->devices_lock);
>  	list_add(&dev->vm_node, &kvm->devices);
> +	mutex_unlock(&kvm->devices_lock);

This is not enough for the VFIO test to be robust.  I think it's easier
to move ops->create entirely under the kvm_lock.  This requires some
auditing of the create functions, but many of them (e.g.
kvm_vgic_create) already use that lock.

The only complex case is xics.  It takes kvm_lock only for part of the
initialization, and the call to xics_debugfs_init would be better moved
outside kvm_lock.  For that we can add an ops->init(dev) function that
is called outside kvm_lock and cannot fail.

For the others:

- vgic creation is already entirely under kvm_lock, only kvm_vgic_create
needs adjustment

- mpic uses kvm_set_irq_routing, which is okay because irq_lock nests
inside kvm_lock

- flic_create is trivial and would have a multiple-creation bug fixed

- likewise for kvm_vfio_create

Would you like to do this?

Paolo

>  	kvm_get_kvm(kvm);
>  	cd->fd = ret;
>  	return 0;
> diff --git a/virt/kvm/vfio.c b/virt/kvm/vfio.c
> index 1dd087d..ce5f859 100644
> --- a/virt/kvm/vfio.c
> +++ b/virt/kvm/vfio.c
> @@ -266,11 +266,18 @@ static int kvm_vfio_create(struct kvm_device *dev, u32 type)
>  {
>  	struct kvm_device *tmp;
>  	struct kvm_vfio *kv;
> +	int err = 0;
>  
>  	/* Only one VFIO "device" per VM */
> +	mutex_lock(&dev->kvm->devices_lock);
>  	list_for_each_entry(tmp, &dev->kvm->devices, vm_node)
> -		if (tmp->ops == &kvm_vfio_ops)
> -			return -EBUSY;
> +		if (tmp->ops == &kvm_vfio_ops) {
> +			err = -EBUSY;
> +			break;
> +		}
> +	mutex_unlock(&dev->kvm->devices_lock);
> +	if (err)
> +		return err;
>  
>  	kv = kzalloc(sizeof(*kv), GFP_KERNEL);
>  	if (!kv)
> 


* Re: [PATCH v2] KVM: Synchronize accesses to the kvm->devices list
  2016-08-03 11:24   ` Paolo Bonzini
@ 2016-08-03 11:51     ` Christoffer Dall
  2016-08-08 19:40     ` Christoffer Dall
  1 sibling, 0 replies; 6+ messages in thread
From: Christoffer Dall @ 2016-08-03 11:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, Andre Przywara, Radim Krčmář, Alexander Graf

On Wed, Aug 03, 2016 at 01:24:41PM +0200, Paolo Bonzini wrote:
> 
> 
> On 03/08/2016 12:39, Christoffer Dall wrote:
> > The list of KVM devices is not currently protected with any lock.  As
> > far as I can see nothing prevents multiple threads from creating devices
> > on the KVM fd simultaneously, potentially leading to corruption.
> > Further, we already have VFIO looping over this list without any form of
> > synchronization, and we are about to add similar functionality for the
> > ITS on arm64.
> > 
> > Note that the traversal in kvm_destroy_devices does not appear to be a
> > problem because only the last reference to the KVM struct can actually
> > perform this action.
> > 
> > I use a mutex to protect the list, since the arm64 ITS code is going to
> > need to iterate over the devices (to find all ITSes) and register KVM IO
> > devices for each one, which in turn grabs another mutex and kmallocs
> > stuff.
> > 
> > Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> > ---
> > Changes since v1:
> >   - My RCU knowledge was too rusty and was both incorrectly implemented
> >     and would not address the arm64 need.  Just use a mutex instead.
> > 
> >  include/linux/kvm_host.h |  1 +
> >  virt/kvm/kvm_main.c      |  8 ++++++++
> >  virt/kvm/vfio.c          | 11 +++++++++--
> >  3 files changed, 18 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index aafd702..de32196 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -421,6 +421,7 @@ struct kvm {
> >  	long mmu_notifier_count;
> >  #endif
> >  	long tlbs_dirty;
> > +	struct mutex devices_lock; /* protects the devices list */
> >  	struct list_head devices;
> >  	struct dentry *debugfs_dentry;
> >  	struct kvm_stat_data **debugfs_stat_data;
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 61b31a5..33e0579 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -616,6 +616,7 @@ static struct kvm *kvm_create_vm(unsigned long type)
> >  	mutex_init(&kvm->irq_lock);
> >  	mutex_init(&kvm->slots_lock);
> >  	atomic_set(&kvm->users_count, 1);
> > +	mutex_init(&kvm->devices_lock);
> >  	INIT_LIST_HEAD(&kvm->devices);
> >  
> >  	r = kvm_arch_init_vm(kvm, type);
> > @@ -694,6 +695,11 @@ static void kvm_destroy_devices(struct kvm *kvm)
> >  {
> >  	struct kvm_device *dev, *tmp;
> >  
> > +	/*
> > +	 * We do not need to take the devices_lock here, because nobody else
> > +	 * has a reference to the struct kvm at this point and therefore
> > +	 * cannot access the devices list anyhow.
> > +	 */
> >  	list_for_each_entry_safe(dev, tmp, &kvm->devices, vm_node) {
> >  		list_del(&dev->vm_node);
> >  		dev->ops->destroy(dev);
> > @@ -2842,7 +2848,9 @@ static int kvm_ioctl_create_device(struct kvm *kvm,
> >  		return ret;
> >  	}
> >  
> > +	mutex_lock(&kvm->devices_lock);
> >  	list_add(&dev->vm_node, &kvm->devices);
> > +	mutex_unlock(&kvm->devices_lock);
> 
> This is not enough for the VFIO test to be robust.  I think it's easier
> to move ops->create entirely under the kvm_lock.  This requires some
> auditing of the create functions, but many of them (e.g.
> kvm_vgic_create) already use that lock.

Duh, I fell into the classic trap of just addressing our own needs
instead of considering what this change actually does.  Apologies.

> 
> The only complex case is xics.  It takes kvm_lock only for part of the
> initialization, and the call to xics_debugfs_init would be better moved
> outside kvm_lock.  For that we can add an ops->init(dev) function that
> is called outside kvm_lock and cannot fail.
> 
> For the others:
> 
> - vgic creation is already entirely under kvm_lock, only kvm_vgic_create
> needs adjustment
> 
> - mpic uses kvm_set_irq_routing, which is okay because irq_lock nests
> inside kvm_lock
> 
> - flic_create is trivial and would have a multiple-creation bug fixed
> 
> - likewise for kvm_vfio_create
> 
> Would you like to do this?
> 
I'll have a look and see what I can put together.

Our ITS support (just merged) currently exposes a somewhat broken ABI to
userspace, so would it be acceptable to implement the ITS stuff with the
same broken approach (i.e. without any locks/synchronization) that the
VFIO device currently takes, merge that more or less immediately, and
fix it together with the rest afterwards?

(By afterwards, I mean that I will prioritize fixing this.)

-Christoffer


* Re: [PATCH v2] KVM: Synchronize accesses to the kvm->devices list
  2016-08-03 11:24   ` Paolo Bonzini
  2016-08-03 11:51     ` Christoffer Dall
@ 2016-08-08 19:40     ` Christoffer Dall
  2016-08-08 22:18       ` Paolo Bonzini
  1 sibling, 1 reply; 6+ messages in thread
From: Christoffer Dall @ 2016-08-08 19:40 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, Andre Przywara, Radim Krčmář, Alexander Graf

On Wed, Aug 03, 2016 at 01:24:41PM +0200, Paolo Bonzini wrote:
> 
> 
> On 03/08/2016 12:39, Christoffer Dall wrote:
> > The list of KVM devices is not currently protected with any lock.  As
> > far as I can see nothing prevents multiple threads from creating devices
> > on the KVM fd simultaneously, potentially leading to corruption.
> > Further, we already have VFIO looping over this list without any form of
> > synchronization, and we are about to add similar functionality for the
> > ITS on arm64.
> > 
> > Note that the traversal in kvm_destroy_devices does not appear to be a
> > problem because only the last reference to the KVM struct can actually
> > perform this action.
> > 
> > I use a mutex to protect the list, since the arm64 ITS code is going to
> > need to iterate over the devices (to find all ITSes) and register KVM IO
> > devices for each one, which in turn grabs another mutex and kmallocs
> > stuff.
> > 
> > Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> > ---
> > Changes since v1:
> >   - My RCU knowledge was too rusty and was both incorrectly implemented
> >     and would not address the arm64 need.  Just use a mutex instead.
> > 
> >  include/linux/kvm_host.h |  1 +
> >  virt/kvm/kvm_main.c      |  8 ++++++++
> >  virt/kvm/vfio.c          | 11 +++++++++--
> >  3 files changed, 18 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index aafd702..de32196 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -421,6 +421,7 @@ struct kvm {
> >  	long mmu_notifier_count;
> >  #endif
> >  	long tlbs_dirty;
> > +	struct mutex devices_lock; /* protects the devices list */
> >  	struct list_head devices;
> >  	struct dentry *debugfs_dentry;
> >  	struct kvm_stat_data **debugfs_stat_data;
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 61b31a5..33e0579 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -616,6 +616,7 @@ static struct kvm *kvm_create_vm(unsigned long type)
> >  	mutex_init(&kvm->irq_lock);
> >  	mutex_init(&kvm->slots_lock);
> >  	atomic_set(&kvm->users_count, 1);
> > +	mutex_init(&kvm->devices_lock);
> >  	INIT_LIST_HEAD(&kvm->devices);
> >  
> >  	r = kvm_arch_init_vm(kvm, type);
> > @@ -694,6 +695,11 @@ static void kvm_destroy_devices(struct kvm *kvm)
> >  {
> >  	struct kvm_device *dev, *tmp;
> >  
> > +	/*
> > +	 * We do not need to take the devices_lock here, because nobody else
> > +	 * has a reference to the struct kvm at this point and therefore
> > +	 * cannot access the devices list anyhow.
> > +	 */
> >  	list_for_each_entry_safe(dev, tmp, &kvm->devices, vm_node) {
> >  		list_del(&dev->vm_node);
> >  		dev->ops->destroy(dev);
> > @@ -2842,7 +2848,9 @@ static int kvm_ioctl_create_device(struct kvm *kvm,
> >  		return ret;
> >  	}
> >  
> > +	mutex_lock(&kvm->devices_lock);
> >  	list_add(&dev->vm_node, &kvm->devices);
> > +	mutex_unlock(&kvm->devices_lock);
> 
> This is not enough for the VFIO test to be robust.  I think it's easier
> to move ops->create entirely under the kvm_lock.  This requires some
> auditing of the create functions, but many of them (e.g.
> kvm_vgic_create) already use that lock.
> 
> The only complex case is xics.  It takes kvm_lock only for part of the
> initialization, and the call to xics_debugfs_init would be better moved
> outside kvm_lock.  For that we can add an ops->init(dev) function that
> is called outside kvm_lock and cannot fail.
> 
Why do we need to move xics_debugfs_init outside of kvm->lock?  Is it
merely because it could sleep forever and we don't want that to block
everything, or does it loop around and deadlock somehow in a way that I
cannot easily see?

Thanks,
-Christoffer


* Re: [PATCH v2] KVM: Synchronize accesses to the kvm->devices list
  2016-08-08 19:40     ` Christoffer Dall
@ 2016-08-08 22:18       ` Paolo Bonzini
  0 siblings, 0 replies; 6+ messages in thread
From: Paolo Bonzini @ 2016-08-08 22:18 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: kvm, Andre Przywara, Radim Krčmář, Alexander Graf



On 08/08/2016 21:40, Christoffer Dall wrote:
> Why do we need to move xics_debugfs_init outside of kvm->lock()? Merely
> because it could sleep forever and we don't want that to block
> everything or does it loop around and deadlock somehow that I cannot
> easily see?

Just to keep the lock hierarchy as simple (and flat) as possible.  It's
not hard to do.

Paolo


end of thread, other threads:[~2016-08-08 22:18 UTC | newest]

Thread overview: 6+ messages
2016-08-03 10:15 [PATCH] KVM: Synchronize accesses to the kvm->devices list Christoffer Dall
2016-08-03 10:39 ` [PATCH v2] " Christoffer Dall
2016-08-03 11:24   ` Paolo Bonzini
2016-08-03 11:51     ` Christoffer Dall
2016-08-08 19:40     ` Christoffer Dall
2016-08-08 22:18       ` Paolo Bonzini
