* [PATCH 1/5] Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock
2009-07-13 9:12 [PATCH 0/5][RFC] more fine grained locking for IRQ injection Gleb Natapov
@ 2009-07-13 9:12 ` Gleb Natapov
2009-07-13 14:29 ` Gregory Haskins
2009-07-13 9:12 ` [PATCH 2/5] Move irq routing to its own locking Gleb Natapov
` (4 subsequent siblings)
5 siblings, 1 reply; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 9:12 UTC (permalink / raw)
To: kvm; +Cc: avi, mtosatti
It is already protected by kvm->lock on device assignment path. Just
take the same lock in the PIT code.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
---
arch/x86/kvm/i8254.c | 2 ++
virt/kvm/irq_comm.c | 8 ++++----
2 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
index 05e00a8..e1b9016 100644
--- a/arch/x86/kvm/i8254.c
+++ b/arch/x86/kvm/i8254.c
@@ -596,7 +596,9 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags)
if (!pit)
return NULL;
+ mutex_lock(&kvm->lock);
pit->irq_source_id = kvm_request_irq_source_id(kvm);
+ mutex_unlock(&kvm->lock);
if (pit->irq_source_id < 0) {
kfree(pit);
return NULL;
diff --git a/virt/kvm/irq_comm.c b/virt/kvm/irq_comm.c
index 6c57e46..ce8fcd3 100644
--- a/virt/kvm/irq_comm.c
+++ b/virt/kvm/irq_comm.c
@@ -210,7 +210,8 @@ int kvm_request_irq_source_id(struct kvm *kvm)
unsigned long *bitmap = &kvm->arch.irq_sources_bitmap;
int irq_source_id;
- mutex_lock(&kvm->irq_lock);
+ WARN_ON(!mutex_is_locked(&kvm->lock));
+
irq_source_id = find_first_zero_bit(bitmap,
sizeof(kvm->arch.irq_sources_bitmap));
@@ -221,7 +222,6 @@ int kvm_request_irq_source_id(struct kvm *kvm)
ASSERT(irq_source_id != KVM_USERSPACE_IRQ_SOURCE_ID);
set_bit(irq_source_id, bitmap);
- mutex_unlock(&kvm->irq_lock);
return irq_source_id;
}
@@ -230,9 +230,10 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id)
{
int i;
+ /* during vm destruction this function is called without locking */
+ WARN_ON(!mutex_is_locked(&kvm->lock) && atomic_read(&kvm->users_count));
ASSERT(irq_source_id != KVM_USERSPACE_IRQ_SOURCE_ID);
- mutex_lock(&kvm->irq_lock);
if (irq_source_id < 0 ||
irq_source_id >= sizeof(kvm->arch.irq_sources_bitmap)) {
printk(KERN_ERR "kvm: IRQ source ID out of range!\n");
@@ -241,7 +242,6 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id)
for (i = 0; i < KVM_IOAPIC_NUM_PINS; i++)
clear_bit(irq_source_id, &kvm->arch.irq_states[i]);
clear_bit(irq_source_id, &kvm->arch.irq_sources_bitmap);
- mutex_unlock(&kvm->irq_lock);
}
void kvm_register_irq_mask_notifier(struct kvm *kvm, int irq,
--
1.6.2.1
* Re: [PATCH 1/5] Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock
2009-07-13 9:12 ` [PATCH 1/5] Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock Gleb Natapov
@ 2009-07-13 14:29 ` Gregory Haskins
2009-07-13 14:39 ` Gleb Natapov
0 siblings, 1 reply; 34+ messages in thread
From: Gregory Haskins @ 2009-07-13 14:29 UTC (permalink / raw)
To: Gleb Natapov; +Cc: kvm, avi, mtosatti
Gleb Natapov wrote:
> It is already protected by kvm->lock on device assignment path. Just
> take the same lock in the PIT code.
>
> Signed-off-by: Gleb Natapov <gleb@redhat.com>
> ---
> arch/x86/kvm/i8254.c | 2 ++
> virt/kvm/irq_comm.c | 8 ++++----
> 2 files changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
> index 05e00a8..e1b9016 100644
> --- a/arch/x86/kvm/i8254.c
> +++ b/arch/x86/kvm/i8254.c
> @@ -596,7 +596,9 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags)
> if (!pit)
> return NULL;
>
> + mutex_lock(&kvm->lock);
> pit->irq_source_id = kvm_request_irq_source_id(kvm);
> + mutex_unlock(&kvm->lock);
> if (pit->irq_source_id < 0) {
> kfree(pit);
> return NULL;
> diff --git a/virt/kvm/irq_comm.c b/virt/kvm/irq_comm.c
> index 6c57e46..ce8fcd3 100644
> --- a/virt/kvm/irq_comm.c
> +++ b/virt/kvm/irq_comm.c
> @@ -210,7 +210,8 @@ int kvm_request_irq_source_id(struct kvm *kvm)
> unsigned long *bitmap = &kvm->arch.irq_sources_bitmap;
> int irq_source_id;
>
> - mutex_lock(&kvm->irq_lock);
> + WARN_ON(!mutex_is_locked(&kvm->lock));
>
Shouldn't this be fatal? (e.g. BUG_ON). I know the usage between
BUG/WARN is controversial, but it seems to me that something is
completely broken if you expect it to be locked and it's not. Might as
well fail the system, IMO.
Regards,
-Greg
> +
> irq_source_id = find_first_zero_bit(bitmap,
> sizeof(kvm->arch.irq_sources_bitmap));
>
> @@ -221,7 +222,6 @@ int kvm_request_irq_source_id(struct kvm *kvm)
>
> ASSERT(irq_source_id != KVM_USERSPACE_IRQ_SOURCE_ID);
> set_bit(irq_source_id, bitmap);
> - mutex_unlock(&kvm->irq_lock);
>
> return irq_source_id;
> }
> @@ -230,9 +230,10 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id)
> {
> int i;
>
> + /* during vm destruction this function is called without locking */
> + WARN_ON(!mutex_is_locked(&kvm->lock) && atomic_read(&kvm->users_count));
> ASSERT(irq_source_id != KVM_USERSPACE_IRQ_SOURCE_ID);
>
> - mutex_lock(&kvm->irq_lock);
> if (irq_source_id < 0 ||
> irq_source_id >= sizeof(kvm->arch.irq_sources_bitmap)) {
> printk(KERN_ERR "kvm: IRQ source ID out of range!\n");
> @@ -241,7 +242,6 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id)
> for (i = 0; i < KVM_IOAPIC_NUM_PINS; i++)
> clear_bit(irq_source_id, &kvm->arch.irq_states[i]);
> clear_bit(irq_source_id, &kvm->arch.irq_sources_bitmap);
> - mutex_unlock(&kvm->irq_lock);
> }
>
> void kvm_register_irq_mask_notifier(struct kvm *kvm, int irq,
>
* Re: [PATCH 1/5] Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock
2009-07-13 14:29 ` Gregory Haskins
@ 2009-07-13 14:39 ` Gleb Natapov
2009-07-13 14:55 ` Michael S. Tsirkin
2009-07-13 15:03 ` Gregory Haskins
0 siblings, 2 replies; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 14:39 UTC (permalink / raw)
To: Gregory Haskins; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 10:29:02AM -0400, Gregory Haskins wrote:
> Gleb Natapov wrote:
> > It is already protected by kvm->lock on device assignment path. Just
> > take the same lock in the PIT code.
> >
> > Signed-off-by: Gleb Natapov <gleb@redhat.com>
> > ---
> > arch/x86/kvm/i8254.c | 2 ++
> > virt/kvm/irq_comm.c | 8 ++++----
> > 2 files changed, 6 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
> > index 05e00a8..e1b9016 100644
> > --- a/arch/x86/kvm/i8254.c
> > +++ b/arch/x86/kvm/i8254.c
> > @@ -596,7 +596,9 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags)
> > if (!pit)
> > return NULL;
> >
> > + mutex_lock(&kvm->lock);
> > pit->irq_source_id = kvm_request_irq_source_id(kvm);
> > + mutex_unlock(&kvm->lock);
> > if (pit->irq_source_id < 0) {
> > kfree(pit);
> > return NULL;
> > diff --git a/virt/kvm/irq_comm.c b/virt/kvm/irq_comm.c
> > index 6c57e46..ce8fcd3 100644
> > --- a/virt/kvm/irq_comm.c
> > +++ b/virt/kvm/irq_comm.c
> > @@ -210,7 +210,8 @@ int kvm_request_irq_source_id(struct kvm *kvm)
> > unsigned long *bitmap = &kvm->arch.irq_sources_bitmap;
> > int irq_source_id;
> >
> > - mutex_lock(&kvm->irq_lock);
> > + WARN_ON(!mutex_is_locked(&kvm->lock));
> >
>
> Shouldn't this be fatal? (e.g. BUG_ON). I know the usage between
> BUG/WARN is controversial, but it seems to me that something is
> completely broken if you expect it to be locked and its not. Might as
> well fail the system, IMO.
>
Well, I don't really care, but we have WARN_ON() in the code currently.
Besides, the chances are good that even without locking around this
function nothing will break, so why kill the host kernel?
> Regards,
> -Greg
>
> > +
> > irq_source_id = find_first_zero_bit(bitmap,
> > sizeof(kvm->arch.irq_sources_bitmap));
> >
> > @@ -221,7 +222,6 @@ int kvm_request_irq_source_id(struct kvm *kvm)
> >
> > ASSERT(irq_source_id != KVM_USERSPACE_IRQ_SOURCE_ID);
> > set_bit(irq_source_id, bitmap);
> > - mutex_unlock(&kvm->irq_lock);
> >
> > return irq_source_id;
> > }
> > @@ -230,9 +230,10 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id)
> > {
> > int i;
> >
> > + /* during vm destruction this function is called without locking */
> > + WARN_ON(!mutex_is_locked(&kvm->lock) && atomic_read(&kvm->users_count));
> > ASSERT(irq_source_id != KVM_USERSPACE_IRQ_SOURCE_ID);
> >
> > - mutex_lock(&kvm->irq_lock);
> > if (irq_source_id < 0 ||
> > irq_source_id >= sizeof(kvm->arch.irq_sources_bitmap)) {
> > printk(KERN_ERR "kvm: IRQ source ID out of range!\n");
> > @@ -241,7 +242,6 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id)
> > for (i = 0; i < KVM_IOAPIC_NUM_PINS; i++)
> > clear_bit(irq_source_id, &kvm->arch.irq_states[i]);
> > clear_bit(irq_source_id, &kvm->arch.irq_sources_bitmap);
> > - mutex_unlock(&kvm->irq_lock);
> > }
> >
> > void kvm_register_irq_mask_notifier(struct kvm *kvm, int irq,
> >
>
>
--
Gleb.
* Re: [PATCH 1/5] Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock
2009-07-13 14:39 ` Gleb Natapov
@ 2009-07-13 14:55 ` Michael S. Tsirkin
2009-07-13 15:01 ` Gleb Natapov
2009-07-13 15:03 ` Gregory Haskins
1 sibling, 1 reply; 34+ messages in thread
From: Michael S. Tsirkin @ 2009-07-13 14:55 UTC (permalink / raw)
To: Gleb Natapov; +Cc: Gregory Haskins, kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 05:39:41PM +0300, Gleb Natapov wrote:
> On Mon, Jul 13, 2009 at 10:29:02AM -0400, Gregory Haskins wrote:
> > Gleb Natapov wrote:
> > > It is already protected by kvm->lock on device assignment path. Just
> > > take the same lock in the PIT code.
> > >
> > > Signed-off-by: Gleb Natapov <gleb@redhat.com>
> > > ---
> > > arch/x86/kvm/i8254.c | 2 ++
> > > virt/kvm/irq_comm.c | 8 ++++----
> > > 2 files changed, 6 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
> > > index 05e00a8..e1b9016 100644
> > > --- a/arch/x86/kvm/i8254.c
> > > +++ b/arch/x86/kvm/i8254.c
> > > @@ -596,7 +596,9 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags)
> > > if (!pit)
> > > return NULL;
> > >
> > > + mutex_lock(&kvm->lock);
> > > pit->irq_source_id = kvm_request_irq_source_id(kvm);
> > > + mutex_unlock(&kvm->lock);
> > > if (pit->irq_source_id < 0) {
> > > kfree(pit);
> > > return NULL;
> > > diff --git a/virt/kvm/irq_comm.c b/virt/kvm/irq_comm.c
> > > index 6c57e46..ce8fcd3 100644
> > > --- a/virt/kvm/irq_comm.c
> > > +++ b/virt/kvm/irq_comm.c
> > > @@ -210,7 +210,8 @@ int kvm_request_irq_source_id(struct kvm *kvm)
> > > unsigned long *bitmap = &kvm->arch.irq_sources_bitmap;
> > > int irq_source_id;
> > >
> > > - mutex_lock(&kvm->irq_lock);
> > > + WARN_ON(!mutex_is_locked(&kvm->lock));
> > >
> >
> > Shouldn't this be fatal? (e.g. BUG_ON). I know the usage between
> > BUG/WARN is controversial, but it seems to me that something is
> > completely broken if you expect it to be locked and its not. Might as
> > well fail the system, IMO.
> >
> Well I don't really care but we have WARN_ON() in the code currently.
> Besides the chances are good that even without locking around this
> function nothing will break, so why kill host kernel?
Yea. Might as well replace with a comment saying the function expects
the mutex to be locked.
> > Regards,
> > -Greg
> >
> > > +
> > > irq_source_id = find_first_zero_bit(bitmap,
> > > sizeof(kvm->arch.irq_sources_bitmap));
> > >
> > > @@ -221,7 +222,6 @@ int kvm_request_irq_source_id(struct kvm *kvm)
> > >
> > > ASSERT(irq_source_id != KVM_USERSPACE_IRQ_SOURCE_ID);
> > > set_bit(irq_source_id, bitmap);
> > > - mutex_unlock(&kvm->irq_lock);
> > >
> > > return irq_source_id;
> > > }
> > > @@ -230,9 +230,10 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id)
> > > {
> > > int i;
> > >
> > > + /* during vm destruction this function is called without locking */
> > > + WARN_ON(!mutex_is_locked(&kvm->lock) && atomic_read(&kvm->users_count));
> > > ASSERT(irq_source_id != KVM_USERSPACE_IRQ_SOURCE_ID);
> > >
> > > - mutex_lock(&kvm->irq_lock);
> > > if (irq_source_id < 0 ||
> > > irq_source_id >= sizeof(kvm->arch.irq_sources_bitmap)) {
> > > printk(KERN_ERR "kvm: IRQ source ID out of range!\n");
> > > @@ -241,7 +242,6 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id)
> > > for (i = 0; i < KVM_IOAPIC_NUM_PINS; i++)
> > > clear_bit(irq_source_id, &kvm->arch.irq_states[i]);
> > > clear_bit(irq_source_id, &kvm->arch.irq_sources_bitmap);
> > > - mutex_unlock(&kvm->irq_lock);
> > > }
> > >
> > > void kvm_register_irq_mask_notifier(struct kvm *kvm, int irq,
> > >
> >
> >
>
>
>
> --
> Gleb.
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: [PATCH 1/5] Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock
2009-07-13 14:55 ` Michael S. Tsirkin
@ 2009-07-13 15:01 ` Gleb Natapov
0 siblings, 0 replies; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 15:01 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: Gregory Haskins, kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 05:55:09PM +0300, Michael S. Tsirkin wrote:
> On Mon, Jul 13, 2009 at 05:39:41PM +0300, Gleb Natapov wrote:
> > On Mon, Jul 13, 2009 at 10:29:02AM -0400, Gregory Haskins wrote:
> > > Gleb Natapov wrote:
> > > > It is already protected by kvm->lock on device assignment path. Just
> > > > take the same lock in the PIT code.
> > > >
> > > > Signed-off-by: Gleb Natapov <gleb@redhat.com>
> > > > ---
> > > > arch/x86/kvm/i8254.c | 2 ++
> > > > virt/kvm/irq_comm.c | 8 ++++----
> > > > 2 files changed, 6 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
> > > > index 05e00a8..e1b9016 100644
> > > > --- a/arch/x86/kvm/i8254.c
> > > > +++ b/arch/x86/kvm/i8254.c
> > > > @@ -596,7 +596,9 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags)
> > > > if (!pit)
> > > > return NULL;
> > > >
> > > > + mutex_lock(&kvm->lock);
> > > > pit->irq_source_id = kvm_request_irq_source_id(kvm);
> > > > + mutex_unlock(&kvm->lock);
> > > > if (pit->irq_source_id < 0) {
> > > > kfree(pit);
> > > > return NULL;
> > > > diff --git a/virt/kvm/irq_comm.c b/virt/kvm/irq_comm.c
> > > > index 6c57e46..ce8fcd3 100644
> > > > --- a/virt/kvm/irq_comm.c
> > > > +++ b/virt/kvm/irq_comm.c
> > > > @@ -210,7 +210,8 @@ int kvm_request_irq_source_id(struct kvm *kvm)
> > > > unsigned long *bitmap = &kvm->arch.irq_sources_bitmap;
> > > > int irq_source_id;
> > > >
> > > > - mutex_lock(&kvm->irq_lock);
> > > > + WARN_ON(!mutex_is_locked(&kvm->lock));
> > > >
> > >
> > > Shouldn't this be fatal? (e.g. BUG_ON). I know the usage between
> > > BUG/WARN is controversial, but it seems to me that something is
> > > completely broken if you expect it to be locked and its not. Might as
> > > well fail the system, IMO.
> > >
> > Well I don't really care but we have WARN_ON() in the code currently.
> > Besides the chances are good that even without locking around this
> > function nothing will break, so why kill host kernel?
>
> Yea. Might as well replace with a comment saying the function expects
> the mutex to be locked.
>
No. Then we will not get bug reports when there are problems. Otherwise you
could say the same about each and every WARN_ON() in the code.
> > > Regards,
> > > -Greg
> > >
> > > > +
> > > > irq_source_id = find_first_zero_bit(bitmap,
> > > > sizeof(kvm->arch.irq_sources_bitmap));
> > > >
> > > > @@ -221,7 +222,6 @@ int kvm_request_irq_source_id(struct kvm *kvm)
> > > >
> > > > ASSERT(irq_source_id != KVM_USERSPACE_IRQ_SOURCE_ID);
> > > > set_bit(irq_source_id, bitmap);
> > > > - mutex_unlock(&kvm->irq_lock);
> > > >
> > > > return irq_source_id;
> > > > }
> > > > @@ -230,9 +230,10 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id)
> > > > {
> > > > int i;
> > > >
> > > > + /* during vm destruction this function is called without locking */
> > > > + WARN_ON(!mutex_is_locked(&kvm->lock) && atomic_read(&kvm->users_count));
> > > > ASSERT(irq_source_id != KVM_USERSPACE_IRQ_SOURCE_ID);
> > > >
> > > > - mutex_lock(&kvm->irq_lock);
> > > > if (irq_source_id < 0 ||
> > > > irq_source_id >= sizeof(kvm->arch.irq_sources_bitmap)) {
> > > > printk(KERN_ERR "kvm: IRQ source ID out of range!\n");
> > > > @@ -241,7 +242,6 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id)
> > > > for (i = 0; i < KVM_IOAPIC_NUM_PINS; i++)
> > > > clear_bit(irq_source_id, &kvm->arch.irq_states[i]);
> > > > clear_bit(irq_source_id, &kvm->arch.irq_sources_bitmap);
> > > > - mutex_unlock(&kvm->irq_lock);
> > > > }
> > > >
> > > > void kvm_register_irq_mask_notifier(struct kvm *kvm, int irq,
> > > >
> > >
> > >
> >
> >
> >
> > --
> > Gleb.
--
Gleb.
* Re: [PATCH 1/5] Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock
2009-07-13 14:39 ` Gleb Natapov
2009-07-13 14:55 ` Michael S. Tsirkin
@ 2009-07-13 15:03 ` Gregory Haskins
2009-07-13 15:11 ` Gregory Haskins
2009-07-13 15:19 ` Gleb Natapov
1 sibling, 2 replies; 34+ messages in thread
From: Gregory Haskins @ 2009-07-13 15:03 UTC (permalink / raw)
To: Gleb Natapov; +Cc: kvm, avi, mtosatti
Gleb Natapov wrote:
> On Mon, Jul 13, 2009 at 10:29:02AM -0400, Gregory Haskins wrote:
>
>> Gleb Natapov wrote:
>>
>>> diff --git a/virt/kvm/irq_comm.c b/virt/kvm/irq_comm.c
>>> index 6c57e46..ce8fcd3 100644
>>> --- a/virt/kvm/irq_comm.c
>>> +++ b/virt/kvm/irq_comm.c
>>> @@ -210,7 +210,8 @@ int kvm_request_irq_source_id(struct kvm *kvm)
>>> unsigned long *bitmap = &kvm->arch.irq_sources_bitmap;
>>> int irq_source_id;
>>>
>>> - mutex_lock(&kvm->irq_lock);
>>> + WARN_ON(!mutex_is_locked(&kvm->lock));
>>>
>>>
>> Shouldn't this be fatal? (e.g. BUG_ON). I know the usage between
>> BUG/WARN is controversial, but it seems to me that something is
>> completely broken if you expect it to be locked and its not. Might as
>> well fail the system, IMO.
>>
>>
> Well I don't really care but we have WARN_ON() in the code currently.
>
Well, that is perhaps unfortunate, but not relevant. I am not reviewing
those patches ;)
> Besides the chances are good that even without locking around this
> function nothing will break, so why kill host kernel?
>
The question to ask is: Is it legal to continue to run if the mutex is
found unlocked? If not, the offending caller should be found/fixed as
early as possible IMO, and an oops should be sufficient to do so. I
think WARN_ON tends to get overused/abused, so let's not perpetuate it
simply because of precedence.
Kind Regards,
-Greg
* Re: [PATCH 1/5] Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock
2009-07-13 15:03 ` Gregory Haskins
@ 2009-07-13 15:11 ` Gregory Haskins
2009-07-13 15:19 ` Gleb Natapov
1 sibling, 0 replies; 34+ messages in thread
From: Gregory Haskins @ 2009-07-13 15:11 UTC (permalink / raw)
To: Gleb Natapov; +Cc: kvm, avi, mtosatti
[-- Attachment #1: Type: text/plain, Size: 1704 bytes --]
Gregory Haskins wrote:
> Gleb Natapov wrote:
>
>> On Mon, Jul 13, 2009 at 10:29:02AM -0400, Gregory Haskins wrote:
>>
>>
>>> Gleb Natapov wrote:
>>>
>>>
>>>> diff --git a/virt/kvm/irq_comm.c b/virt/kvm/irq_comm.c
>>>> index 6c57e46..ce8fcd3 100644
>>>> --- a/virt/kvm/irq_comm.c
>>>> +++ b/virt/kvm/irq_comm.c
>>>> @@ -210,7 +210,8 @@ int kvm_request_irq_source_id(struct kvm *kvm)
>>>> unsigned long *bitmap = &kvm->arch.irq_sources_bitmap;
>>>> int irq_source_id;
>>>>
>>>> - mutex_lock(&kvm->irq_lock);
>>>> + WARN_ON(!mutex_is_locked(&kvm->lock));
>>>>
>>>>
>>>>
>>> Shouldn't this be fatal? (e.g. BUG_ON). I know the usage between
>>> BUG/WARN is controversial, but it seems to me that something is
>>> completely broken if you expect it to be locked and its not. Might as
>>> well fail the system, IMO.
>>>
>>>
>>>
>> Well I don't really care but we have WARN_ON() in the code currently.
>>
>>
>
> Well, that is perhaps unfortunate, but not relevant. I am not reviewing
> those patches ;)
>
>
>> Besides the chances are good that even without locking around this
>> function nothing will break, so why kill host kernel?
>>
>>
>
> The question to ask is: Is it legal to continue to run if the mutex is
> found unlocked? If not, the offending caller should be found/fixed as
> early as possible IMO, and an oops should be sufficient to do so. I
> think WARN_ON tends to gets overused/abused, so lets not perpetuate it
> simply because of precedence.
>
Err..precedent, I mean. Heh.
/me needs more coffee.
-Greg
> Kind Regards,
> -Greg
>
>
>
* Re: [PATCH 1/5] Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock
2009-07-13 15:03 ` Gregory Haskins
2009-07-13 15:11 ` Gregory Haskins
@ 2009-07-13 15:19 ` Gleb Natapov
1 sibling, 0 replies; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 15:19 UTC (permalink / raw)
To: Gregory Haskins; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 11:03:56AM -0400, Gregory Haskins wrote:
> Gleb Natapov wrote:
> > On Mon, Jul 13, 2009 at 10:29:02AM -0400, Gregory Haskins wrote:
> >
> >> Gleb Natapov wrote:
> >>
> >>> diff --git a/virt/kvm/irq_comm.c b/virt/kvm/irq_comm.c
> >>> index 6c57e46..ce8fcd3 100644
> >>> --- a/virt/kvm/irq_comm.c
> >>> +++ b/virt/kvm/irq_comm.c
> >>> @@ -210,7 +210,8 @@ int kvm_request_irq_source_id(struct kvm *kvm)
> >>> unsigned long *bitmap = &kvm->arch.irq_sources_bitmap;
> >>> int irq_source_id;
> >>>
> >>> - mutex_lock(&kvm->irq_lock);
> >>> + WARN_ON(!mutex_is_locked(&kvm->lock));
> >>>
> >>>
> >> Shouldn't this be fatal? (e.g. BUG_ON). I know the usage between
> >> BUG/WARN is controversial, but it seems to me that something is
> >> completely broken if you expect it to be locked and its not. Might as
> >> well fail the system, IMO.
> >>
> >>
> > Well I don't really care but we have WARN_ON() in the code currently.
> >
>
> Well, that is perhaps unfortunate, but not relevant. I am not reviewing
> those patches ;)
>
> > Besides the chances are good that even without locking around this
> > function nothing will break, so why kill host kernel?
> >
>
> The question to ask is: Is it legal to continue to run if the mutex is
> found unlocked? If not, the offending caller should be found/fixed as
> early as possible IMO, and an oops should be sufficient to do so. I
> think WARN_ON tends to gets overused/abused, so lets not perpetuate it
> simply because of precedence.
>
I will have to end this particular thread about WARN_ON by stating
that Avi told me to put it there. I'll let him decide.
--
Gleb.
* [PATCH 2/5] Move irq routing to its own locking.
2009-07-13 9:12 [PATCH 0/5][RFC] more fine grained locking for IRQ injection Gleb Natapov
2009-07-13 9:12 ` [PATCH 1/5] Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock Gleb Natapov
@ 2009-07-13 9:12 ` Gleb Natapov
2009-07-13 9:12 ` [PATCH 3/5] Move irq notifiers lists " Gleb Natapov
` (3 subsequent siblings)
5 siblings, 0 replies; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 9:12 UTC (permalink / raw)
To: kvm; +Cc: avi, mtosatti
Signed-off-by: Gleb Natapov <gleb@redhat.com>
---
include/linux/kvm_host.h | 1 +
virt/kvm/irq_comm.c | 5 ++---
virt/kvm/kvm_main.c | 1 +
3 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f795f25..3f2a4fc 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -162,6 +162,7 @@ struct kvm {
struct mutex irq_lock;
#ifdef CONFIG_HAVE_KVM_IRQCHIP
struct kvm_kernel_irq_routing_entry *irq_routing;
+ spinlock_t irq_routing_lock;
struct hlist_head mask_notifier_list;
struct hlist_head irq_ack_notifier_list;
#endif
diff --git a/virt/kvm/irq_comm.c b/virt/kvm/irq_comm.c
index ce8fcd3..3dbba41 100644
--- a/virt/kvm/irq_comm.c
+++ b/virt/kvm/irq_comm.c
@@ -350,11 +350,10 @@ int kvm_set_irq_routing(struct kvm *kvm,
++ue;
}
- mutex_lock(&kvm->irq_lock);
+ spin_lock(&kvm->irq_routing_lock);
old = kvm->irq_routing;
rcu_assign_pointer(kvm->irq_routing, new);
- mutex_unlock(&kvm->irq_lock);
-
+ spin_unlock(&kvm->irq_routing_lock);
synchronize_rcu();
r = 0;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 53d9b70..4e31634 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -945,6 +945,7 @@ static struct kvm *kvm_create_vm(void)
if (IS_ERR(kvm))
goto out;
#ifdef CONFIG_HAVE_KVM_IRQCHIP
+ spin_lock_init(&kvm->irq_routing_lock);
INIT_HLIST_HEAD(&kvm->mask_notifier_list);
INIT_HLIST_HEAD(&kvm->irq_ack_notifier_list);
#endif
--
1.6.2.1
* [PATCH 3/5] Move irq notifiers lists to its own locking.
2009-07-13 9:12 [PATCH 0/5][RFC] more fine grained locking for IRQ injection Gleb Natapov
2009-07-13 9:12 ` [PATCH 1/5] Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock Gleb Natapov
2009-07-13 9:12 ` [PATCH 2/5] Move irq routing to its own locking Gleb Natapov
@ 2009-07-13 9:12 ` Gleb Natapov
2009-07-13 11:45 ` Michael S. Tsirkin
2009-07-13 9:12 ` [PATCH 4/5] Move IO APIC to its own lock Gleb Natapov
` (2 subsequent siblings)
5 siblings, 1 reply; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 9:12 UTC (permalink / raw)
To: kvm; +Cc: avi, mtosatti
Signed-off-by: Gleb Natapov <gleb@redhat.com>
---
include/linux/kvm_host.h | 1 +
virt/kvm/irq_comm.c | 16 ++++++++--------
virt/kvm/kvm_main.c | 1 +
3 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 3f2a4fc..12f4ee2 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -165,6 +165,7 @@ struct kvm {
spinlock_t irq_routing_lock;
struct hlist_head mask_notifier_list;
struct hlist_head irq_ack_notifier_list;
+ spinlock_t irq_notifier_list_lock;
#endif
#ifdef KVM_ARCH_WANT_MMU_NOTIFIER
diff --git a/virt/kvm/irq_comm.c b/virt/kvm/irq_comm.c
index 3dbba41..59c1cde 100644
--- a/virt/kvm/irq_comm.c
+++ b/virt/kvm/irq_comm.c
@@ -191,17 +191,17 @@ void kvm_notify_acked_irq(struct kvm *kvm, unsigned irqchip, unsigned pin)
void kvm_register_irq_ack_notifier(struct kvm *kvm,
struct kvm_irq_ack_notifier *kian)
{
- mutex_lock(&kvm->irq_lock);
+ spin_lock(&kvm->irq_notifier_list_lock);
hlist_add_head_rcu(&kian->link, &kvm->irq_ack_notifier_list);
- mutex_unlock(&kvm->irq_lock);
+ spin_unlock(&kvm->irq_notifier_list_lock);
}
void kvm_unregister_irq_ack_notifier(struct kvm *kvm,
struct kvm_irq_ack_notifier *kian)
{
- mutex_lock(&kvm->irq_lock);
+ spin_lock(&kvm->irq_notifier_list_lock);
hlist_del_init_rcu(&kian->link);
- mutex_unlock(&kvm->irq_lock);
+ spin_unlock(&kvm->irq_notifier_list_lock);
synchronize_rcu();
}
@@ -247,18 +247,18 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id)
void kvm_register_irq_mask_notifier(struct kvm *kvm, int irq,
struct kvm_irq_mask_notifier *kimn)
{
- mutex_lock(&kvm->irq_lock);
+ spin_lock(&kvm->irq_notifier_list_lock);
kimn->irq = irq;
hlist_add_head_rcu(&kimn->link, &kvm->mask_notifier_list);
- mutex_unlock(&kvm->irq_lock);
+ spin_unlock(&kvm->irq_notifier_list_lock);
}
void kvm_unregister_irq_mask_notifier(struct kvm *kvm, int irq,
struct kvm_irq_mask_notifier *kimn)
{
- mutex_lock(&kvm->irq_lock);
+ spin_lock(&kvm->irq_notifier_list_lock);
hlist_del_rcu(&kimn->link);
- mutex_unlock(&kvm->irq_lock);
+ spin_unlock(&kvm->irq_notifier_list_lock);
synchronize_rcu();
}
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4e31634..018bde8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -948,6 +948,7 @@ static struct kvm *kvm_create_vm(void)
spin_lock_init(&kvm->irq_routing_lock);
INIT_HLIST_HEAD(&kvm->mask_notifier_list);
INIT_HLIST_HEAD(&kvm->irq_ack_notifier_list);
+ spin_lock_init(&kvm->irq_notifier_list_lock);
#endif
#ifdef KVM_COALESCED_MMIO_PAGE_OFFSET
--
1.6.2.1
* Re: [PATCH 3/5] Move irq notifiers lists to its own locking.
2009-07-13 9:12 ` [PATCH 3/5] Move irq notifiers lists " Gleb Natapov
@ 2009-07-13 11:45 ` Michael S. Tsirkin
2009-07-13 11:48 ` Gleb Natapov
0 siblings, 1 reply; 34+ messages in thread
From: Michael S. Tsirkin @ 2009-07-13 11:45 UTC (permalink / raw)
To: Gleb Natapov; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 12:12:33PM +0300, Gleb Natapov wrote:
>
> Signed-off-by: Gleb Natapov <gleb@redhat.com>
This one is probably better off left as is; with RCU in place, list
modifications are slow anyway.
> ---
> include/linux/kvm_host.h | 1 +
> virt/kvm/irq_comm.c | 16 ++++++++--------
> virt/kvm/kvm_main.c | 1 +
> 3 files changed, 10 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 3f2a4fc..12f4ee2 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -165,6 +165,7 @@ struct kvm {
> spinlock_t irq_routing_lock;
> struct hlist_head mask_notifier_list;
> struct hlist_head irq_ack_notifier_list;
> + spinlock_t irq_notifier_list_lock;
> #endif
>
> #ifdef KVM_ARCH_WANT_MMU_NOTIFIER
> diff --git a/virt/kvm/irq_comm.c b/virt/kvm/irq_comm.c
> index 3dbba41..59c1cde 100644
> --- a/virt/kvm/irq_comm.c
> +++ b/virt/kvm/irq_comm.c
> @@ -191,17 +191,17 @@ void kvm_notify_acked_irq(struct kvm *kvm, unsigned irqchip, unsigned pin)
> void kvm_register_irq_ack_notifier(struct kvm *kvm,
> struct kvm_irq_ack_notifier *kian)
> {
> - mutex_lock(&kvm->irq_lock);
> + spin_lock(&kvm->irq_notifier_list_lock);
> hlist_add_head_rcu(&kian->link, &kvm->irq_ack_notifier_list);
> - mutex_unlock(&kvm->irq_lock);
> + spin_unlock(&kvm->irq_notifier_list_lock);
> }
>
> void kvm_unregister_irq_ack_notifier(struct kvm *kvm,
> struct kvm_irq_ack_notifier *kian)
> {
> - mutex_lock(&kvm->irq_lock);
> + spin_lock(&kvm->irq_notifier_list_lock);
> hlist_del_init_rcu(&kian->link);
> - mutex_unlock(&kvm->irq_lock);
> + spin_unlock(&kvm->irq_notifier_list_lock);
> synchronize_rcu();
> }
>
> @@ -247,18 +247,18 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id)
> void kvm_register_irq_mask_notifier(struct kvm *kvm, int irq,
> struct kvm_irq_mask_notifier *kimn)
> {
> - mutex_lock(&kvm->irq_lock);
> + spin_lock(&kvm->irq_notifier_list_lock);
> kimn->irq = irq;
> hlist_add_head_rcu(&kimn->link, &kvm->mask_notifier_list);
> - mutex_unlock(&kvm->irq_lock);
> + spin_unlock(&kvm->irq_notifier_list_lock);
> }
>
> void kvm_unregister_irq_mask_notifier(struct kvm *kvm, int irq,
> struct kvm_irq_mask_notifier *kimn)
> {
> - mutex_lock(&kvm->irq_lock);
> + spin_lock(&kvm->irq_notifier_list_lock);
> hlist_del_rcu(&kimn->link);
> - mutex_unlock(&kvm->irq_lock);
> + spin_unlock(&kvm->irq_notifier_list_lock);
> synchronize_rcu();
> }
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 4e31634..018bde8 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -948,6 +948,7 @@ static struct kvm *kvm_create_vm(void)
> spin_lock_init(&kvm->irq_routing_lock);
> INIT_HLIST_HEAD(&kvm->mask_notifier_list);
> INIT_HLIST_HEAD(&kvm->irq_ack_notifier_list);
> + spin_lock_init(&kvm->irq_notifier_list_lock);
> #endif
>
> #ifdef KVM_COALESCED_MMIO_PAGE_OFFSET
> --
> 1.6.2.1
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
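The locking pattern the patch above adopts — writers serialized on a spinlock, readers traversing the notifier list with no lock at all — can be sketched in userspace C. All names below are illustrative; a pthread mutex and C11 atomics stand in for the kernel's spinlock and RCU publication semantics, and real RCU additionally requires a grace period (synchronize_rcu()) before a removed entry may be freed:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

struct ack_notifier {
	int gsi;
	struct ack_notifier *_Atomic next;
};

static struct ack_notifier *_Atomic notifier_list = NULL;
/* Writer-side lock: plays the role of kvm->irq_notifier_list_lock. */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Writer: plays the role of kvm_register_irq_ack_notifier().
 * The lock serializes concurrent registrations against each other only;
 * readers never take it. */
static void register_notifier(struct ack_notifier *n)
{
	pthread_mutex_lock(&list_lock);
	atomic_store_explicit(&n->next,
			      atomic_load_explicit(&notifier_list,
						   memory_order_relaxed),
			      memory_order_relaxed);
	/* Release store publishes the fully initialized node to readers. */
	atomic_store_explicit(&notifier_list, n, memory_order_release);
	pthread_mutex_unlock(&list_lock);
}

/* Reader: plays the role of kvm_notify_acked_irq(); takes no lock.
 * (Unregistration is omitted: safely freeing a removed node is exactly
 * what RCU's grace period handles in the kernel version.) */
static int count_notifiers_for(int gsi)
{
	int hits = 0;
	struct ack_notifier *n =
		atomic_load_explicit(&notifier_list, memory_order_acquire);
	for (; n; n = atomic_load_explicit(&n->next, memory_order_acquire))
		if (n->gsi == gsi)
			hits++;
	return hits;
}
```

Note how the writer-side lock serializes only mutations; the traversal touches no lock, which is the property the series is after on the injection path.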
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 3/5] Move irq notifiers lists to its own locking.
2009-07-13 11:45 ` Michael S. Tsirkin
@ 2009-07-13 11:48 ` Gleb Natapov
2009-07-13 14:23 ` Michael S. Tsirkin
0 siblings, 1 reply; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 11:48 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 02:45:51PM +0300, Michael S. Tsirkin wrote:
> On Mon, Jul 13, 2009 at 12:12:33PM +0300, Gleb Natapov wrote:
> >
> > Signed-off-by: Gleb Natapov <gleb@redhat.com>
>
> This one is probably better off left as is,
What do you mean "as is"?
> with RCU in place list modifications are slow anyway.
>
--
Gleb.
* Re: [PATCH 3/5] Move irq notifiers lists to its own locking.
2009-07-13 11:48 ` Gleb Natapov
@ 2009-07-13 14:23 ` Michael S. Tsirkin
2009-07-13 14:37 ` Gleb Natapov
0 siblings, 1 reply; 34+ messages in thread
From: Michael S. Tsirkin @ 2009-07-13 14:23 UTC (permalink / raw)
To: Gleb Natapov; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 02:48:44PM +0300, Gleb Natapov wrote:
> On Mon, Jul 13, 2009 at 02:45:51PM +0300, Michael S. Tsirkin wrote:
> > On Mon, Jul 13, 2009 at 12:12:33PM +0300, Gleb Natapov wrote:
> > >
> > > Signed-off-by: Gleb Natapov <gleb@redhat.com>
> >
> > This one is probably better off left as is,
> What do you mean "as is"?
This is a slow operation. It seems that we could use irq_lock, or switch
to slots_lock or kvm->lock here. Why do we need another one?
> > with RCU in place list modifications are slow anyway.
> >
* Re: [PATCH 3/5] Move irq notifiers lists to its own locking.
2009-07-13 14:23 ` Michael S. Tsirkin
@ 2009-07-13 14:37 ` Gleb Natapov
2009-07-13 14:49 ` Michael S. Tsirkin
2009-07-13 16:23 ` Marcelo Tosatti
0 siblings, 2 replies; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 14:37 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 05:23:20PM +0300, Michael S. Tsirkin wrote:
> On Mon, Jul 13, 2009 at 02:48:44PM +0300, Gleb Natapov wrote:
> > On Mon, Jul 13, 2009 at 02:45:51PM +0300, Michael S. Tsirkin wrote:
> > > On Mon, Jul 13, 2009 at 12:12:33PM +0300, Gleb Natapov wrote:
> > > >
> > > > Signed-off-by: Gleb Natapov <gleb@redhat.com>
> > >
> > > This one is probably better off left as is,
> > What do you mean "as is"?
>
> This is a slow operation. It seems that we could use irq_lock or switch
> to slot lock or kvm lock here. Why do we need another one?
>
irq_lock is completely removed. So either we don't remove it and use it
here (and since we don't need a mutex, we change it to a spinlock too), or we
add another lock with a name that actually tells us its purpose. I prefer the
second option. I am not sure you can use kvm->lock without deadlock, and
slots_lock? How is that connected to slots management?!
And this is not about the speed of the operation. It is about making the
reader lockless.
> > > with RCU in place list modifications are slow anyway.
> > >
--
Gleb.
* Re: [PATCH 3/5] Move irq notifiers lists to its own locking.
2009-07-13 14:37 ` Gleb Natapov
@ 2009-07-13 14:49 ` Michael S. Tsirkin
2009-07-13 15:23 ` Gleb Natapov
2009-07-13 16:23 ` Marcelo Tosatti
1 sibling, 1 reply; 34+ messages in thread
From: Michael S. Tsirkin @ 2009-07-13 14:49 UTC (permalink / raw)
To: Gleb Natapov; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 05:37:50PM +0300, Gleb Natapov wrote:
> On Mon, Jul 13, 2009 at 05:23:20PM +0300, Michael S. Tsirkin wrote:
> > On Mon, Jul 13, 2009 at 02:48:44PM +0300, Gleb Natapov wrote:
> > > On Mon, Jul 13, 2009 at 02:45:51PM +0300, Michael S. Tsirkin wrote:
> > > > On Mon, Jul 13, 2009 at 12:12:33PM +0300, Gleb Natapov wrote:
> > > > >
> > > > > Signed-off-by: Gleb Natapov <gleb@redhat.com>
> > > >
> > > > This one is probably better off left as is,
> > > What do you mean "as is"?
> >
> > This is a slow operation. It seems that we could use irq_lock or switch
> > to slot lock or kvm lock here. Why do we need another one?
> >
> irq_lock is completely removed. So either we don't remove it and use it
> here (and we don't need mutex so we change it to spinlock too), or we add
> another lock with the name that actually tell us what its purpose. I prefer
> second option. I am not sure you can use kvm lock without deadlock, and
> slot lock? How this connected to slots management?!
>
> And this is not about speed of the operation. It is about making reader
> lockless.
So, to summarize: this patch does not help speed irq injection up; the
only reason to change the locking here is cosmetic. Is this a fair
summary?
> > > > with RCU in place list modifications are slow anyway.
> > > >
* Re: [PATCH 3/5] Move irq notifiers lists to its own locking.
2009-07-13 14:49 ` Michael S. Tsirkin
@ 2009-07-13 15:23 ` Gleb Natapov
2009-07-13 15:32 ` Gregory Haskins
2009-07-13 15:40 ` Michael S. Tsirkin
0 siblings, 2 replies; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 15:23 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 05:49:20PM +0300, Michael S. Tsirkin wrote:
> On Mon, Jul 13, 2009 at 05:37:50PM +0300, Gleb Natapov wrote:
> > On Mon, Jul 13, 2009 at 05:23:20PM +0300, Michael S. Tsirkin wrote:
> > > On Mon, Jul 13, 2009 at 02:48:44PM +0300, Gleb Natapov wrote:
> > > > On Mon, Jul 13, 2009 at 02:45:51PM +0300, Michael S. Tsirkin wrote:
> > > > > On Mon, Jul 13, 2009 at 12:12:33PM +0300, Gleb Natapov wrote:
> > > > > >
> > > > > > Signed-off-by: Gleb Natapov <gleb@redhat.com>
> > > > >
> > > > > This one is probably better off left as is,
> > > > What do you mean "as is"?
> > >
> > > This is a slow operation. It seems that we could use irq_lock or switch
> > > to slot lock or kvm lock here. Why do we need another one?
> > >
> > irq_lock is completely removed. So either we don't remove it and use it
> > here (and we don't need mutex so we change it to spinlock too), or we add
> > another lock with the name that actually tell us what its purpose. I prefer
> > second option. I am not sure you can use kvm lock without deadlock, and
> > slot lock? How this connected to slots management?!
> >
> > And this is not about speed of the operation. It is about making reader
> > lockless.
>
> So, to summarize: this patch does not help speed irq injection up, the
> only reason to change locking here is cosmetical. Is this a fair
> summary?
>
The whole series helps to speed irq injection up. This patch is one step
towards the goal.
>
> > > > > with RCU in place list modifications are slow anyway.
> > > > >
--
Gleb.
* Re: [PATCH 3/5] Move irq notifiers lists to its own locking.
2009-07-13 15:23 ` Gleb Natapov
@ 2009-07-13 15:32 ` Gregory Haskins
2009-07-13 15:40 ` Michael S. Tsirkin
1 sibling, 0 replies; 34+ messages in thread
From: Gregory Haskins @ 2009-07-13 15:32 UTC (permalink / raw)
To: Gleb Natapov; +Cc: Michael S. Tsirkin, kvm, avi, mtosatti
Gleb Natapov wrote:
> On Mon, Jul 13, 2009 at 05:49:20PM +0300, Michael S. Tsirkin wrote:
>
>> On Mon, Jul 13, 2009 at 05:37:50PM +0300, Gleb Natapov wrote:
>>
>>> On Mon, Jul 13, 2009 at 05:23:20PM +0300, Michael S. Tsirkin wrote:
>>>
>>>> On Mon, Jul 13, 2009 at 02:48:44PM +0300, Gleb Natapov wrote:
>>>>
>>>>> On Mon, Jul 13, 2009 at 02:45:51PM +0300, Michael S. Tsirkin wrote:
>>>>>
>>>>>> On Mon, Jul 13, 2009 at 12:12:33PM +0300, Gleb Natapov wrote:
>>>>>>
>>>>>>> Signed-off-by: Gleb Natapov <gleb@redhat.com>
>>>>>>>
>>>>>> This one is probably better off left as is,
>>>>>>
>>>>> What do you mean "as is"?
>>>>>
>>>> This is a slow operation. It seems that we could use irq_lock or switch
>>>> to slot lock or kvm lock here. Why do we need another one?
>>>>
>>>>
>>> irq_lock is completely removed. So either we don't remove it and use it
>>> here (and we don't need mutex so we change it to spinlock too), or we add
>>> another lock with the name that actually tell us what its purpose. I prefer
>>> second option. I am not sure you can use kvm lock without deadlock, and
>>> slot lock? How this connected to slots management?!
>>>
>>> And this is not about speed of the operation. It is about making reader
>>> lockless.
>>>
>> So, to summarize: this patch does not help speed irq injection up, the
>> only reason to change locking here is cosmetical. Is this a fair
>> summary?
>>
>>
> The whole series helps to speed irq injection up. This patch is one step
> towards the goal.
>
FWIW: Improving the injection path in the manner Gleb is proposing will
pave the way to skipping the work-queue deferral in the irqfd signal
path. This is a good thing.
Regards,
-Greg
* Re: [PATCH 3/5] Move irq notifiers lists to its own locking.
2009-07-13 15:23 ` Gleb Natapov
2009-07-13 15:32 ` Gregory Haskins
@ 2009-07-13 15:40 ` Michael S. Tsirkin
2009-07-13 16:28 ` Gleb Natapov
1 sibling, 1 reply; 34+ messages in thread
From: Michael S. Tsirkin @ 2009-07-13 15:40 UTC (permalink / raw)
To: Gleb Natapov; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 06:23:11PM +0300, Gleb Natapov wrote:
> > So, to summarize: this patch does not help speed irq injection up, the
> > only reason to change locking here is cosmetical. Is this a fair
> > summary?
> >
> The whole series helps to speed irq injection up. This patch is one step
> towards the goal.
Let me rephrase: it's possible to drop patches 2 and 3 from the series
and irq injection will still be lockless. Correct?
--
MST
* Re: [PATCH 3/5] Move irq notifiers lists to its own locking.
2009-07-13 15:40 ` Michael S. Tsirkin
@ 2009-07-13 16:28 ` Gleb Natapov
0 siblings, 0 replies; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 16:28 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 06:40:07PM +0300, Michael S. Tsirkin wrote:
> On Mon, Jul 13, 2009 at 06:23:11PM +0300, Gleb Natapov wrote:
> > > So, to summarize: this patch does not help speed irq injection up, the
> > > only reason to change locking here is cosmetical. Is this a fair
> > > summary?
> > >
> > The whole series helps to speed irq injection up. This patch is one step
> > towards the goal.
>
> Let me rephrase: it's possible to drop patches 2 and 3 from the series
> and irq injection will still be lockless. Correct?
>
You can use any one of the million locks available in Linux that is not
taken at the moment these functions are called. So yes, you can drop
them.
--
Gleb.
* Re: [PATCH 3/5] Move irq notifiers lists to its own locking.
2009-07-13 14:37 ` Gleb Natapov
2009-07-13 14:49 ` Michael S. Tsirkin
@ 2009-07-13 16:23 ` Marcelo Tosatti
2009-07-13 16:31 ` Marcelo Tosatti
1 sibling, 1 reply; 34+ messages in thread
From: Marcelo Tosatti @ 2009-07-13 16:23 UTC (permalink / raw)
To: Gleb Natapov; +Cc: Michael S. Tsirkin, kvm, avi
On Mon, Jul 13, 2009 at 05:37:50PM +0300, Gleb Natapov wrote:
> On Mon, Jul 13, 2009 at 05:23:20PM +0300, Michael S. Tsirkin wrote:
> > On Mon, Jul 13, 2009 at 02:48:44PM +0300, Gleb Natapov wrote:
> > > On Mon, Jul 13, 2009 at 02:45:51PM +0300, Michael S. Tsirkin wrote:
> > > > On Mon, Jul 13, 2009 at 12:12:33PM +0300, Gleb Natapov wrote:
> > > > >
> > > > > Signed-off-by: Gleb Natapov <gleb@redhat.com>
> > > >
> > > > This one is probably better off left as is,
> > > What do you mean "as is"?
> >
> > This is a slow operation. It seems that we could use irq_lock or switch
> > to slot lock or kvm lock here. Why do we need another one?
> >
> irq_lock is completely removed. So either we don't remove it and use it
> here (and we don't need mutex so we change it to spinlock too), or we add
> another lock with the name that actually tell us what its purpose. I prefer
> second option. I am not sure you can use kvm lock without deadlock, and
> slot lock? How this connected to slots management?!
slots_lock is just a bad name now. See, slots_lock is taken for read on
every exit, so taking slots_lock for write means all guests are stopped
at a known synchronization point.
If the write side of the operation is very infrequent (such as
registration of irq ack/mask notifiers), down_write(slots_lock) works as
a simpler replacement for RCU.
We want to get rid of taking slots_lock on every exit at some point, though.
> And this is not about speed of the operation. It is about making reader
> lockless.
>
> > > > with RCU in place list modifications are slow anyway.
> > > >
> > > > > --- a/virt/kvm/kvm_main.c
> > > > > +++ b/virt/kvm/kvm_main.c
> > > > > @@ -948,6 +948,7 @@ static struct kvm *kvm_create_vm(void)
> > > > > spin_lock_init(&kvm->irq_routing_lock);
> > > > > INIT_HLIST_HEAD(&kvm->mask_notifier_list);
> > > > > INIT_HLIST_HEAD(&kvm->irq_ack_notifier_list);
> > > > > + spin_lock_init(&kvm->irq_notifier_list_lock);
> > > > > #endif
> > > > >
> > > > > #ifdef KVM_COALESCED_MMIO_PAGE_OFFSET
> > > > > --
> > > > > 1.6.2.1
> > > > >
> > > > > --
> > > > > To unsubscribe from this list: send the line "unsubscribe kvm" in
> > > > > the body of a message to majordomo@vger.kernel.org
> > > > > More majordomo info at http://vger.kernel.org/majordomo-info.html
> > >
> > > --
> > > Gleb.
>
> --
> Gleb.
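The notifier-list patch quoted above pairs a small spinlock on the write side with RCU list traversal on the read side. That split can be approximated in portable userspace C — a sketch only: it models the publish step with a writer mutex plus release/acquire atomics, and omits the grace period that synchronize_rcu() provides before a removed node could be freed. All names are hypothetical:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

struct notifier {
    int irq;
    struct notifier *_Atomic next;   /* readers load with acquire */
};

/* Writers only, like irq_notifier_list_lock in the patch. */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct notifier *_Atomic head;

/* Write side: serialized by the lock, publishes the new node with a
 * release store, much like hlist_add_head_rcu() under the spinlock. */
void add_notifier(struct notifier *n)
{
    pthread_mutex_lock(&list_lock);
    atomic_store_explicit(&n->next,
                          atomic_load_explicit(&head, memory_order_relaxed),
                          memory_order_relaxed);
    atomic_store_explicit(&head, n, memory_order_release);
    pthread_mutex_unlock(&list_lock);
}

/* Read side: lockless traversal, like the RCU reader. */
int count_for_irq(int irq)
{
    int n = 0;
    for (struct notifier *p = atomic_load_explicit(&head, memory_order_acquire);
         p != NULL;
         p = atomic_load_explicit(&p->next, memory_order_acquire))
        if (p->irq == irq)
            n++;
    return n;
}
```

Readers never take the mutex, so registration cost stays off the injection path; actually freeing an unlinked node would still require a real grace period.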
* Re: [PATCH 3/5] Move irq notifiers lists to its own locking.
2009-07-13 16:23 ` Marcelo Tosatti
@ 2009-07-13 16:31 ` Marcelo Tosatti
2009-07-13 16:35 ` Gleb Natapov
0 siblings, 1 reply; 34+ messages in thread
From: Marcelo Tosatti @ 2009-07-13 16:31 UTC (permalink / raw)
To: Gleb Natapov; +Cc: Michael S. Tsirkin, kvm, avi
On Mon, Jul 13, 2009 at 01:23:38PM -0300, Marcelo Tosatti wrote:
> On Mon, Jul 13, 2009 at 05:37:50PM +0300, Gleb Natapov wrote:
> > On Mon, Jul 13, 2009 at 05:23:20PM +0300, Michael S. Tsirkin wrote:
> > > On Mon, Jul 13, 2009 at 02:48:44PM +0300, Gleb Natapov wrote:
> > > > On Mon, Jul 13, 2009 at 02:45:51PM +0300, Michael S. Tsirkin wrote:
> > > > > On Mon, Jul 13, 2009 at 12:12:33PM +0300, Gleb Natapov wrote:
> > > > > >
> > > > > > Signed-off-by: Gleb Natapov <gleb@redhat.com>
> > > > >
> > > > > This one is probably better off left as is,
> > > > What do you mean "as is"?
> > >
> > > This is a slow operation. It seems that we could use irq_lock or switch
> > > to slot lock or kvm lock here. Why do we need another one?
> > >
> > irq_lock is completely removed. So either we don't remove it and use it
> > here (and since we don't need a mutex we change it to a spinlock too), or we
> > add another lock with a name that actually tells us its purpose. I prefer
> > the second option. I am not sure you can use the kvm lock without deadlock,
> > and slots_lock? How is this connected to slots management?!
>
> slots_lock is just a bad name now. See slots_lock is taken for read on
> every exit. So taking slots_lock for write means all guests are stopped
^^^^^^
all vcpus
> in a known synchronization state.
>
> If the write side of the operation is very infrequent (such as
> registration of irq ack/mask notifiers), down_write(slots_lock) works as
> a simpler replacement for RCU.
>
> We want to get rid of slots_lock on every exit at some point, though.
>
> > And this is not about the speed of the operation. It is about making the
> > reader lockless.
* Re: [PATCH 3/5] Move irq notifiers lists to its own locking.
2009-07-13 16:31 ` Marcelo Tosatti
@ 2009-07-13 16:35 ` Gleb Natapov
2009-07-13 16:43 ` Marcelo Tosatti
0 siblings, 1 reply; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 16:35 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: Michael S. Tsirkin, kvm, avi
On Mon, Jul 13, 2009 at 01:31:06PM -0300, Marcelo Tosatti wrote:
> > slots_lock is just a bad name now. See slots_lock is taken for read on
> > every exit. So taking slots_lock for write means all guests are stopped
> ^^^^^^
>
> all vcpus
>
That is a very good reason to not want slots_lock near the interrupt
injection path.
--
Gleb.
* Re: [PATCH 3/5] Move irq notifiers lists to its own locking.
2009-07-13 16:35 ` Gleb Natapov
@ 2009-07-13 16:43 ` Marcelo Tosatti
0 siblings, 0 replies; 34+ messages in thread
From: Marcelo Tosatti @ 2009-07-13 16:43 UTC (permalink / raw)
To: Gleb Natapov; +Cc: Michael S. Tsirkin, kvm, avi
On Mon, Jul 13, 2009 at 07:35:41PM +0300, Gleb Natapov wrote:
> On Mon, Jul 13, 2009 at 01:31:06PM -0300, Marcelo Tosatti wrote:
> > > slots_lock is just a bad name now. See slots_lock is taken for read on
> > > every exit. So taking slots_lock for write means all guests are stopped
> > ^^^^^^
> >
> > all vcpus
> >
> That is a very good reason to not want slots_lock near the interrupt
> injection path.
Indeed.
* [PATCH 4/5] Move IO APIC to its own lock.
2009-07-13 9:12 [PATCH 0/5][RFC] more fine grained locking for IRQ injection Gleb Natapov
` (2 preceding siblings ...)
2009-07-13 9:12 ` [PATCH 3/5] Move irq notifiers lists " Gleb Natapov
@ 2009-07-13 9:12 ` Gleb Natapov
2009-07-13 9:12 ` [PATCH 5/5] Drop kvm->irq_lock lock Gleb Natapov
2009-07-13 13:23 ` [PATCH 0/5][RFC] more fine grained locking for IRQ injection Michael S. Tsirkin
5 siblings, 0 replies; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 9:12 UTC (permalink / raw)
To: kvm; +Cc: avi, mtosatti
Signed-off-by: Gleb Natapov <gleb@redhat.com>
---
arch/ia64/kvm/kvm-ia64.c | 27 ++++++++++++++++++------
arch/x86/kvm/lapic.c | 5 +---
arch/x86/kvm/x86.c | 30 ++++++++++++++++++---------
virt/kvm/ioapic.c | 50 ++++++++++++++++++++++++++++-----------------
virt/kvm/ioapic.h | 1 +
5 files changed, 73 insertions(+), 40 deletions(-)
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index 0ad09f0..dd7ef2d 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -850,9 +850,16 @@ static int kvm_vm_ioctl_get_irqchip(struct kvm *kvm,
r = 0;
switch (chip->chip_id) {
- case KVM_IRQCHIP_IOAPIC:
- memcpy(&chip->chip.ioapic, ioapic_irqchip(kvm),
- sizeof(struct kvm_ioapic_state));
+ case KVM_IRQCHIP_IOAPIC: {
+ struct kvm_ioapic *ioapic = ioapic_irqchip(kvm);
+ if (ioapic) {
+ spin_lock(&ioapic->lock);
+ memcpy(&chip->chip.ioapic, ioapic,
+ sizeof(struct kvm_ioapic_state));
+ spin_unlock(&ioapic->lock);
+ } else
+ r = -EINVAL;
+ }
break;
default:
r = -EINVAL;
@@ -867,10 +874,16 @@ static int kvm_vm_ioctl_set_irqchip(struct kvm *kvm, struct kvm_irqchip *chip)
r = 0;
switch (chip->chip_id) {
- case KVM_IRQCHIP_IOAPIC:
- memcpy(ioapic_irqchip(kvm),
- &chip->chip.ioapic,
- sizeof(struct kvm_ioapic_state));
+ case KVM_IRQCHIP_IOAPIC: {
+ struct kvm_ioapic *ioapic = ioapic_irqchip(kvm);
+ if (ioapic) {
+ spin_lock(&ioapic->lock);
+ memcpy(ioapic, &chip->chip.ioapic,
+ sizeof(struct kvm_ioapic_state));
+ spin_unlock(&ioapic->lock);
+ } else
+ r = -EINVAL;
+ }
break;
default:
r = -EINVAL;
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index e2e2849..61ffcfc 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -471,11 +471,8 @@ static void apic_set_eoi(struct kvm_lapic *apic)
trigger_mode = IOAPIC_LEVEL_TRIG;
else
trigger_mode = IOAPIC_EDGE_TRIG;
- if (!(apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_DIRECTED_EOI)) {
- mutex_lock(&apic->vcpu->kvm->irq_lock);
+ if (!(apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_DIRECTED_EOI))
kvm_ioapic_update_eoi(apic->vcpu->kvm, vector, trigger_mode);
- mutex_unlock(&apic->vcpu->kvm->irq_lock);
- }
}
static void apic_send_ipi(struct kvm_lapic *apic)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 00844eb..7d99d4f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2005,10 +2005,16 @@ static int kvm_vm_ioctl_get_irqchip(struct kvm *kvm, struct kvm_irqchip *chip)
&pic_irqchip(kvm)->pics[1],
sizeof(struct kvm_pic_state));
break;
- case KVM_IRQCHIP_IOAPIC:
- memcpy(&chip->chip.ioapic,
- ioapic_irqchip(kvm),
- sizeof(struct kvm_ioapic_state));
+ case KVM_IRQCHIP_IOAPIC: {
+ struct kvm_ioapic *ioapic = ioapic_irqchip(kvm);
+ if (ioapic) {
+ spin_lock(&ioapic->lock);
+ memcpy(&chip->chip.ioapic, ioapic,
+ sizeof(struct kvm_ioapic_state));
+ spin_unlock(&ioapic->lock);
+ } else
+ r = -EINVAL;
+ }
break;
default:
r = -EINVAL;
@@ -2037,12 +2043,16 @@ static int kvm_vm_ioctl_set_irqchip(struct kvm *kvm, struct kvm_irqchip *chip)
sizeof(struct kvm_pic_state));
spin_unlock(&pic_irqchip(kvm)->lock);
break;
- case KVM_IRQCHIP_IOAPIC:
- mutex_lock(&kvm->irq_lock);
- memcpy(ioapic_irqchip(kvm),
- &chip->chip.ioapic,
- sizeof(struct kvm_ioapic_state));
- mutex_unlock(&kvm->irq_lock);
+ case KVM_IRQCHIP_IOAPIC: {
+ struct kvm_ioapic *ioapic = ioapic_irqchip(kvm);
+ if (ioapic) {
+ spin_lock(&ioapic->lock);
+ memcpy(ioapic, &chip->chip.ioapic,
+ sizeof(struct kvm_ioapic_state));
+ spin_unlock(&ioapic->lock);
+ } else
+ r = -EINVAL;
+ }
break;
default:
r = -EINVAL;
diff --git a/virt/kvm/ioapic.c b/virt/kvm/ioapic.c
index b91fbb2..c7892ea 100644
--- a/virt/kvm/ioapic.c
+++ b/virt/kvm/ioapic.c
@@ -182,6 +182,7 @@ int kvm_ioapic_set_irq(struct kvm_ioapic *ioapic, int irq, int level)
union kvm_ioapic_redirect_entry entry;
int ret = 1;
+ spin_lock(&ioapic->lock);
if (irq >= 0 && irq < IOAPIC_NUM_PINS) {
entry = ioapic->redirtbl[irq];
level ^= entry.fields.polarity;
@@ -196,34 +197,44 @@ int kvm_ioapic_set_irq(struct kvm_ioapic *ioapic, int irq, int level)
}
trace_kvm_ioapic_set_irq(entry.bits, irq, ret == 0);
}
+ spin_unlock(&ioapic->lock);
+
return ret;
}
-static void __kvm_ioapic_update_eoi(struct kvm_ioapic *ioapic, int pin,
- int trigger_mode)
+static void __kvm_ioapic_update_eoi(struct kvm_ioapic *ioapic, int vector,
+ int trigger_mode)
{
- union kvm_ioapic_redirect_entry *ent;
+ int i;
+
+ for (i = 0; i < IOAPIC_NUM_PINS; i++) {
+ union kvm_ioapic_redirect_entry *ent = &ioapic->redirtbl[i];
+
+ if (ent->fields.vector != vector)
+ continue;
- ent = &ioapic->redirtbl[pin];
+ /* ack notifier may call back into ioapic via kvm_set_irq() */
+ spin_unlock(&ioapic->lock);
+ kvm_notify_acked_irq(ioapic->kvm, KVM_IRQCHIP_IOAPIC, i);
+ spin_lock(&ioapic->lock);
- kvm_notify_acked_irq(ioapic->kvm, KVM_IRQCHIP_IOAPIC, pin);
+ if (trigger_mode != IOAPIC_LEVEL_TRIG)
+ continue;
- if (trigger_mode == IOAPIC_LEVEL_TRIG) {
ASSERT(ent->fields.trig_mode == IOAPIC_LEVEL_TRIG);
ent->fields.remote_irr = 0;
- if (!ent->fields.mask && (ioapic->irr & (1 << pin)))
- ioapic_service(ioapic, pin);
+ if (!ent->fields.mask && (ioapic->irr & (1 << i)))
+ ioapic_service(ioapic, i);
}
}
void kvm_ioapic_update_eoi(struct kvm *kvm, int vector, int trigger_mode)
{
struct kvm_ioapic *ioapic = kvm->arch.vioapic;
- int i;
- for (i = 0; i < IOAPIC_NUM_PINS; i++)
- if (ioapic->redirtbl[i].fields.vector == vector)
- __kvm_ioapic_update_eoi(ioapic, i, trigger_mode);
+ spin_lock(&ioapic->lock);
+ __kvm_ioapic_update_eoi(ioapic, vector, trigger_mode);
+ spin_unlock(&ioapic->lock);
}
static inline struct kvm_ioapic *to_ioapic(struct kvm_io_device *dev)
@@ -248,8 +259,8 @@ static int ioapic_mmio_read(struct kvm_io_device *this, gpa_t addr, int len,
ioapic_debug("addr %lx\n", (unsigned long)addr);
ASSERT(!(addr & 0xf)); /* check alignment */
- mutex_lock(&ioapic->kvm->irq_lock);
addr &= 0xff;
+ spin_lock(&ioapic->lock);
switch (addr) {
case IOAPIC_REG_SELECT:
result = ioapic->ioregsel;
@@ -263,6 +274,8 @@ static int ioapic_mmio_read(struct kvm_io_device *this, gpa_t addr, int len,
result = 0;
break;
}
+ spin_unlock(&ioapic->lock);
+
switch (len) {
case 8:
*(u64 *) val = result;
@@ -275,7 +288,6 @@ static int ioapic_mmio_read(struct kvm_io_device *this, gpa_t addr, int len,
default:
printk(KERN_WARNING "ioapic: wrong length %d\n", len);
}
- mutex_unlock(&ioapic->kvm->irq_lock);
return 0;
}
@@ -291,15 +303,15 @@ static int ioapic_mmio_write(struct kvm_io_device *this, gpa_t addr, int len,
(void*)addr, len, val);
ASSERT(!(addr & 0xf)); /* check alignment */
- mutex_lock(&ioapic->kvm->irq_lock);
if (len == 4 || len == 8)
data = *(u32 *) val;
else {
printk(KERN_WARNING "ioapic: Unsupported size %d\n", len);
- goto unlock;
+ return 0;
}
addr &= 0xff;
+ spin_lock(&ioapic->lock);
switch (addr) {
case IOAPIC_REG_SELECT:
ioapic->ioregsel = data;
@@ -310,15 +322,14 @@ static int ioapic_mmio_write(struct kvm_io_device *this, gpa_t addr, int len,
break;
#ifdef CONFIG_IA64
case IOAPIC_REG_EOI:
- kvm_ioapic_update_eoi(ioapic->kvm, data, IOAPIC_LEVEL_TRIG);
+ __kvm_ioapic_update_eoi(ioapic, data, IOAPIC_LEVEL_TRIG);
break;
#endif
default:
break;
}
-unlock:
- mutex_unlock(&ioapic->kvm->irq_lock);
+ spin_unlock(&ioapic->lock);
return 0;
}
@@ -346,6 +357,7 @@ int kvm_ioapic_init(struct kvm *kvm)
ioapic = kzalloc(sizeof(struct kvm_ioapic), GFP_KERNEL);
if (!ioapic)
return -ENOMEM;
+ spin_lock_init(&ioapic->lock);
kvm->arch.vioapic = ioapic;
kvm_ioapic_reset(ioapic);
kvm_iodevice_init(&ioapic->dev, &ioapic_mmio_ops);
diff --git a/virt/kvm/ioapic.h b/virt/kvm/ioapic.h
index 7080b71..557107e 100644
--- a/virt/kvm/ioapic.h
+++ b/virt/kvm/ioapic.h
@@ -44,6 +44,7 @@ struct kvm_ioapic {
struct kvm_io_device dev;
struct kvm *kvm;
void (*ack_notifier)(void *opaque, int irq);
+ spinlock_t lock;
};
#ifdef DEBUG
--
1.6.2.1
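One subtle point in the patch above is the comment "ack notifier may call back into ioapic via kvm_set_irq()": ioapic->lock is dropped around kvm_notify_acked_irq() so the callback can re-take it without self-deadlock. A hedged userspace sketch of that drop-and-retake pattern, with hypothetical names and a pthread mutex standing in for the spinlock:

```c
#include <pthread.h>

static pthread_mutex_t ioapic_lock = PTHREAD_MUTEX_INITIALIZER;
static int irr;            /* pending-irq bitmap, as in the real ioapic */
static int reinjected;

/* Stand-in for kvm_set_irq(): takes the ioapic lock itself. */
void set_irq(int pin)
{
    pthread_mutex_lock(&ioapic_lock);
    irr |= 1 << pin;
    reinjected++;
    pthread_mutex_unlock(&ioapic_lock);
}

/* Ack notifier that re-injects, as an assigned device might. */
static void ack_notifier(int pin)
{
    set_irq(pin);
}

/* EOI handling: must not hold the lock across the notifier,
 * mirroring how __kvm_ioapic_update_eoi() drops ioapic->lock. */
void update_eoi(int pin)
{
    pthread_mutex_lock(&ioapic_lock);
    irr &= ~(1 << pin);

    pthread_mutex_unlock(&ioapic_lock);  /* drop: callback may take the lock */
    ack_notifier(pin);
    pthread_mutex_lock(&ioapic_lock);    /* re-take; state may have changed */

    pthread_mutex_unlock(&ioapic_lock);
}
```

With a non-recursive lock, calling set_irq() from the notifier while still holding ioapic_lock would deadlock; the price of dropping it is that the table state can change across the call, so code using this pattern must tolerate that.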
* [PATCH 5/5] Drop kvm->irq_lock lock.
2009-07-13 9:12 [PATCH 0/5][RFC] more fine grained locking for IRQ injection Gleb Natapov
` (3 preceding siblings ...)
2009-07-13 9:12 ` [PATCH 4/5] Move IO APIC to its own lock Gleb Natapov
@ 2009-07-13 9:12 ` Gleb Natapov
2009-07-13 13:23 ` [PATCH 0/5][RFC] more fine grained locking for IRQ injection Michael S. Tsirkin
5 siblings, 0 replies; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 9:12 UTC (permalink / raw)
To: kvm; +Cc: avi, mtosatti
The only thing it protects now is interrupt injection into the lapic, and
this can work locklessly. Even now, with kvm->irq_lock in place, access
to the lapic is not entirely serialized, since vcpu accesses don't take
kvm->irq_lock.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
---
arch/ia64/kvm/kvm-ia64.c | 2 --
arch/x86/kvm/i8254.c | 2 --
arch/x86/kvm/lapic.c | 2 --
arch/x86/kvm/x86.c | 2 --
include/linux/kvm_host.h | 1 -
virt/kvm/eventfd.c | 2 --
virt/kvm/irq_comm.c | 6 +-----
virt/kvm/kvm_main.c | 5 +----
8 files changed, 2 insertions(+), 20 deletions(-)
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index dd7ef2d..8f1fc3a 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -998,10 +998,8 @@ long kvm_arch_vm_ioctl(struct file *filp,
goto out;
if (irqchip_in_kernel(kvm)) {
__s32 status;
- mutex_lock(&kvm->irq_lock);
status = kvm_set_irq(kvm, KVM_USERSPACE_IRQ_SOURCE_ID,
irq_event.irq, irq_event.level);
- mutex_unlock(&kvm->irq_lock);
if (ioctl == KVM_IRQ_LINE_STATUS) {
irq_event.status = status;
if (copy_to_user(argp, &irq_event,
diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
index e1b9016..6fde80a 100644
--- a/arch/x86/kvm/i8254.c
+++ b/arch/x86/kvm/i8254.c
@@ -660,10 +660,8 @@ static void __inject_pit_timer_intr(struct kvm *kvm)
struct kvm_vcpu *vcpu;
int i;
- mutex_lock(&kvm->irq_lock);
kvm_set_irq(kvm, kvm->arch.vpit->irq_source_id, 0, 1);
kvm_set_irq(kvm, kvm->arch.vpit->irq_source_id, 0, 0);
- mutex_unlock(&kvm->irq_lock);
/*
* Provides NMI watchdog support via Virtual Wire mode.
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 61ffcfc..0380107 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -501,9 +501,7 @@ static void apic_send_ipi(struct kvm_lapic *apic)
irq.trig_mode, irq.level, irq.dest_mode, irq.delivery_mode,
irq.vector);
- mutex_lock(&apic->vcpu->kvm->irq_lock);
kvm_irq_delivery_to_apic(apic->vcpu->kvm, apic, &irq);
- mutex_unlock(&apic->vcpu->kvm->irq_lock);
}
static u32 apic_get_tmcct(struct kvm_lapic *apic)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7d99d4f..74605a7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2230,10 +2230,8 @@ long kvm_arch_vm_ioctl(struct file *filp,
goto out;
if (irqchip_in_kernel(kvm)) {
__s32 status;
- mutex_lock(&kvm->irq_lock);
status = kvm_set_irq(kvm, KVM_USERSPACE_IRQ_SOURCE_ID,
irq_event.irq, irq_event.level);
- mutex_unlock(&kvm->irq_lock);
if (ioctl == KVM_IRQ_LINE_STATUS) {
irq_event.status = status;
if (copy_to_user(argp, &irq_event,
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 12f4ee2..4cea870 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -159,7 +159,6 @@ struct kvm {
struct kvm_coalesced_mmio_ring *coalesced_mmio_ring;
#endif
- struct mutex irq_lock;
#ifdef CONFIG_HAVE_KVM_IRQCHIP
struct kvm_kernel_irq_routing_entry *irq_routing;
spinlock_t irq_routing_lock;
diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
index 4092b8d..3518c49 100644
--- a/virt/kvm/eventfd.c
+++ b/virt/kvm/eventfd.c
@@ -57,10 +57,8 @@ irqfd_inject(struct work_struct *work)
struct _irqfd *irqfd = container_of(work, struct _irqfd, inject);
struct kvm *kvm = irqfd->kvm;
- mutex_lock(&kvm->irq_lock);
kvm_set_irq(kvm, KVM_USERSPACE_IRQ_SOURCE_ID, irqfd->gsi, 1);
kvm_set_irq(kvm, KVM_USERSPACE_IRQ_SOURCE_ID, irqfd->gsi, 0);
- mutex_unlock(&kvm->irq_lock);
}
/*
diff --git a/virt/kvm/irq_comm.c b/virt/kvm/irq_comm.c
index 59c1cde..c54a28b 100644
--- a/virt/kvm/irq_comm.c
+++ b/virt/kvm/irq_comm.c
@@ -63,8 +63,6 @@ int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src,
int i, r = -1;
struct kvm_vcpu *vcpu, *lowest = NULL;
- WARN_ON(!mutex_is_locked(&kvm->irq_lock));
-
if (irq->dest_mode == 0 && irq->dest_id == 0xff &&
kvm_is_dm_lowest_prio(irq))
printk(KERN_INFO "kvm: apic: phys broadcast and lowest prio\n");
@@ -116,7 +114,7 @@ static int kvm_set_msi(struct kvm_kernel_irq_routing_entry *e,
return kvm_irq_delivery_to_apic(kvm, NULL, &irq);
}
-/* This should be called with the kvm->irq_lock mutex held
+/*
* Return value:
* < 0 Interrupt was ignored (masked or not delivered for other reasons)
* = 0 Interrupt was coalesced (previous irq is still pending)
@@ -130,8 +128,6 @@ int kvm_set_irq(struct kvm *kvm, int irq_source_id, int irq, int level)
trace_kvm_set_irq(irq, level, irq_source_id);
- WARN_ON(!mutex_is_locked(&kvm->irq_lock));
-
if (irq < KVM_IOAPIC_NUM_PINS) {
irq_state = (unsigned long *)&kvm->arch.irq_states[irq];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 018bde8..eb587b9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -68,7 +68,7 @@ MODULE_LICENSE("GPL");
/*
* Ordering of locks:
*
- * kvm->slots_lock --> kvm->lock --> kvm->irq_lock
+ * kvm->slots_lock --> kvm->lock
*/
DEFINE_SPINLOCK(kvm_lock);
@@ -137,7 +137,6 @@ static void kvm_assigned_dev_interrupt_work_handler(struct work_struct *work)
interrupt_work);
kvm = assigned_dev->kvm;
- mutex_lock(&kvm->irq_lock);
spin_lock_irq(&assigned_dev->assigned_dev_lock);
if (assigned_dev->irq_requested_type & KVM_DEV_IRQ_HOST_MSIX) {
struct kvm_guest_msix_entry *guest_entries =
@@ -156,7 +155,6 @@ static void kvm_assigned_dev_interrupt_work_handler(struct work_struct *work)
assigned_dev->guest_irq, 1);
spin_unlock_irq(&assigned_dev->assigned_dev_lock);
- mutex_unlock(&assigned_dev->kvm->irq_lock);
}
static irqreturn_t kvm_assigned_dev_intr(int irq, void *dev_id)
@@ -983,7 +981,6 @@ static struct kvm *kvm_create_vm(void)
kvm_io_bus_init(&kvm->pio_bus);
kvm_irqfd_init(kvm);
mutex_init(&kvm->lock);
- mutex_init(&kvm->irq_lock);
kvm_io_bus_init(&kvm->mmio_bus);
init_rwsem(&kvm->slots_lock);
atomic_set(&kvm->users_count, 1);
--
1.6.2.1
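The patch above also trims the documented ordering comment to "kvm->slots_lock --> kvm->lock". The point of such a comment is that every path taking both locks must acquire them in that one order; an ABBA pair would deadlock. A toy illustration, with names that merely echo the comment:

```c
#include <pthread.h>

static pthread_mutex_t slots_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t kvm_lock   = PTHREAD_MUTEX_INITIALIZER;
static int slots_generation, vm_state;

/* Every multi-lock path takes slots_lock first, then kvm_lock,
 * matching the "slots_lock --> lock" comment. If another path took
 * them in the opposite order, two threads could deadlock. */
int update_both(void)
{
    pthread_mutex_lock(&slots_lock);
    pthread_mutex_lock(&kvm_lock);
    slots_generation++;
    vm_state++;
    pthread_mutex_unlock(&kvm_lock);
    pthread_mutex_unlock(&slots_lock);
    return slots_generation;
}
```

Removing irq_lock from the hierarchy shrinks the set of orderings every new piece of code has to respect.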
* Re: [PATCH 0/5][RFC] more fine grained locking for IRQ injection
2009-07-13 9:12 [PATCH 0/5][RFC] more fine grained locking for IRQ injection Gleb Natapov
` (4 preceding siblings ...)
2009-07-13 9:12 ` [PATCH 5/5] Drop kvm->irq_lock lock Gleb Natapov
@ 2009-07-13 13:23 ` Michael S. Tsirkin
2009-07-13 13:28 ` Gleb Natapov
5 siblings, 1 reply; 34+ messages in thread
From: Michael S. Tsirkin @ 2009-07-13 13:23 UTC (permalink / raw)
To: Gleb Natapov; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 12:12:30PM +0300, Gleb Natapov wrote:
> kvm->irq_lock protects too much stuff, but still fails to protect
> everything it was designed to protect (see the ack notifier call in the pic). I
> want to make the IRQ injection logic use more fine-grained locking.
At least irq routing changes and notifier list changes
do not seem to be involved in irq injection.
So why do we want fine-grained locking there?
> This patch
> series splits the kvm->irq_lock mutex into smaller spinlocks, each of which
> protects only one thing. Irq routing, irq notifier lists and the ioapic gain
> their own spinlocks. The pic already uses its own lock. This patch series
> also makes interrupt injection into the lapic lockless (several
> kvm_irq_delivery_to_apic() calls may run in parallel), but access to the
> lapic was never fully locked in the first place. A VCPU could access the
> lapic in parallel with interrupt injection.
>
> The patch series is on top of my previous series that converts ack notifiers
> to the RCU locking.
>
> Gleb Natapov (5):
> Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock
> Move irq routing to its own locking.
> Move irq notifiers lists to its own locking.
> Move IO APIC to its own lock.
> Drop kvm->irq_lock lock.
>
> arch/ia64/kvm/kvm-ia64.c | 29 +++++++++++++++++++--------
> arch/x86/kvm/i8254.c | 4 +-
> arch/x86/kvm/lapic.c | 7 +-----
> arch/x86/kvm/x86.c | 32 +++++++++++++++++++-----------
> include/linux/kvm_host.h | 3 +-
> virt/kvm/eventfd.c | 2 -
> virt/kvm/ioapic.c | 47 +++++++++++++++++++++++++++------------------
> virt/kvm/ioapic.h | 1 +
> virt/kvm/irq_comm.c | 35 ++++++++++++++-------------------
> virt/kvm/kvm_main.c | 7 ++---
> 10 files changed, 92 insertions(+), 75 deletions(-)
>
* Re: [PATCH 0/5][RFC] more fine grained locking for IRQ injection
2009-07-13 13:23 ` [PATCH 0/5][RFC] more fine grained locking for IRQ injection Michael S. Tsirkin
@ 2009-07-13 13:28 ` Gleb Natapov
2009-07-13 13:53 ` Michael S. Tsirkin
0 siblings, 1 reply; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 13:28 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 04:23:36PM +0300, Michael S. Tsirkin wrote:
> On Mon, Jul 13, 2009 at 12:12:30PM +0300, Gleb Natapov wrote:
> > kvm->irq_lock protects too much stuff, but still fails to protect
> > everything it was designed to protect (see the ack notifier call in the pic). I
> > want to make the IRQ injection logic use more fine-grained locking.
>
> At least irq routing changes and notifier list changes
> do not seem to be involved in irq injection.
> So why do we want fine-grained locking there?
>
When you have one big lock and you want to eliminate it, you look at all the
things it protects and start introducing different locking for the
unrelated stuff. This is what this patch series does, so I don't really
get your point.
> > This patch
> > series splits the kvm->irq_lock mutex into smaller spinlocks, each of which
> > protects only one thing. Irq routing, irq notifier lists and the ioapic gain
> > their own spinlocks. The pic already uses its own lock. This patch series
> > also makes interrupt injection into the lapic lockless (several
> > kvm_irq_delivery_to_apic() calls may run in parallel), but access to the
> > lapic was never fully locked in the first place. A VCPU could access the
> > lapic in parallel with interrupt injection.
> >
> > The patch series is on top of my previous series that converts ack notifiers
> > to the RCU locking.
> >
> > Gleb Natapov (5):
> > Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock
> > Move irq routing to its own locking.
> > Move irq notifiers lists to its own locking.
> > Move IO APIC to its own lock.
> > Drop kvm->irq_lock lock.
> >
> > arch/ia64/kvm/kvm-ia64.c | 29 +++++++++++++++++++--------
> > arch/x86/kvm/i8254.c | 4 +-
> > arch/x86/kvm/lapic.c | 7 +-----
> > arch/x86/kvm/x86.c | 32 +++++++++++++++++++-----------
> > include/linux/kvm_host.h | 3 +-
> > virt/kvm/eventfd.c | 2 -
> > virt/kvm/ioapic.c | 47 +++++++++++++++++++++++++++------------------
> > virt/kvm/ioapic.h | 1 +
> > virt/kvm/irq_comm.c | 35 ++++++++++++++-------------------
> > virt/kvm/kvm_main.c | 7 ++---
> > 10 files changed, 92 insertions(+), 75 deletions(-)
> >
--
Gleb.
* Re: [PATCH 0/5][RFC] more fine grained locking for IRQ injection
2009-07-13 13:28 ` Gleb Natapov
@ 2009-07-13 13:53 ` Michael S. Tsirkin
2009-07-13 13:58 ` Gleb Natapov
0 siblings, 1 reply; 34+ messages in thread
From: Michael S. Tsirkin @ 2009-07-13 13:53 UTC (permalink / raw)
To: Gleb Natapov; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 04:28:09PM +0300, Gleb Natapov wrote:
> On Mon, Jul 13, 2009 at 04:23:36PM +0300, Michael S. Tsirkin wrote:
> > On Mon, Jul 13, 2009 at 12:12:30PM +0300, Gleb Natapov wrote:
> > > kvm->irq_lock protects too much stuff, but still fails to protect
> > > everything it was designed to protect (see the ack notifier call in the pic). I
> > > want to make the IRQ injection logic use more fine-grained locking.
> >
> > At least irq routing changes and notifier list changes
> > do not seem to be involved in irq injection.
> > So why do we want fine-grained locking there?
> >
> > When you have one big lock and you want to eliminate it, you look at all the
> > things it protects and start introducing different locking for the
> > unrelated stuff. This is what this patch series does, so I don't really
> > get your point.
But why do you want to eliminate it? What is the operation that this
will speed up? Is there a contention issue and which operations
contend on the lock?
> > > This patch
> > > series splits the kvm->irq_lock mutex into smaller spinlocks, each of which
> > > protects only one thing. Irq routing, irq notifier lists and the ioapic gain
> > > their own spinlocks. The pic already uses its own lock. This patch series
> > > also makes interrupt injection into the lapic lockless (several
> > > kvm_irq_delivery_to_apic() calls may run in parallel), but access to the
> > > lapic was never fully locked in the first place. A VCPU could access the
> > > lapic in parallel with interrupt injection.
> > >
> > > The patch series is on top of my previous series that converts ack notifiers
> > > to the RCU locking.
> > >
> > > Gleb Natapov (5):
> > > Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock
> > > Move irq routing to its own locking.
> > > Move irq notifiers lists to its own locking.
> > > Move IO APIC to its own lock.
> > > Drop kvm->irq_lock lock.
> > >
> > > arch/ia64/kvm/kvm-ia64.c | 29 +++++++++++++++++++--------
> > > arch/x86/kvm/i8254.c | 4 +-
> > > arch/x86/kvm/lapic.c | 7 +-----
> > > arch/x86/kvm/x86.c | 32 +++++++++++++++++++-----------
> > > include/linux/kvm_host.h | 3 +-
> > > virt/kvm/eventfd.c | 2 -
> > > virt/kvm/ioapic.c | 47 +++++++++++++++++++++++++++------------------
> > > virt/kvm/ioapic.h | 1 +
> > > virt/kvm/irq_comm.c | 35 ++++++++++++++-------------------
> > > virt/kvm/kvm_main.c | 7 ++---
> > > 10 files changed, 92 insertions(+), 75 deletions(-)
> > >
>
> --
> Gleb.
* Re: [PATCH 0/5][RFC] more fine grained locking for IRQ injection
2009-07-13 13:53 ` Michael S. Tsirkin
@ 2009-07-13 13:58 ` Gleb Natapov
2009-07-13 14:21 ` Michael S. Tsirkin
0 siblings, 1 reply; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 13:58 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 04:53:15PM +0300, Michael S. Tsirkin wrote:
> On Mon, Jul 13, 2009 at 04:28:09PM +0300, Gleb Natapov wrote:
> > On Mon, Jul 13, 2009 at 04:23:36PM +0300, Michael S. Tsirkin wrote:
> > > On Mon, Jul 13, 2009 at 12:12:30PM +0300, Gleb Natapov wrote:
> > > > kvm->irq_lock protects too much stuff, but still fails to protect
> > > > everything it was designed to protect (see the ack notifier call in the pic). I
> > > > want to make the IRQ injection logic use more fine-grained locking.
> > >
> > > At least irq routing changes and notifier list changes
> > > do not seem to be involved in irq injection.
> > > So why do we want fine-grained locking there?
> > >
> > > When you have one big lock and you want to eliminate it, you look at all the
> > > things it protects and start introducing different locking for the
> > > unrelated stuff. This is what this patch series does, so I don't really
> > > get your point.
>
> But why do you want to eliminate it? What is the operation that this
> will speed up? Is there a contention issue and which operations
> contend on the lock?
>
Interrupt injection. I want to be able to inject interrupts without the
lock at all, at least the MSI one. PCI/ISA interrupts still take a lock in
the ioapic code. It can be eliminated too if needed.
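The lockless goal stated here — MSI-style injection with no lock at all — comes down to keeping the shared per-GSI state in a form that atomic bit operations can maintain, roughly what kvm->arch.irq_states already allows. A hedged userspace sketch using C11 atomics; the names are illustrative, not the actual kvm_set_irq():

```c
#include <stdatomic.h>

/* One word per GSI; bit N means "source N currently asserts this line".
 * The line level is the OR of all source bits, so concurrent injectors
 * only need atomic fetch-or / fetch-and, no mutex. */
#define NUM_GSIS 24
static _Atomic unsigned irq_states[NUM_GSIS];

/* Returns the resulting line level (1 = asserted), analogous to how
 * kvm_set_irq() reports whether the interrupt is delivered. */
int set_irq_lockless(int gsi, int source_id, int level)
{
    unsigned mask = 1u << source_id;

    if (level)
        atomic_fetch_or(&irq_states[gsi], mask);
    else
        atomic_fetch_and(&irq_states[gsi], ~mask);

    return atomic_load(&irq_states[gsi]) != 0;
}
```

Because each source only flips its own bit, two sources asserting and deasserting the same line in parallel cannot lose each other's state — the property the per-source irq_states bitmap provides in KVM.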
> > > > This patch
> > > > series splits the kvm->irq_lock mutex into smaller spinlocks, each of which
> > > > protects only one thing. Irq routing, irq notifier lists and the ioapic gain
> > > > their own spinlocks. The pic already uses its own lock. This patch series
> > > > also makes interrupt injection into the lapic lockless (several
> > > > kvm_irq_delivery_to_apic() calls may run in parallel), but access to the
> > > > lapic was never fully locked in the first place. A VCPU could access the
> > > > lapic in parallel with interrupt injection.
> > > >
> > > > The patch series is on top of my previous series that converts ack notifiers
> > > > to the RCU locking.
> > > >
> > > > Gleb Natapov (5):
> > > > Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock
> > > > Move irq routing to its own locking.
> > > > Move irq notifiers lists to its own locking.
> > > > Move IO APIC to its own lock.
> > > > Drop kvm->irq_lock lock.
> > > >
> > > > arch/ia64/kvm/kvm-ia64.c | 29 +++++++++++++++++++--------
> > > > arch/x86/kvm/i8254.c | 4 +-
> > > > arch/x86/kvm/lapic.c | 7 +-----
> > > > arch/x86/kvm/x86.c | 32 +++++++++++++++++++-----------
> > > > include/linux/kvm_host.h | 3 +-
> > > > virt/kvm/eventfd.c | 2 -
> > > > virt/kvm/ioapic.c | 47 +++++++++++++++++++++++++++------------------
> > > > virt/kvm/ioapic.h | 1 +
> > > > virt/kvm/irq_comm.c | 35 ++++++++++++++-------------------
> > > > virt/kvm/kvm_main.c | 7 ++---
> > > > 10 files changed, 92 insertions(+), 75 deletions(-)
> > > >
> > > > --
> > > > To unsubscribe from this list: send the line "unsubscribe kvm" in
> > > > the body of a message to majordomo@vger.kernel.org
> > > > More majordomo info at http://vger.kernel.org/majordomo-info.html
> >
> > --
> > Gleb.
--
Gleb.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 0/5][RFC] more fine grained locking for IRQ injection
2009-07-13 13:58 ` Gleb Natapov
@ 2009-07-13 14:21 ` Michael S. Tsirkin
2009-07-13 14:33 ` Gleb Natapov
0 siblings, 1 reply; 34+ messages in thread
From: Michael S. Tsirkin @ 2009-07-13 14:21 UTC (permalink / raw)
To: Gleb Natapov; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 04:58:58PM +0300, Gleb Natapov wrote:
> On Mon, Jul 13, 2009 at 04:53:15PM +0300, Michael S. Tsirkin wrote:
> > On Mon, Jul 13, 2009 at 04:28:09PM +0300, Gleb Natapov wrote:
> > > On Mon, Jul 13, 2009 at 04:23:36PM +0300, Michael S. Tsirkin wrote:
> > > > On Mon, Jul 13, 2009 at 12:12:30PM +0300, Gleb Natapov wrote:
> > > > > kvm->irq_lock protects too much stuff, but it still fails to protect
> > > > > everything it was designed to protect (see the ack notifier calls in the
> > > > > pic). I want to make the IRQ injection logic use more fine-grained locking.
> > > >
> > > > At least irq routing changes and notifier list changes
> > > > do not seem to be involved in irq injection.
> > > > So why do we want fine-grained locking there?
> > > >
> > > When you have one big lock and you want to eliminate it, you look at all
> > > the things it protects and start introducing different locking for the
> > > unrelated pieces. That is what this patch series does, so I don't really
> > > get your point.
> >
> > But why do you want to eliminate it? What is the operation that this
> > will speed up? Is there a contention issue and which operations
> > contend on the lock?
> >
> Interrupt injection. I want to be able to inject interrupts without the
> lock at all.
You are talking about the lapic changes below, right? OK. But how do
the other changes help, specifically? I'm less familiar with ioapic, but
I think the irq routing and irq notifier list changes are done off the data
path.
> At least the MSI one. PCI/ISA interrupts still take a lock in the
> ioapic code; it can be eliminated too if needed.
>
> > > > > This patch
> > > > > series splits the kvm->irq_lock mutex into smaller spinlocks, each of which
> > > > > protects only one thing. Irq routing, the irq notifier lists and the ioapic
> > > > > gain their own spinlocks; the pic already uses its own lock. The series also
> > > > > makes interrupt injection to the lapic lockless (several kvm_irq_delivery_to_apic()
> > > > > calls may run in parallel), but access to the lapic was never fully locked in
> > > > > the first place: a VCPU could access the lapic in parallel with interrupt injection.
> > > > >
> > > > > The patch series is on top of my previous series that converts the ack
> > > > > notifiers to RCU locking.
> > > > >
> > > > > Gleb Natapov (5):
> > > > > Protect irq_sources_bitmap by kvm->lock instead of kvm->irq_lock
> > > > > Move irq routing to its own locking.
> > > > > Move irq notifiers lists to its own locking.
> > > > > Move IO APIC to its own lock.
> > > > > Drop kvm->irq_lock lock.
> > > > >
> > > > > arch/ia64/kvm/kvm-ia64.c | 29 +++++++++++++++++++--------
> > > > > arch/x86/kvm/i8254.c | 4 +-
> > > > > arch/x86/kvm/lapic.c | 7 +-----
> > > > > arch/x86/kvm/x86.c | 32 +++++++++++++++++++-----------
> > > > > include/linux/kvm_host.h | 3 +-
> > > > > virt/kvm/eventfd.c | 2 -
> > > > > virt/kvm/ioapic.c | 47 +++++++++++++++++++++++++++------------------
> > > > > virt/kvm/ioapic.h | 1 +
> > > > > virt/kvm/irq_comm.c | 35 ++++++++++++++-------------------
> > > > > virt/kvm/kvm_main.c | 7 ++---
> > > > > 10 files changed, 92 insertions(+), 75 deletions(-)
> > > > >
> > >
> > > --
> > > Gleb.
>
> --
> Gleb.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 0/5][RFC] more fine grained locking for IRQ injection
2009-07-13 14:21 ` Michael S. Tsirkin
@ 2009-07-13 14:33 ` Gleb Natapov
2009-07-13 14:43 ` Michael S. Tsirkin
0 siblings, 1 reply; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 14:33 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 05:21:48PM +0300, Michael S. Tsirkin wrote:
> On Mon, Jul 13, 2009 at 04:58:58PM +0300, Gleb Natapov wrote:
> > On Mon, Jul 13, 2009 at 04:53:15PM +0300, Michael S. Tsirkin wrote:
> > > On Mon, Jul 13, 2009 at 04:28:09PM +0300, Gleb Natapov wrote:
> > > > On Mon, Jul 13, 2009 at 04:23:36PM +0300, Michael S. Tsirkin wrote:
> > > > > On Mon, Jul 13, 2009 at 12:12:30PM +0300, Gleb Natapov wrote:
> > > > > > kvm->irq_lock protects too much stuff, but it still fails to protect
> > > > > > everything it was designed to protect (see the ack notifier calls in the
> > > > > > pic). I want to make the IRQ injection logic use more fine-grained locking.
> > > > >
> > > > > At least irq routing changes and notifier list changes
> > > > > do not seem to be involved in irq injection.
> > > > > So why do we want fine-grained locking there?
> > > > >
> > > > When you have one big lock and you want to eliminate it, you look at all
> > > > the things it protects and start introducing different locking for the
> > > > unrelated pieces. That is what this patch series does, so I don't really
> > > > get your point.
> > >
> > > But why do you want to eliminate it? What is the operation that this
> > > will speed up? Is there a contention issue and which operations
> > > contend on the lock?
> > >
> > Interrupt injection. I want to be able to inject interrupts without the
> > lock at all.
>
> You are talking about the lapic changes below, right? OK. But how do
> the other changes help, specifically? I'm less familiar with ioapic, but
> > I think the irq routing and irq notifier list changes are done off the data
> > path.
>
irq_routing is accessed on the irq injection path, so making it lockless is
necessary to achieve the goal. Making the ack notifiers lockless removes one
lock from the EOI path.
--
Gleb.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 0/5][RFC] more fine grained locking for IRQ injection
2009-07-13 14:33 ` Gleb Natapov
@ 2009-07-13 14:43 ` Michael S. Tsirkin
2009-07-13 15:21 ` Gleb Natapov
0 siblings, 1 reply; 34+ messages in thread
From: Michael S. Tsirkin @ 2009-07-13 14:43 UTC (permalink / raw)
To: Gleb Natapov; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 05:33:12PM +0300, Gleb Natapov wrote:
> On Mon, Jul 13, 2009 at 05:21:48PM +0300, Michael S. Tsirkin wrote:
> > On Mon, Jul 13, 2009 at 04:58:58PM +0300, Gleb Natapov wrote:
> > > On Mon, Jul 13, 2009 at 04:53:15PM +0300, Michael S. Tsirkin wrote:
> > > > On Mon, Jul 13, 2009 at 04:28:09PM +0300, Gleb Natapov wrote:
> > > > > On Mon, Jul 13, 2009 at 04:23:36PM +0300, Michael S. Tsirkin wrote:
> > > > > > On Mon, Jul 13, 2009 at 12:12:30PM +0300, Gleb Natapov wrote:
> > > > > > > kvm->irq_lock protects too much stuff, but it still fails to protect
> > > > > > > everything it was designed to protect (see the ack notifier calls in the
> > > > > > > pic). I want to make the IRQ injection logic use more fine-grained locking.
> > > > > >
> > > > > > At least irq routing changes and notifier list changes
> > > > > > do not seem to be involved in irq injection.
> > > > > > So why do we want fine-grained locking there?
> > > > > >
> > > > > When you have one big lock and you want to eliminate it, you look at all
> > > > > the things it protects and start introducing different locking for the
> > > > > unrelated pieces. That is what this patch series does, so I don't really
> > > > > get your point.
> > > >
> > > > But why do you want to eliminate it? What is the operation that this
> > > > will speed up? Is there a contention issue and which operations
> > > > contend on the lock?
> > > >
> > > Interrupt injection. I want to be able to inject interrupts without the
> > > lock at all.
> >
> > You are talking about the lapic changes below, right? OK. But how do
> > the other changes help, specifically? I'm less familiar with ioapic, but
> > > I think the irq routing and irq notifier list changes are done off the data
> > > path.
> >
> irq_routing is accessed on the irq injection path, so making it lockless is
> necessary to achieve the goal. Making the ack notifiers lockless removes one
> lock from the EOI path.
But this is done with RCU patches, right?
--
MST
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 0/5][RFC] more fine grained locking for IRQ injection
2009-07-13 14:43 ` Michael S. Tsirkin
@ 2009-07-13 15:21 ` Gleb Natapov
0 siblings, 0 replies; 34+ messages in thread
From: Gleb Natapov @ 2009-07-13 15:21 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: kvm, avi, mtosatti
On Mon, Jul 13, 2009 at 05:43:15PM +0300, Michael S. Tsirkin wrote:
> On Mon, Jul 13, 2009 at 05:33:12PM +0300, Gleb Natapov wrote:
> > On Mon, Jul 13, 2009 at 05:21:48PM +0300, Michael S. Tsirkin wrote:
> > > On Mon, Jul 13, 2009 at 04:58:58PM +0300, Gleb Natapov wrote:
> > > > On Mon, Jul 13, 2009 at 04:53:15PM +0300, Michael S. Tsirkin wrote:
> > > > > On Mon, Jul 13, 2009 at 04:28:09PM +0300, Gleb Natapov wrote:
> > > > > > On Mon, Jul 13, 2009 at 04:23:36PM +0300, Michael S. Tsirkin wrote:
> > > > > > > On Mon, Jul 13, 2009 at 12:12:30PM +0300, Gleb Natapov wrote:
> > > > > > > > kvm->irq_lock protects too much stuff, but it still fails to protect
> > > > > > > > everything it was designed to protect (see the ack notifier calls in the
> > > > > > > > pic). I want to make the IRQ injection logic use more fine-grained locking.
> > > > > > >
> > > > > > > At least irq routing changes and notifier list changes
> > > > > > > do not seem to be involved in irq injection.
> > > > > > > So why do we want fine-grained locking there?
> > > > > > >
> > > > > > When you have one big lock and you want to eliminate it, you look at all
> > > > > > the things it protects and start introducing different locking for the
> > > > > > unrelated pieces. That is what this patch series does, so I don't really
> > > > > > get your point.
> > > > >
> > > > > But why do you want to eliminate it? What is the operation that this
> > > > > will speed up? Is there a contention issue and which operations
> > > > > contend on the lock?
> > > > >
> > > > Interrupt injection. I want to be able to inject interrupts without the
> > > > lock at all.
> > >
> > > You are talking about the lapic changes below, right? OK. But how do
> > > the other changes help, specifically? I'm less familiar with ioapic, but
> > > > I think the irq routing and irq notifier list changes are done off the data
> > > > path.
> > >
> > irq_routing is accessed on the irq injection path, so making it lockless is
> > necessary to achieve the goal. Making the ack notifiers lockless removes one
> > lock from the EOI path.
>
> But this is done with RCU patches, right?
>
No, not right. The RCU patch alone is meaningless, since irq injection is
still done under irq_lock. Look at the whole series, not at each patch by
itself.
--
Gleb.
^ permalink raw reply [flat|nested] 34+ messages in thread