xen-devel.lists.xenproject.org archive mirror
* [PATCH for-4.5 v3] x86/hvm: remove stray lock release from hvm_ioreq_server_init()
From: Vitaly Kuznetsov @ 2014-09-26 14:20 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Jones, Paul Durrant, Ian Campbell, Jan Beulich

If the HVM_PARAM_IOREQ_PFN, HVM_PARAM_BUFIOREQ_PFN, or
HVM_PARAM_BUFIOREQ_EVTCHN parameter is read while the guest domain is
dying, the following ASSERT is hit:

(XEN) Assertion '_raw_spin_is_locked(lock)' failed at ...workspace/KERNEL/xen/xen/include/asm/spinlock.h:18
(XEN) ----[ Xen-4.5-unstable  x86_64  debug=y  Not tainted ]----
...
(XEN) Xen call trace:
(XEN)    [<ffff82d08012b07f>] _spin_unlock+0x27/0x30
(XEN)    [<ffff82d0801b6103>] hvm_create_ioreq_server+0x3df/0x49a
(XEN)    [<ffff82d0801bcceb>] do_hvm_op+0x12bf/0x27a0
(XEN)    [<ffff82d08022b9bb>] syscall_enter+0xeb/0x145

The root cause is that ioreq_server.lock is released twice: first in
hvm_ioreq_server_init() and then in hvm_create_ioreq_server(). Drop the
lock release from hvm_ioreq_server_init(), which never takes the lock,
and do some minor label cleanup.
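
To make the broken contract concrete, here is a minimal, compilable C
sketch of the intended locking discipline (hypothetical names, with a
pthread mutex standing in for the Xen spinlock; not the actual Xen code):

    /* The caller owns the lock for the whole critical section; the
     * callee must neither take nor release it.  The bug was the
     * callee's error path doing the caller's unlock, so the caller's
     * own unlock became a release of a lock no longer held. */
    #include <pthread.h>

    struct domain {
        pthread_mutex_t lock;
        int dying;
    };

    /* Called with d->lock already held by the caller. */
    static int server_init(struct domain *d)
    {
        if ( d->dying )
            return -1;  /* just return; unlocking here would be the bug */
        return 0;
    }

    static int create_server(struct domain *d)
    {
        int rc;

        pthread_mutex_lock(&d->lock);    /* single acquisition ... */
        rc = server_init(d);
        pthread_mutex_unlock(&d->lock);  /* ... single release     */
        return rc;
    }

    int main(void)
    {
        struct domain d = { PTHREAD_MUTEX_INITIALIZER, 0 };
        return create_server(&d) ? 1 : 0;
    }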

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
Changes from v1:
- Instead of protecting against creating an ioreq server while the guest
  domain is dying, remove the stray ioreq_server.lock release from
  hvm_ioreq_server_init(). Rename the patch accordingly.
  [Paul Durrant]

Changes from v2:
- Clean up labels in hvm_ioreq_server_init(), shorten the patch name
  [Jan Beulich]
---
 xen/arch/x86/hvm/hvm.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 0a20cbe..94c58e1 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -970,30 +970,28 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, struct domain *d,
 
     rc = hvm_ioreq_server_alloc_rangesets(s, is_default);
     if ( rc )
-        goto fail1;
+        return rc;
 
     rc = hvm_ioreq_server_map_pages(s, is_default, handle_bufioreq);
     if ( rc )
-        goto fail2;
+        goto fail_map;
 
     for_each_vcpu ( d, v )
     {
         rc = hvm_ioreq_server_add_vcpu(s, is_default, v);
         if ( rc )
-            goto fail3;
+            goto fail_add;
     }
 
     return 0;
 
- fail3:
+ fail_add:
     hvm_ioreq_server_remove_all_vcpus(s);
     hvm_ioreq_server_unmap_pages(s, is_default);
 
- fail2:
+ fail_map:
     hvm_ioreq_server_free_rangesets(s, is_default);
 
- fail1:
-    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
     return rc;
 }
 
-- 
1.9.3
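
For reference, the descriptive-label unwinding idiom the cleanup adopts,
as a self-contained sketch (all helpers are hypothetical stand-ins for
the hvm_ioreq_server_* functions): each label names the step that
failed and tears down only what was set up before it, and the first
step needs no label at all since there is nothing yet to undo.

    struct thing { int rangesets, mapped, vcpus; };

    static int  alloc_rangesets(struct thing *t)  { t->rangesets = 1; return 0; }
    static void free_rangesets(struct thing *t)   { t->rangesets = 0; }
    static int  map_pages(struct thing *t)        { t->mapped = 1; return 0; }
    static void unmap_pages(struct thing *t)      { t->mapped = 0; }
    static int  add_vcpus(struct thing *t)        { t->vcpus = 1; return 0; }
    static void remove_all_vcpus(struct thing *t) { t->vcpus = 0; }

    static int setup(struct thing *t)
    {
        int rc;

        rc = alloc_rangesets(t);
        if ( rc )
            return rc;          /* nothing to undo, plain return */

        rc = map_pages(t);
        if ( rc )
            goto fail_map;      /* undo the rangesets only */

        rc = add_vcpus(t);
        if ( rc )
            goto fail_add;      /* undo vcpus + mapping, fall through */

        return 0;

     fail_add:
        remove_all_vcpus(t);
        unmap_pages(t);
     fail_map:
        free_rangesets(t);
        return rc;
    }

    int main(void)
    {
        struct thing t = { 0, 0, 0 };
        return setup(&t);
    }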


* Re: [PATCH for-4.5 v3] x86/hvm: remove stray lock release from hvm_ioreq_server_init()
From: Paul Durrant @ 2014-09-26 14:43 UTC (permalink / raw)
  To: Vitaly Kuznetsov, xen-devel@lists.xenproject.org
  Cc: Andrew Jones, Ian Campbell, Jan Beulich

> -----Original Message-----
> From: Vitaly Kuznetsov [mailto:vkuznets@redhat.com]
> Sent: 26 September 2014 15:21
> To: xen-devel@lists.xenproject.org
> Cc: Paul Durrant; Ian Campbell; Jan Beulich; Andrew Jones
> Subject: [PATCH for-4.5 v3] x86/hvm: remove stray lock release from
> hvm_ioreq_server_init()
> 
> If the HVM_PARAM_IOREQ_PFN, HVM_PARAM_BUFIOREQ_PFN, or
> HVM_PARAM_BUFIOREQ_EVTCHN parameter is read while the guest domain is
> dying, the following ASSERT is hit:
> 
> (XEN) Assertion '_raw_spin_is_locked(lock)' failed at
> ...workspace/KERNEL/xen/xen/include/asm/spinlock.h:18
> (XEN) ----[ Xen-4.5-unstable  x86_64  debug=y  Not tainted ]----
> ...
> (XEN) Xen call trace:
> (XEN)    [<ffff82d08012b07f>] _spin_unlock+0x27/0x30
> (XEN)    [<ffff82d0801b6103>] hvm_create_ioreq_server+0x3df/0x49a
> (XEN)    [<ffff82d0801bcceb>] do_hvm_op+0x12bf/0x27a0
> (XEN)    [<ffff82d08022b9bb>] syscall_enter+0xeb/0x145
> 
> The root cause is that ioreq_server.lock is released twice: first in
> hvm_ioreq_server_init() and then in hvm_create_ioreq_server(). Drop the
> lock release from hvm_ioreq_server_init(), which never takes the lock,
> and do some minor label cleanup.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>

Looks good to me.

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> [snip]

* Re: [PATCH for-4.5 v3] x86/hvm: remove stray lock release from hvm_ioreq_server_init()
From: Konrad Rzeszutek Wilk @ 2014-09-26 17:35 UTC (permalink / raw)
  To: Paul Durrant
  Cc: xen-devel@lists.xenproject.org, Vitaly Kuznetsov, Andrew Jones,
	Jan Beulich, Ian Campbell

On Fri, Sep 26, 2014 at 02:43:38PM +0000, Paul Durrant wrote:
> > -----Original Message-----
> > From: Vitaly Kuznetsov [mailto:vkuznets@redhat.com]
> > Sent: 26 September 2014 15:21
> > To: xen-devel@lists.xenproject.org
> > Cc: Paul Durrant; Ian Campbell; Jan Beulich; Andrew Jones
> > Subject: [PATCH for-4.5 v3] x86/hvm: remove stray lock release from
> > hvm_ioreq_server_init()
> > 
> > [snip]
> 
> Looks good to me.
> 
> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

And Release-Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
