From: Vitaly Kuznetsov <vkuznets@redhat.com>
Subject: Re: [PATCH for-4.5] x86/hvm: do not create ioreq server if guest domain is dying
Date: Fri, 26 Sep 2014 15:08:44 +0200
Message-ID: <877g0q5y1v.fsf@vitty.brq.redhat.com>
References: <1411734159-25136-1-git-send-email-vkuznets@redhat.com>
 <9AAE0902D5BC7E449B7C8E4E778ABCD0493411@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0493411@AMSPEX01CL01.citrite.net>
 (Paul Durrant's message of "Fri, 26 Sep 2014 12:29:54 +0000")
To: Paul Durrant
Cc: xen-devel@lists.xenproject.org, Andrew Jones, Ian Campbell, Jan Beulich
List-Id: xen-devel@lists.xenproject.org

Paul Durrant writes:

>> -----Original Message-----
>> From: Vitaly Kuznetsov [mailto:vkuznets@redhat.com]
>> Sent: 26 September 2014 13:23
>> To: xen-devel@lists.xenproject.org
>> Cc: Paul Durrant; Ian Campbell; Jan Beulich; Andrew Jones
>> Subject: [PATCH for-4.5] x86/hvm: do not create ioreq server if guest
>> domain is dying
>>
>> If the HVM_PARAM_IOREQ_PFN, HVM_PARAM_BUFIOREQ_PFN, or
>> HVM_PARAM_BUFIOREQ_EVTCHN parameters are read while the guest domain
>> is dying, the following ASSERT triggers:
>>
>> (XEN) Assertion '_raw_spin_is_locked(lock)' failed at
>> ...workspace/KERNEL/xen/xen/include/asm/spinlock.h:18
>> (XEN) ----[ Xen-4.5-unstable  x86_64  debug=y  Not tainted ]----
>> ...
>> (XEN) Xen call trace:
>> (XEN)    [] _spin_unlock+0x27/0x30
>> (XEN)    [] hvm_create_ioreq_server+0x3df/0x49a
>> (XEN)    [] do_hvm_op+0x12bf/0x27a0
>> (XEN)    [] syscall_enter+0xeb/0x145
>>
>> It doesn't make sense (and is unsafe) to create an ioreq server if the
>> domain is dying. Make hvm_create_ioreq_server() fail with -EFAULT in
>> this case.
>>
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>> ---
>>  xen/arch/x86/hvm/hvm.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> index 0a20cbe..2cc6de7 100644
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -1038,6 +1038,9 @@ static int hvm_create_ioreq_server(struct domain
>> *d, domid_t domid,
>>      struct hvm_ioreq_server *s;
>>      int rc;
>>
>> +    if ( d->is_dying )
>> +        return -EFAULT;
>> +
>
> Whilst I agree that it makes no sense to be creating an ioreq server in
> this case, this patch is not an actual fix for the bug.
>
> The bug AFAICT is actually a stray spin_unlock() in the fail1 error case
> in hvm_ioreq_server_init().
>

Ah, yeah, you're right, hvm_ioreq_server_init() is the root cause. I'll
test and repost. Thanks!

> Paul
>
>>      rc = -ENOMEM;
>>      s = xzalloc(struct hvm_ioreq_server);
>>      if ( !s )
>> --
>> 1.9.3

-- 
  Vitaly
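
For readers following along, here is a minimal, self-contained sketch of the
bug class Paul describes: an error label that releases a lock it never took,
which is exactly what the ASSERT('_raw_spin_is_locked(lock)') check inside
_spin_unlock() catches in debug builds, per the splat quoted above. The
function and helper names below (toy_spinlock_t, server_init, fail_early)
are made up for illustration; this is not the real hvm_ioreq_server_init().

    /* Illustrative only: mimics the shape of the error handling described
     * in the thread, not the actual Xen code. */
    #include <assert.h>
    #include <stdio.h>

    typedef struct { int locked; } toy_spinlock_t;

    static void spin_lock(toy_spinlock_t *l)
    {
        l->locked = 1;
    }

    static void spin_unlock(toy_spinlock_t *l)
    {
        assert(l->locked);   /* stands in for ASSERT(_raw_spin_is_locked(lock)) */
        l->locked = 0;
    }

    static int server_init(toy_spinlock_t *lock, int fail_early)
    {
        int rc;

        rc = fail_early ? -1 : 0;   /* stands in for an early allocation failing */
        if ( rc )
            goto fail1;             /* the lock has NOT been taken on this path */

        spin_lock(lock);
        /* ... map pages, add vcpus, etc. ... */
        spin_unlock(lock);

        return 0;

     fail1:
        spin_unlock(lock);          /* stray unlock: the fail1 path never
                                     * acquired the lock, so debug builds
                                     * assert here */
        return rc;
    }

    int main(void)
    {
        toy_spinlock_t lock = { 0 };

        printf("success path: %d\n", server_init(&lock, 0));
        printf("failure path follows; the assert fires here:\n");
        printf("%d\n", server_init(&lock, 1));   /* aborts on the assert */
        return 0;
    }

Dropping the unlock from the path that never took the lock (or acquiring the
lock before the first failure exit) balances the error path again; the exact
change for Xen is the follow-up patch Vitaly says he will test and repost.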