From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ian Murray
Subject: Re: Dom 0 crash
Date: Tue, 05 Nov 2013 22:29:49 +0000
Message-ID: <5279715D.3010804@yahoo.co.uk>
References: <1383652733.29798.YahooMailNeo@web171306.mail.ir2.yahoo.com>
 <5278FDA202000078000FF872@nat28.tlf.novell.com>
In-Reply-To: <5278FDA202000078000FF872@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Content-Transfer-Encoding: 7bit
To: Jan Beulich
Cc: xen-devel
List-Id: xen-devel@lists.xenproject.org

On 05/11/13 13:16, Jan Beulich wrote:
>>>> On 05.11.13 at 12:58, Ian Murray wrote:
>> I have a recurring crash using Xen 4.3.1-RC2 and Ubuntu 12.04 as Dom0
>> (3.2.0-55-generic). I have software RAID 5 with LVM on top. The DomU
>> (also Ubuntu 12.04 with the 3.2.0-55 kernel) has a dedicated logical
>> volume, which is backed up by shutting down the DomU, creating an LVM
>> snapshot, restarting the DomU, and then dd'ing the snapshot to another
>> logical volume. The snapshot is then removed and the second LV is
>> dd'ed to gzip and onto DAT tape.
>>
>> I currently have this running every hour (unless it's already running)
>> for testing purposes. After 6-12 runs of this, the Dom0 kernel crashes
>> with the output below.
>>
>> When I perform this after booting into the same kernel standalone, the
>> problem does not occur.
> Likely because the action that triggers this doesn't get performed
> in that case?

Thanks for the response. I am obviously comparing apples and oranges, but 
I have tried to keep things as similar as possible, in as much as I have 
limited the kernel memory to 512M, as I do with Dom0, and have used a 
background task writing /dev/urandom to the LV that the domU would 
normally be using. The only differences are that it isn't running under 
Xen and I don't have a domU running in the background. I will repeat the 
exercise with no domU running, but under Xen.
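
For what it's worth, the standalone load amounts to roughly the following 
(the volume group and LV names here are placeholders, not my exact ones):

    # Boot the same 3.2.0-55 kernel bare-metal with mem=512M on the
    # kernel command line, then keep the LV under constant write load,
    # much as the domU's disk otherwise would be:
    while true; do
        dd if=/dev/urandom of=/dev/vg0/domu-disk bs=1M count=512
    done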
>> Can anyone please suggest what I am doing wrong or identify if it is
>> a bug?
> Considering that exception address ...
>
>> RIP: e030:[] [] scsi_dispatch_cmd+0x6d/0x2e0
> ... and call stack ...
>
>> [24149.786311] Call Trace:
>> [24149.786315]
>> [24149.786323] [] scsi_request_fn+0x3a2/0x470
>> [24149.786333] [] blk_run_queue+0x38/0x60
>> [24149.786339] [] scsi_run_queue+0xd6/0x1b0
>> [24149.786347] [] scsi_next_command+0x42/0x60
>> [24149.786354] [] scsi_io_completion+0x1b2/0x630
>> [24149.786363] [] ? _raw_spin_unlock_irqrestore+0x1e/0x30
>> [24149.786371] [] scsi_finish_command+0xcc/0x130
>> [24149.786378] [] scsi_softirq_done+0x13e/0x150
>> [24149.786386] [] blk_done_softirq+0x83/0xa0
>> [24149.786394] [] __do_softirq+0xa8/0x210
>> [24149.786402] [] call_softirq+0x1c/0x30
>> [24149.786410] [] do_softirq+0x65/0xa0
>> [24149.786416] [] irq_exit+0x8e/0xb0
>> [24149.786428] [] xen_evtchn_do_upcall+0x35/0x50
>> [24149.786436] [] xen_do_hypervisor_callback+0x1e/0x30
>> [24149.786441]
>> [24149.786449] [] ? hypercall_page+0x3aa/0x1000
>> [24149.786456] [] ? hypercall_page+0x3aa/0x1000
>> [24149.786464] [] ? xen_safe_halt+0x10/0x20
>> [24149.786472] [] ? default_idle+0x53/0x1d0
>> [24149.786478] [] ? cpu_idle+0xd6/0x120
> ... point into the SCSI subsystem, this is likely the wrong list to
> ask for help on.

... but the right list to confirm that I am on the wrong list? :)

Seriously, the specific evidence may suggest it's a non-Xen issue/bug, 
but Xen is the only measurable/visible difference so far. I referred it 
to this list because the demarcation between hypervisor, PVOPS and 
regular kernel code interaction is likely best understood here.

Thanks again for your response.

>
> Jan
>
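
P.S. For reference, the hourly backup cycle boils down to roughly the 
following sequence (the domain name, LV names, snapshot size and tape 
device here are placeholders; the real script differs in detail):

    xl shutdown -w backup                # shut the domU down cleanly
    lvcreate -s -L 4G -n domu-snap /dev/vg0/domu-disk   # snapshot its LV
    xl create /etc/xen/backup.cfg        # restart the domU immediately
    dd if=/dev/vg0/domu-snap of=/dev/vg0/staging bs=1M  # copy snapshot aside
    lvremove -f /dev/vg0/domu-snap       # drop the snapshot
    dd if=/dev/vg0/staging bs=1M | gzip > /dev/st0      # second LV to DAT tape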