Date: Fri, 9 Aug 2013 17:03:54 +0200
From: Stefan Hajnoczi
To: Andrei Mikhailovsky
Cc: Josh Durgin, ceph-users@lists.ceph.com, Oliver Francke, Mike Dawson,
 qemu-devel@nongnu.org
Message-ID: <20130809150353.GA9270@stefanha-thinkpad.redhat.com>
References: <5204B4B8.3080302@filoo.de>
 <13653691.7559.1376057121351.JavaMail.andrei@finka>
In-Reply-To: <13653691.7559.1376057121351.JavaMail.andrei@finka>
Subject: Re: [Qemu-devel] [ceph-users] qemu-1.4.0 and onwards, linux kernel
 3.2.x, ceph-RBD, heavy I/O leads to kernel_hung_tasks_timout_secs message
 and unresponsive qemu-process, [Bug 1207686]

On Fri, Aug 09, 2013 at 03:05:22PM +0100, Andrei Mikhailovsky wrote:
> I can confirm that I am having similar issues with ubuntu vm guests using
> fio with bs=4k direct=1 numjobs=4 iodepth=16. Occasionally i see hang
> tasks, occasionally guest vm stops responding without leaving anything in
> the logs and sometimes i see kernel panic on the console. I typically
> leave the runtime of the fio test for 60 minutes and it tends to stop
> responding after about 10-30 mins.
>
> I am on ubuntu 12.04 with 3.5 kernel backport and using ceph 0.61.7 with
> qemu 1.5.0 and libvirt 1.0.2

Josh,

In addition to the Ceph logs, you can also use QEMU tracing with the
following events enabled:

  virtio_blk_handle_write
  virtio_blk_handle_read
  virtio_blk_rw_complete

See docs/tracing.txt for details on usage; a rough example is sketched in
the P.S. at the end of this message.

Inspecting the trace output lets you observe I/O request submission and
completion from the virtio-blk device's perspective, so you can see whether
some requests are submitted but never completed.

This bug looks like a corner case or race condition, since most requests
complete just fine. The problem is that the virtio-blk device eventually
becomes unusable once it runs out of descriptors (it has 128), and even
before that limit is reached the guest may become unusable because of the
hung I/O requests.

Stefan
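
P.S. A minimal sketch of enabling those events, assuming QEMU was built
with the simple trace backend (./configure --enable-trace-backend=simple)
and is run from the source tree; the paths and the exact invocation are
illustrative, not a tested recipe:

  # List the trace events to enable, one event name per line
  printf '%s\n' virtio_blk_handle_write virtio_blk_handle_read \
      virtio_blk_rw_complete > /tmp/events

  # Start the guest with those events enabled; replace [...] with your
  # usual QEMU options (drive, memory, networking, etc.)
  qemu-system-x86_64 -trace events=/tmp/events [...]

  # After reproducing the hang, pretty-print the binary trace file that
  # QEMU wrote as trace-<pid> in its working directory
  scripts/simpletrace.py trace-events trace-<pid>

Matching virtio_blk_handle_read/virtio_blk_handle_write entries against
virtio_blk_rw_complete entries in the output should show whether any
requests are submitted but never complete.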