From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kevin Wolf
Subject: Re: [PATCH] KVM-test: Add a ENOSPC subtest
Date: Fri, 14 Jan 2011 09:27:54 +0100
Message-ID: <4D30090A.2070908@redhat.com>
References: <794340721.80981.1294982903006.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Cc: Lucas Meneghel Rodrigues, autotest@test.kernel.org, kvm@vger.kernel.org
To: Amos Kong
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:32078 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751439Ab1ANI0c (ORCPT ); Fri, 14 Jan 2011 03:26:32 -0500
In-Reply-To: <794340721.80981.1294982903006.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 14.01.2011 06:28, Amos Kong wrote:
> ----- Original Message -----
>> From: Amos Kong
>>
>> A KVM guest always pauses on an ENOSPC error; this test
>> repeatedly extends the guest disk space and resumes the
>> guest from the paused state.
>>
>> Changes from v2:
>> - Oops! Forgot to update tests_base.cfg.sample
>>
>> Changes from v1:
>> - Use the most current KVM test API
>> - Use the autotest API for external command execution
>> - Instead of chaining multiple shell commands as pre and
>>   post commands, create proper pre and post scripts for the
>>   test, as it is easier to figure out problems
>> - Instead of hardcoding /dev/loop0 by default, find the
>>   first available loop device and use it.
>>
>> Signed-off-by: Amos Kong
>
> Thank you, Lucas.
> I've retested this patch; it's OK.
>
> BTW, hi Kevin, when I checked the images during this test I got the
> following output. Is it harmful?
>
> Leaked cluster 60796 refcount=1 reference=0
> Leaked cluster 60797 refcount=1 reference=0
> Leaked cluster 60798 refcount=1 reference=0
> Leaked cluster 60799 refcount=1 reference=0
> Leaked cluster 60800 refcount=1 reference=0
> Leaked cluster 60801 refcount=1 reference=0
> Leaked cluster 60802 refcount=1 reference=0
> Leaked cluster 60803 refcount=1 reference=0
> Leaked cluster 60804 refcount=1 reference=0
> Leaked cluster 60805 refcount=1 reference=0
> Leaked cluster 60806 refcount=1 reference=0
> Leaked cluster 60807 refcount=1 reference=0
> Leaked cluster 63982 refcount=1 reference=0
> Leaked cluster 63983 refcount=1 reference=0
> Leaked cluster 63984 refcount=1 reference=0
> Leaked cluster 63985 refcount=1 reference=0
> Leaked cluster 63986 refcount=1 reference=0
> Leaked cluster 63987 refcount=1 reference=0
> Leaked cluster 63988 refcount=1 reference=0
> Leaked cluster 63989 refcount=1 reference=0
> Leaked cluster 63990 refcount=1 reference=0
> Leaked cluster 63991 refcount=1 reference=0
> Leaked cluster 63992 refcount=1 reference=0
> Leaked cluster 63993 refcount=1 reference=0
> Leaked cluster 63994 refcount=1 reference=0
> Leaked cluster 63995 refcount=1 reference=0
> Leaked cluster 63996 refcount=1 reference=0
> Leaked cluster 63997 refcount=1 reference=0
> Leaked cluster 63998 refcount=1 reference=0
> Leaked cluster 63999 refcount=1 reference=0
>
> 867 leaked clusters were found on the image.
> This means waste of disk space, but no harm to data.

I suppose the last two lines of the output answer your question. ;-)

With I/O errors or qemu/host crashes, cluster leaks are fully expected.

Kevin
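[For readers following along: the output quoted above comes from `qemu-img check`, which scans a qcow2 image's refcounts offline. Newer qemu-img versions can also reclaim leaked clusters with `-r leaks`. A minimal sketch, using a hypothetical throwaway image path /tmp/enospc.qcow2:]

```shell
# Create a throwaway qcow2 image (hypothetical path, for illustration only)
qemu-img create -f qcow2 /tmp/enospc.qcow2 1G

# Read-only consistency check: reports leaked clusters, refcount errors, etc.
qemu-img check /tmp/enospc.qcow2

# Reclaim leaked clusters (newer qemu-img); run only while no guest is
# using the image -- a leak wastes space but, as noted above, harms no data.
qemu-img check -r leaks /tmp/enospc.qcow2
```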