From mboxrd@z Thu Jan 1 00:00:00 1970
From: Hans de Bruin
Subject: Re: just a dump
Date: Wed, 27 May 2009 09:43:01 +0200
Message-ID: <4A1CEF05.6010203@xs4all.nl>
References: <4A09E620.3040300@xs4all.nl> <4A09F62A.8010203@xs4all.nl> <20090515144923.GA6304@amt.cnet> <4A0E7B81.6070203@xs4all.nl> <1242913886.12727.5.camel@localhost.localdomain>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Marcelo Tosatti , kvm@vger.kernel.org
To: Lucas Meneghel Rodrigues
Return-path: 
Received: from smtp-cloud1.xs4all.nl ([194.109.24.61]:24026 "EHLO smtp-cloud1.xs4all.nl" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1764198AbZE0Hla (ORCPT ); Wed, 27 May 2009 03:41:30 -0400
In-Reply-To: <1242913886.12727.5.camel@localhost.localdomain>
Sender: kvm-owner@vger.kernel.org
List-ID: 

Lucas Meneghel Rodrigues wrote:
> On Sat, 2009-05-16 at 10:38 +0200, Hans de Bruin wrote:
>> Marcelo Tosatti wrote:
>>> On Wed, May 13, 2009 at 12:20:26AM +0200, Hans de Bruin wrote:
>>>> Hans de Bruin wrote:
>>>>> Starting two VMs simultaneously ends in a crash.
>>>>>
>>>>> linux 30-rc5
>>>>> kvm-qemu kvm-85-378-g143eb2b
>>>>> proc AMD dualcore
>>>>>
>>>>> vm's like:
>>>>>
>>>>> #!/bin/sh
>>>>> n=10
>>>>> cdrom=/iso/server2008x64.iso
>>>>> drive=file=/kvm/disks/vm$n
>>>>> mem=1024
>>>>> cpu=qemu64
>>>>> vga=std
>>>>> mac=52:54:00:12:34:$n
>>>>> bridge=br1
>>>>>
>>>>> qemu-system-x86_64 -cdrom $cdrom -drive $drive -m $mem -cpu $cpu \
>>>>>   -vga $vga -net nic,macaddr=$mac -net tap,script=/etc/qemu/$bridge
>>>>>
>>>>>
>>>> another dmesg:
>>>
>>> Hans,
>>>
>>> The oopses below point to the possibility of a hardware problem,
>>> similar to:
>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=480779
>>>
>>> Can you please rule it out with memtest86?
>>
>> I ran memtest for 11 hours and it completed 4.7 passes with no
>> problems. But then memtest only exercises CPU and memory interaction.
>> If the problem is disk related, there is also disk/chipset/DMA and
>> memory interaction. I could degrade my system by turning DMA off for
>> disk I/O, or I could have a closer look at kvm-autotest.
>
> Hans:
>
> There is a memory test designed to stress disk/chipset/DMA interaction.
> I made an implementation of it on autotest; the test module is called
> dma_memtest:
>
> http://autotest.kernel.org/browser/trunk/client/tests/dma_memtest/dma_memtest.py

[09:09:47 INFO ] Test finished after 1 iterations. Memory test passed.
[09:09:48 DEBUG] Running 'grep MemTotal /proc/meminfo'
[09:09:48 DEBUG] Running 'rpm -qa'
[09:09:49 INFO ] GOOD dma_memtest dma_memtest timestamp=1243408189 localtime=May 27 09:09:49 completed successfully
[09:09:49 DEBUG] Persistent state variable __group_level now set to 1
[09:09:49 INFO ] END GOOD dma_memtest dma_memtest timestamp=1243408189 localtime=May 27 09:09:49
[09:09:49 DEBUG] Dropping caches
[09:09:49 DEBUG] Running 'sync'
[09:09:51 DEBUG] Running 'sync'
[09:09:51 DEBUG] Running 'echo 3 > /proc/sys/vm/drop_caches'
[09:09:52 DEBUG] Persistent state variable __group_level now set to 0
[09:09:52 INFO ] END GOOD ---- ---- timestamp=1243408192 localtime=May 27 09:09:52

Well, that looks good. The web page talks about forcing the system to
swap, but that never happened: swap usage is still 0 bytes. I installed
autotest on the same LV (a 3-disk stripe) as the VM disks.

-- 
Hans
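For readers following the thread: the disk/chipset/DMA-vs-memory idea can be sketched with a few lines of shell. This is only a minimal illustration in the spirit of dma_memtest, not the actual autotest implementation; the work directory, file size, and copy count are arbitrary values chosen here. It writes a reference file through the disk, copies it several times in parallel to keep DMA busy, and compares checksums — mismatches here, combined with a clean memtest86 run, would point at the disk/DMA path.

```shell
#!/bin/sh
# Minimal disk/DMA stress sketch (illustrative only, not dma_memtest).
# WORKDIR and SIZE_MB are made-up defaults for this example.
WORKDIR=${WORKDIR:-/tmp/dmatest}
SIZE_MB=${SIZE_MB:-16}
mkdir -p "$WORKDIR"

# Create a reference file and record its checksum.
dd if=/dev/urandom of="$WORKDIR/ref" bs=1M count="$SIZE_MB" 2>/dev/null
ref_sum=$(md5sum "$WORKDIR/ref" | cut -d' ' -f1)

# Copy it several times in parallel to keep the disk/DMA path busy.
for i in 1 2 3 4; do
    cp "$WORKDIR/ref" "$WORKDIR/copy$i" &
done
wait
sync

# Verify every copy against the reference checksum.
rc=0
for i in 1 2 3 4; do
    sum=$(md5sum "$WORKDIR/copy$i" | cut -d' ' -f1)
    if [ "$sum" != "$ref_sum" ]; then
        echo "MISMATCH on copy$i"
        rc=1
    fi
done
if [ "$rc" -eq 0 ]; then echo "dma stress pass"; else echo "dma stress FAIL"; fi
rm -rf "$WORKDIR"
```

Running it in a loop for a few hours on the same LV as the VM disks would approximate the workload Hans describes; the real dma_memtest additionally tries to force the box into swap.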