* Stable kvm version ?
@ 2008-07-04 13:22 Slohm Gadaburi
2008-07-05 1:57 ` David Mair
0 siblings, 1 reply; 5+ messages in thread
From: Slohm Gadaburi @ 2008-07-04 13:22 UTC (permalink / raw)
To: kvm
Hi all,
I found out I can't use Ubuntu's kvm package because it doesn't
support vm snapshots.
I am going to use vanilla kvm and was wondering which version you
would recommend (my biggest concern is stability)?
Thank you!
Slohm
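[For reference: vanilla kvm's VM snapshots work through the qemu monitor and require a qcow2 disk image; a minimal sketch, with illustrative paths and snapshot names:]

```shell
# qcow2 is required -- raw images have nowhere to store snapshot state.
qemu-img create -f qcow2 /var/lib/kvm/guest.qcow2 20G

# With the guest running, switch to the qemu monitor and use:
#   (qemu) savevm clean-install    # take a named snapshot
#   (qemu) loadvm clean-install    # roll back to it later

# Snapshots can also be listed while the guest is down:
qemu-img snapshot -l /var/lib/kvm/guest.qcow2
```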
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: Stable kvm version ?
2008-07-04 13:22 Stable kvm version ? Slohm Gadaburi
@ 2008-07-05 1:57 ` David Mair
2008-07-07 19:05 ` Freddie Cash
0 siblings, 1 reply; 5+ messages in thread
From: David Mair @ 2008-07-05 1:57 UTC (permalink / raw)
Cc: kvm
Slohm Gadaburi wrote:
> Hi all,
>
> I found out I can't use Ubuntu's kvm package because it doesn't
> support vm snapshots.
>
> I am going to use a vanilla kvm and was wondering which version do you
> recommend me to use
> (my biggest concern is stability) ?
I have no stability problems with a mix of Windows and Linux guests
using kvm-70 on an x86_64 kernel 2.6.22.18. I've had one Linux guest up
all of the past week while testing something. YMMV.
---
David.
* Re: Stable kvm version ?
2008-07-05 1:57 ` David Mair
@ 2008-07-07 19:05 ` Freddie Cash
2008-07-09 0:16 ` David S. Ahern
0 siblings, 1 reply; 5+ messages in thread
From: Freddie Cash @ 2008-07-07 19:05 UTC (permalink / raw)
To: kvm
On Fri, Jul 4, 2008 at 6:57 PM, David Mair <dmair@mair-family.org> wrote:
> Slohm Gadaburi wrote:
>> I found out I can't use Ubuntu's kvm package because it doesn't
>> support vm snapshots.
>>
>> I am going to use a vanilla kvm and was wondering which version do you
>> recommend me to use
>> (my biggest concern is stability) ?
>
> I have no stability problems with a mix of Windows and Linux guests using
> kvm-70 on a x86_64 kernel 2.6.22.18. I've had one Linux guest up all of the
> past week while testing something. YMMV.
I have no stability issues with kvm-69 on 64-bit Debian Lenny with
kernel 2.6.24, using the kvm-amd module from the kernel package, when
using the rtl8139 NIC.
I can lock up any of my VMs when using the e1000 NIC and doing massive
data transfers (rsync, scp, wget), in Debian (Etch/Lenny), Windows XP
(SP2/SP3), or FreeBSD (6.3/7.0) guests. The same happens when using the
virtio NIC or block drivers in Debian Lenny guests. I haven't tracked
down what causes the problem, or how to reliably reproduce it
(sometimes it hits right away, sometimes it's fine for a week), which
is why I haven't posted any bug reports on it as yet.
For now, all my VMs are using emulated NICs and block devices.
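[The emulated-vs-paravirtual choice is a one-flag change on the kvm command line; a sketch, with illustrative image and tap names:]

```shell
# Emulated Realtek NIC -- slower, but stable in the reports above:
kvm -m 512 -net nic,model=rtl8139 -net tap,ifname=tap0 disk.img

# Emulated Intel NIC -- faster, but implicated in the lockups:
kvm -m 512 -net nic,model=e1000 -net tap,ifname=tap0 disk.img

# Paravirtual NIC -- fastest, needs virtio drivers in the guest:
kvm -m 512 -net nic,model=virtio -net tap,ifname=tap0 disk.img
```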
--
Freddie Cash
fjwcash@gmail.com
* Re: Stable kvm version ?
2008-07-07 19:05 ` Freddie Cash
@ 2008-07-09 0:16 ` David S. Ahern
2008-07-09 17:51 ` Freddie Cash
0 siblings, 1 reply; 5+ messages in thread
From: David S. Ahern @ 2008-07-09 0:16 UTC (permalink / raw)
To: Freddie Cash; +Cc: kvm
[-- Attachment #1: Type: text/plain, Size: 2340 bytes --]
There's an open bug for the network lockups -- see
http://sourceforge.net/tracker/index.php?func=detail&aid=1802082&group_id=180599&atid=893831
Based on my testing I've found that the e1000 has the lowest overhead
(i.e., the lowest irq and softirq times in the guest). I have not seen any
lockups with the network using the e1000 nic, and a couple of months ago
I was able to run a reasonably intensive network load continuously for
several days.
However, the duration tests I've run were with a modified BIOS. Months
ago when I was digging into the network lockups I was comparing
interrupt allocations to a DL320G3 running a RHEL3/4 load natively. I
noticed no interrupts were shared on bare hardware, while in my RHEL3/4
based kvm guests I was seeing interrupt sharing. So, I patched the BIOS
(see attached) to get a different IRQ assignment.
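[Shared interrupts are easy to spot from inside a guest: any line in /proc/interrupts naming two or more comma-separated devices is shared. A small helper, taking an optional file argument so it can be checked against a sample:]

```shell
# List shared IRQs: lines in /proc/interrupts (or a sample file passed
# as $1) that name more than one device, comma-separated.
shared_irqs() {
    grep ',' "${1:-/proc/interrupts}"
}
```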
I have not had time to do the due diligence to see if the stability was
due to kvm updates or my bios change. If you have the time I'd be
interested in knowing how the bios change works for you -- if you still
see lockups.
david
Freddie Cash wrote:
> On Fri, Jul 4, 2008 at 6:57 PM, David Mair <dmair@mair-family.org> wrote:
>> Slohm Gadaburi wrote:
>>> I found out I can't use Ubuntu's kvm package because it doesn't
>>> support vm snapshots.
>>>
>>> I am going to use a vanilla kvm and was wondering which version do you
>>> recommend me to use
>>> (my biggest concern is stability) ?
>> I have no stability problems with a mix of Windows and Linux guests using
>> kvm-70 on a x86_64 kernel 2.6.22.18. I've had one Linux guest up all of the
>> past week while testing something. YMMV.
>
> I have no stability issues with kvm-69 on 64-bit Debian Lenny with
> kernel 2.6.24, using the kvm-amd module from the kernel package, when
> using the rtl8139 NIC.
>
> I can lock up any of my VMs when using the e1000 NIC and doing massive
> data transfers (rsync, scp, wget), in Debian (Etch/Lenny), Windows XP
> (SP2/SP3), or FreeBSD (6.3/7.0) guests. And also when using the
> virtio NIC or block drivers in Debian Lenny guests. Haven't tracked
> down what causes the problem, or how to reliably cause it to happen
> (sometimes right away, sometimes it's fine for a week), which is why I
> haven't posted any bug reports on it as yet.
>
> For now, all my VMs are using emulated NICs and block devices.
[-- Attachment #2: pci_irq.patch --]
[-- Type: text/x-patch, Size: 768 bytes --]
--- bios/rombios32.c.orig	2008-06-17 07:36:35.000000000 -0600
+++ bios/rombios32.c	2008-06-17 07:37:02.000000000 -0600
@@ -619,21 +619,21 @@
 typedef struct PCIDevice {
     int bus;
     int devfn;
 } PCIDevice;
 
 static uint32_t pci_bios_io_addr;
 static uint32_t pci_bios_mem_addr;
 static uint32_t pci_bios_bigmem_addr;
 /* host irqs corresponding to PCI irqs A-D */
-static uint8_t pci_irqs[4] = { 10, 10, 11, 11 };
+static uint8_t pci_irqs[4] = { 10, 11, 7, 3 };
 static PCIDevice i440_pcidev;
 
 static void pci_config_writel(PCIDevice *d, uint32_t addr, uint32_t val)
 {
     outl(0xcf8, 0x80000000 | (d->bus << 16) | (d->devfn << 8) | (addr & 0xfc));
     outl(0xcfc, val);
 }
 
 static void pci_config_writew(PCIDevice *d, uint32_t addr, uint32_t val)
 {
* Re: Stable kvm version ?
2008-07-09 0:16 ` David S. Ahern
@ 2008-07-09 17:51 ` Freddie Cash
0 siblings, 0 replies; 5+ messages in thread
From: Freddie Cash @ 2008-07-09 17:51 UTC (permalink / raw)
To: kvm
On Tue, Jul 8, 2008 at 5:16 PM, David S. Ahern <daahern@cisco.com> wrote:
> There's a bug opened for the network lockups -- see
> http://sourceforge.net/tracker/index.php?func=detail&aid=1802082&group_id=180599&atid=893831
>
> Based on my testing I've found that the e1000 has the lowest overhead
> (e.g., lowest irq and softirq times in the guest). I have not seen any
> lockups with the network using the e1000 nic, and a couple of months ago
> I was able to run a reasonably intensive network load continuously for
> several days.
>
> However, the duration tests I've run were with a modified BIOS. Months
> ago when I was digging into the network lockups I was comparing
> interrupt allocations to a DL320G3 running a RHEL3/4 load natively. I
> noticed no interrupts were shared on bare hardware, while in my RHEL3/4
> based kvm guests I was seeing interrupt sharing. So, I patched the bios
> (see attached) to get a different usage.
>
> I have not had time to do the due diligence to see if the stability was
> due to kvm updates or my bios change. If you have the time I'd be
> interested in knowing how the bios change works for you -- if you still
> see lockups.
This bug report is similar to the issue I'm seeing. In our case, I'm
booting off a 32-bit Knoppix 5.3 DVD ISO, mounting the virtual
partitions, and running rsync from another server on the network.
Everything is connected via gigabit NICs and switch ports.
Host has a kvmbr0 using bond0 as the physical interface. bond0
combines the 4 ports on an Intel PRO/1000MT PCIe NIC, using
mode=balance-tlb.
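[On Debian that topology is usually declared in /etc/network/interfaces; a sketch assuming the ifenslave-2.6 and bridge-utils packages, with illustrative addresses and slave names:]

```
auto bond0
iface bond0 inet manual
    slaves eth0 eth1 eth2 eth3
    bond_mode balance-tlb
    bond_miimon 100

auto kvmbr0
iface kvmbr0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    bridge_ports bond0
```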
Host is running 64-bit Debian Lenny, with kvm-70 packages and 2.6.24
kernel, using the kvm/kvm-amd modules that ship with the kernel.
Hardware:
Tyan h2000M motherboard
2x dual-core Opteron 2220 CPUs at 2.8 GHz
8 GB ECC DDR2-667 SD-RAM (4 GB per socket)
12x 500 GB SATA-II HDs in RAID6
3Ware 9650-ML16 PCIe RAID controller
The guests are using -net tap.
Using rtl8139, I can run rsync until the cows come home (it runs
through cron twice a day, but I've done manual runs 6 times
back-to-back, to sync 400 GB of data).
Using e1000, the guest networking will die within minutes of starting
rsync, every time; it won't last more than 15 minutes. ifdown/ifup eth0
will bring the link back to life, but the rsync process has to be
restarted.
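[Since ifdown/ifup recovers the link, a crude watchdog can paper over the problem until the underlying bug is fixed. A hypothetical sketch -- the interface name, gateway address, and interval are all assumptions:]

```shell
# Hypothetical link watchdog; adjust IFACE/GATEWAY for the guest.
IFACE="${IFACE:-eth0}"
GATEWAY="${GATEWAY:-192.168.0.1}"

link_alive() {
    # One ping with a 2-second timeout; success means the link is up.
    ping -c 1 -W 2 "$GATEWAY" >/dev/null 2>&1
}

bounce_if() {
    # Recovers the link, but any in-flight transfer (e.g. a running
    # rsync) still has to be restarted by hand.
    ifdown "$IFACE" && ifup "$IFACE"
}

# Example loop (disabled by default so the functions can be sourced):
if [ "${RUN_WATCHDOG:-0}" = 1 ]; then
    while sleep 60; do
        link_alive || bounce_if
    done
fi
```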
Using virtio-net (booting the guest OS using kernel 2.6.24, not
Knoppix), the guest networking dies within minutes as well, but it
lasts a little longer than e1000, and is considerably faster.
Guests are started with:
/usr/bin/kvm -name webmail -smp 1 -m 3072 -vnc :05 -daemonize \
  -localtime -usb -usbdevice tablet \
  -net nic,macaddr=00:16:3e:00:00:05,model=rtl8139 -net tap,ifname=tap05 \
  -pidfile /var/run/kvm/webmail.pid -boot d -no-reboot \
  -drive index=0,media=disk,if=ide,file=/dev/mapper/vol0-webmail--boot \
  -drive index=1,media=disk,if=ide,file=/dev/mapper/vol0-webmail--storage \
  -drive index=2,media=cdrom,if=ide,file=/home/iso/KNOPPIX_V5.3.1DVD-2008-03-26-EN.iso
The number of guests running doesn't make a difference; it happens with
just one or all 6 running. But only the network for one guest dies at a
time.
--
Freddie Cash
fjwcash@gmail.com