From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anthony Liguori
Subject: Re: KVM, SCSI & OpenSolaris
Date: Thu, 10 Jul 2008 15:50:41 -0500
Message-ID: <48767621.4070805@codemonkey.ws>
References: 
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: kvm@vger.kernel.org
To: james
Return-path: 
Received: from py-out-1112.google.com ([64.233.166.179]:59851 "EHLO py-out-1112.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1753514AbYGJUvI (ORCPT ); Thu, 10 Jul 2008 16:51:08 -0400
Received: by py-out-1112.google.com with SMTP id p76so1960381pyb.10 for ; Thu, 10 Jul 2008 13:51:07 -0700 (PDT)
In-Reply-To: 
Sender: kvm-owner@vger.kernel.org
List-ID: 

james wrote:
> The use case I am trying to achieve:
>
> 1. I have an AMD X4 with 4GB of RAM that is the central server. Given
> the level of horsepower it has, the server fills multiple roles: email,
> web server, mythbuntu backend, and file server.
> The base/host OS is Ubuntu 8.04 using standard packages, so I am using
> KVM-62. While I am prepared to uninstall this and move to a
> self-compiled KVM-70, I want to determine whether that move is worth it
> based on my target outcome, or whether I need to move to an alternative
> VM solution.
>
> 2. The desire is to move the file server role from LVM to ZFS raidz to
> allow "easy" upgrades of disk size on a disk-by-disk basis (not
> available as an option under normal RAID 5), e.g. pull a single 320GB
> disk out of the raidz, put in a 500GB or 750GB disk, and it all "just
> works", with the extra storage becoming available.
>
> 3. I am looking to achieve this move by using a VM running either
> OpenSolaris or Nexenta, the idea being to have the VM act as a NAS,
> using the disks directly as block devices. So the setup is to have a
> boot image (which can be IDE) and four other direct-access block
> devices (these need to be SCSI, as there are not enough IDE devices
> available).
> Note that these are all 64-bit installs, based on the advice that ZFS
> needs a 64-bit OS to behave well.
>
> 4. Options tried:
> a] I have tried FreeBSD 7 using ZFS under this VM model. However, when
> I put it under load I get SCSI errors and the VM segfaults/core-dumps.
> b] I have been trying to get OpenSolaris and Nexenta (basically the
> same at the core) working, but neither recognises the KVM SCSI
> controller. It seems to be coming through with the ID PCI1000,12, which
> is an LSI53C895A PCI to Ultra2 SCSI controller that is supposed to use
> the symhisl driver. From good old Google I have found that there are
> supposed to be problems with this driver and that it will not be ported
> to 64-bit. Indeed, looking at the OpenSolaris /etc/driver_aliases, this
> driver-to-PCI mapping has been dropped.
>
> 5. So the questions are:
> a] How stable/robust is the SCSI implementation under KVM? i.e. are
> there known weaknesses here that moving to a higher KVM version will
> address, such that using FreeBSD 7 will be a viable option?

It hasn't been tested much (if at all) with the FreeBSD 7 drivers. In 
fact, I don't think there's been much testing at all of FreeBSD as a 
guest in KVM.

> b] It would appear that KVM has a general exposure in the SCSI space
> for OpenSolaris and its variants. Due to dropped driver support, the
> current SCSI implementation in KVM will no longer work with
> OpenSolaris (at least in its 64-bit variant). Or is this resolved in
> later KVM versions (after KVM-62)?

I don't quite follow what you are saying. Are you saying that 
OpenSolaris no longer supports the SCSI card we emulate? That seems 
unfortunate on their part. I would also be surprised by that, since the 
same SCSI implementation is used by Xen and Sun is heavily invested in 
OpenSolaris for Xen at this point.

Regards,

Anthony Liguori

> All help and suggestions gratefully received.
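As a rough sketch, the layout described in points 2 and 3 of the quoted mail might look something like the following on a KVM-62-era command line, with the disk-by-disk raidz swap done inside the guest afterwards. All paths, device names, and sizes here are placeholders, not details from the original mail:

```shell
# Hypothetical host-side invocation for the layout in point 3:
# one IDE boot image plus four whole disks attached as SCSI.
kvm -m 1024 \
    -drive file=/var/lib/kvm/nas-boot.img,if=ide,boot=on \
    -drive file=/dev/sdb,if=scsi \
    -drive file=/dev/sdc,if=scsi \
    -drive file=/dev/sdd,if=scsi \
    -drive file=/dev/sde,if=scsi

# Inside the guest, the disk-by-disk upgrade from point 2 is one
# "zpool replace" per swapped disk; the pool only grows once every
# member of the raidz has been replaced with a larger disk.
#   zpool replace tank c1t1d0        # after physically swapping the disk
#   zpool status tank                # wait for the resilver to complete
```

This is an invocation sketch only; whether the emulated LSI controller is usable at all from a 64-bit OpenSolaris guest is exactly the open question of the thread.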
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html