From: Christian Brunner
Date: Thu, 20 May 2010 23:18:34 +0200
Subject: Re: [Qemu-devel] [RFC PATCH 1/1] ceph/rbd block driver for qemu-kvm
To: Blue Swirl
Cc: ceph-devel@vger.kernel.org, qemu-devel@nongnu.org, kvm@vger.kernel.org

2010/5/20 Blue Swirl:
> On Wed, May 19, 2010 at 7:22 PM, Christian Brunner wrote:
>> The attached patch is a block driver for the distributed file system
>> Ceph (http://ceph.newdream.net/). The driver uses librados (which is
>> part of the Ceph server) for direct access to the Ceph object store
>> and runs entirely in userspace. It is therefore called "rbd" - rados
>> block device.
>>
>> To compile the driver a recent version of Ceph (>= 0.20.1) is needed,
>> and you have to pass "--enable-rbd" when running configure.
>>
>> Additional information is available on the Ceph wiki:
>>
>> http://ceph.newdream.net/wiki/Kvm-rbd
>
> I have no idea whether it makes sense to add Ceph (no objection
> either). I have some minor comments below.

Thanks for your comments. I'll send an updated patch in a few days.

Having a central storage system is essential in larger hosting
environments: it lets you move guest systems from one node to another
easily (live migration or dynamic restart). Traditionally this has been
done with a SAN, iSCSI or NFS, but most of these systems don't scale
very well, and the cost of high availability is substantial.

With newer approaches like Sheepdog or Ceph, things get a lot cheaper
and you can scale the system without disrupting your service. The
concepts are quite similar to what Amazon does in its EC2 environment,
but Amazon certainly won't publish that as open source anytime soon.

Both projects have advantages and disadvantages. Ceph is a bit more
universal, as it implements a whole filesystem. Sheepdog is more
feature-complete with regard to managing images (e.g. snapshots). Both
projects still need some work to become stable, but they are well on
their way.

I would really like to see both drivers in the qemu tree, as they are
key to a design shift in how datacenter storage is built.

Christian
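
For anyone curious what "direct access to the object store" looks like
from userspace, here is a minimal sketch against the librados C API.
The call names below are those of the current librados.h; the interface
shipped with ceph 0.20.1 may differ slightly, and the pool and object
names are only placeholders, not anything the rbd driver requires:

/* Minimal librados sketch: connect, write one object, read it back.
 * Assumes a reachable cluster and a default ceph.conf; pool "rbd" and
 * object "demo-obj" are illustrative only. */
#include <stdio.h>
#include <string.h>
#include <rados/librados.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    char buf[64];
    int r;

    /* Create a cluster handle and read the default ceph.conf. */
    if (rados_create(&cluster, NULL) < 0)
        return 1;
    rados_conf_read_file(cluster, NULL);

    /* Connect to the monitors. */
    if (rados_connect(cluster) < 0) {
        rados_shutdown(cluster);
        return 1;
    }

    /* Open an I/O context on the "rbd" pool. */
    if (rados_ioctx_create(cluster, "rbd", &io) < 0) {
        rados_shutdown(cluster);
        return 1;
    }

    /* Write one small object and read it back, entirely in userspace. */
    rados_write(io, "demo-obj", "hello rados", strlen("hello rados"), 0);
    r = rados_read(io, "demo-obj", buf, sizeof(buf) - 1, 0);
    if (r >= 0) {
        buf[r] = '\0';
        printf("read back: %s\n", buf);
    }

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}

It links with -lrados and needs no kernel support, which is exactly why
the qemu driver can stay in userspace.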