Subject: [Qemu-devel] Re: [ANNOUNCE] Sheepdog: Distributed Storage System for KVM
From: Avi Kivity
Date: Thu, 22 Oct 2009 17:30:07 +0200
To: MORITA Kazutaka
Cc: linux-fsdevel@vger.kernel.org, qemu-devel@nongnu.org, kvm@vger.kernel.org

On 10/21/2009 07:13 AM, MORITA Kazutaka wrote:
> Hi everyone,
>
> Sheepdog is a distributed storage system for KVM/QEMU. It provides
> highly available block level storage volumes to VMs like Amazon EBS.
> Sheepdog supports advanced volume management features such as snapshot,
> cloning, and thin provisioning. Sheepdog runs on several tens or hundreds
> of nodes, and the architecture is fully symmetric; there is no central
> node such as a meta-data server.

Very interesting!

From a very brief look at the code, it looks like the sheepdog block
format driver is a network client that is able to access highly
available images, yes?

If so, is it reasonable to compare this to a cluster file system setup
(like GFS) with images as files on this filesystem? The difference
would be that clustering is implemented in userspace in sheepdog, but
in the kernel for a clustering filesystem.

How is load balancing implemented? Can you move an image transparently
while a guest is running? Will an image be moved closer to its guest?

Can you stripe an image across nodes?

Do you support multiple guests accessing the same image?

What about fault tolerance - storing an image redundantly on multiple
nodes?

--
error compiling committee.c: too many arguments to function
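
(For readers who missed the original announcement: it sketched the
intended usage roughly as below. I'm going from memory here, so the VDI
names, snapshot tag, and size are illustrative - the "sheepdog:" prefix
is what selects the block driver in qemu and qemu-img:

  # create a thin-provisioned volume (VDI) in the cluster
  qemu-img create sheepdog:Alice 256G

  # boot a guest directly from the volume
  qemu-system-x86_64 -hda sheepdog:Alice

  # snapshot the volume, then clone a new VDI from that snapshot
  qemu-img snapshot -c snap1 sheepdog:Alice
  qemu-img create -b sheepdog:Alice:snap1 sheepdog:Bob

Note there is no local image file anywhere in this flow; the driver
resolves sheepdog: names over the network, which is what prompts the
clustering questions above.)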