From mboxrd@z Thu Jan 1 00:00:00 1970
From: Josef Bacik
Subject: Re: Ceph on btrfs 3.4rc
Date: Thu, 3 May 2012 10:13:55 -0400
Message-ID: <20120503141354.GC1914@localhost.localdomain>
References: <20120424152141.GB3326@localhost.localdomain>
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Cc: Sage Weil, Josef Bacik, linux-btrfs@vger.kernel.org, ceph-devel@vger.kernel.org
To: Christian Brunner
Return-path:
In-Reply-To:
List-ID:

On Fri, Apr 27, 2012 at 01:02:08PM +0200, Christian Brunner wrote:
> On 24 April 2012 18:26, Sage Weil wrote:
> > On Tue, 24 Apr 2012, Josef Bacik wrote:
> >> On Fri, Apr 20, 2012 at 05:09:34PM +0200, Christian Brunner wrote:
> >> > After running ceph on XFS for some time, I decided to try btrfs again.
> >> > Performance with the current "for-linux-min" branch and big metadata
> >> > is much better. The only problem (?) I'm still seeing is a warning
> >> > that seems to occur from time to time:
> >
> > Actually, before you do that... we have a new tool,
> > test_filestore_workloadgen, that generates a ceph-osd-like workload on the
> > local file system. It's a subset of what a full OSD might do, but if
> > we're lucky it will be sufficient to reproduce this issue. Something like
> >
> >   test_filestore_workloadgen --osd-data /foo --osd-journal /bar
> >
> > will hopefully do the trick.
> >
> > Christian, maybe you can see if that is able to trigger this warning?
> > You'll need to pull it from the current master branch; it wasn't in the
> > last release.
>
> Trying to reproduce with test_filestore_workloadgen didn't work for
> me. So here are some instructions on how to reproduce with a minimal
> ceph setup.
>
> You will need a single system with two disks and a bit of memory.
>
> - Compile and install ceph (detailed instructions:
>   http://ceph.newdream.net/docs/master/ops/install/mkcephfs/)
>
> - For the test setup I've used two tmpfs files as journal devices.
> To create these, do the following:
>
> # mkdir -p /ceph/temp
> # mount -t tmpfs tmpfs /ceph/temp
> # dd if=/dev/zero of=/ceph/temp/journal0 count=500 bs=1024k
> # dd if=/dev/zero of=/ceph/temp/journal1 count=500 bs=1024k
>
> - Now you should create and mount btrfs. Here is what I did:
>
> # mkfs.btrfs -l 64k -n 64k /dev/sda
> # mkfs.btrfs -l 64k -n 64k /dev/sdb
> # mkdir /ceph/osd.000
> # mkdir /ceph/osd.001
> # mount -o noatime,space_cache,inode_cache,autodefrag /dev/sda /ceph/osd.000
> # mount -o noatime,space_cache,inode_cache,autodefrag /dev/sdb /ceph/osd.001
>
> - Create /etc/ceph/ceph.conf similar to the attached ceph.conf. You
> will probably have to change the btrfs devices and the hostname
> (os39).
>
> - Create the ceph filesystems:
>
> # mkdir /ceph/mon
> # mkcephfs -a -c /etc/ceph/ceph.conf
>
> - Start ceph (e.g. "service ceph start")
>
> - Now you should be able to use ceph - "ceph -s" will tell you about
> the state of the ceph cluster.
>
> - "rbd create --size 100 testimg" will create an rbd image on the ceph cluster.

It's failing here: http://fpaste.org/e3BG/

Thanks,

Josef
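[For readers without the attachment: the ceph.conf referenced above is not included in this message. The fragment below is only a hypothetical sketch of what a minimal mkcephfs-era config matching the paths in the instructions might look like; the section names (osd.000/osd.001), the mon address, and `auth supported = none` are assumptions, not Christian's actual settings.]

```ini
; Hypothetical minimal ceph.conf -- values inferred from the steps above,
; not taken from the (missing) attachment. Adjust host and mon addr.
[global]
        auth supported = none

[mon]
        mon data = /ceph/mon

[mon.0]
        host = os39
        mon addr = 127.0.0.1:6789

[osd]
        osd journal size = 500

[osd.000]
        host = os39
        osd data = /ceph/osd.000
        osd journal = /ceph/temp/journal0

[osd.001]
        host = os39
        osd data = /ceph/osd.001
        osd journal = /ceph/temp/journal1
```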