From mboxrd@z Thu Jan 1 00:00:00 1970
From: Stefan Priebe
Subject: Re: Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?
Date: Wed, 02 Jul 2014 21:00:51 +0200
Message-ID: <53B456E3.1060909@profihost.ag>
References: <539B11C4.1010400@profihost.ag> <53AD1411.101@profihost.ag> <53B40292.2000108@profihost.ag> <53B40526.7080907@profihost.ag>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Content-Transfer-Encoding: 7bit
Errors-To: ceph-users-bounces-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
Sender: "ceph-users"
To: Gregory Farnum
Cc: "ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org", ceph-users
List-Id: ceph-devel.vger.kernel.org

On 02.07.2014 16:00, Gregory Farnum wrote:
> Yeah, it's fighting for attention with a lot of other urgent stuff. :(
>
> Anyway, even if you can't look up any details or reproduce at this
> time, I'm sure you know what shape the cluster was (number of OSDs,
> running on SSDs or hard drives, etc), and that would be useful
> guidance. :)

Sure.

Number of OSDs: 24
Each OSD runs on an SSD, benchmarked with fio before installing Ceph
(70,000 IOPS at 4k write, 580 MB/s sequential write with 1MB blocks).
Single Xeon E5-1620 v2 @ 3.70GHz
48GB RAM

Stefan

> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Wed, Jul 2, 2014 at 6:12 AM, Stefan Priebe - Profihost AG wrote:
>>
>> On 02.07.2014 15:07, Haomai Wang wrote:
>>> Could you give some perf counters from the rbd client side? Such as op latency?
>>
>> Sorry, I don't have any counters. As this mail went unseen for some days, I
>> thought nobody had an idea or could help.
>>
>> Stefan
>>
>>> On Wed, Jul 2, 2014 at 9:01 PM, Stefan Priebe - Profihost AG wrote:
>>>> On 02.07.2014 00:51, Gregory Farnum wrote:
>>>>> On Thu, Jun 26, 2014 at 11:49 PM, Stefan Priebe - Profihost AG wrote:
>>>>>> Hi Greg,
>>>>>>
>>>>>> On 26.06.2014 02:17, Gregory Farnum wrote:
>>>>>>> Sorry we let this drop; we've all been busy traveling and things.
>>>>>>>
>>>>>>> There have been a lot of changes to librados between Dumpling and
>>>>>>> Firefly, but we have no idea what would have made it slower. Can you
>>>>>>> provide more details about how you were running these tests?
>>>>>>
>>>>>> It's just a normal fio run:
>>>>>> fio --ioengine=rbd --bs=4k --name=foo --invalidate=0
>>>>>> --readwrite=randwrite --iodepth=32 --rbdname=fio_test2 --pool=teststor
>>>>>> --runtime=90 --numjobs=32 --direct=1 --group
>>>>>>
>>>>>> running once with the Firefly libs and once with the Dumpling libs.
>>>>>> The target is always the same pool on a Firefly Ceph storage cluster.
>>>>>
>>>>> What's the backing cluster you're running against? What kind of CPU
>>>>> usage do you see with both? 25k IOPS is definitely getting up there,
>>>>> but I'd like some guidance about whether we're looking for a reduction
>>>>> in parallelism, or an increase in per-op costs, or something else.
>>>>
>>>> Hi Greg,
>>>>
>>>> I don't have that test cluster anymore. It had to go into production
>>>> with Dumpling.
>>>>
>>>> So I can't tell you.
>>>>
>>>> Sorry.
>>>>
>>>> Stefan
>>>>
>>>>> -Greg
>>>>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>>>> --
>>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>>>> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
>>>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>>>>
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
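[Editor's note: the thread above compares the same fio rbd job linked against the Dumpling and the Firefly client libraries. As a minimal sketch of how such an A/B run could be driven, assuming side-by-side library installs under hypothetical /opt/ceph-dumpling and /opt/ceph-firefly prefixes, and assuming the truncated `--group` in the quoted command was meant to be fio's `--group_reporting`:]

```shell
#!/bin/sh
# Sketch only: run the same fio rbd job twice, pointing the dynamic linker
# at a different librbd/librados build each time. The /opt/... prefixes and
# the DRY_RUN toggle are illustrative assumptions, not from the thread.
FIO_ARGS="--ioengine=rbd --bs=4k --name=foo --invalidate=0 \
--readwrite=randwrite --iodepth=32 --rbdname=fio_test2 --pool=teststor \
--runtime=90 --numjobs=32 --direct=1 --group_reporting"

DRY_RUN="${DRY_RUN:-1}"   # defaults to printing the commands; set DRY_RUN=0 to execute

RUN_COUNT=0
for libdir in /opt/ceph-dumpling/lib /opt/ceph-firefly/lib; do
    cmd="LD_LIBRARY_PATH=$libdir fio $FIO_ARGS"
    RUN_COUNT=$((RUN_COUNT + 1))
    if [ "$DRY_RUN" = "1" ]; then
        echo "$cmd"        # dry run: show which library each invocation would load
    else
        eval "$cmd"        # real run: needs fio's rbd engine and both builds installed
    fi
done
```

[By default the script only prints the two command lines; running it for real requires a host with both library builds and fio compiled with rbd engine support, against the same pool and image so the two runs are comparable.]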