ceph-devel.vger.kernel.org archive mirror
From: Li Wang <liwang@ubuntukylin.com>
To: Mark Nelson <mark.nelson@inktank.com>
Cc: "ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
	Sage Weil <sage@inktank.com>
Subject: Re: Performance results on inline data support
Date: Tue, 01 Oct 2013 22:40:06 +0800	[thread overview]
Message-ID: <524ADEC6.10405@ubuntukylin.com> (raw)
In-Reply-To: <524ADC5F.4010400@inktank.com>

True. 'How small is small' seems to be a common issue for small-file 
optimization. I suspect there is an optimal threshold, and it likely 
depends on the workload and perhaps even on the number of MDSs. Inlining 
is clearly advantageous for read-mostly applications, but it comes at 
the cost of polluting the MDS's metadata cache; at the very least, it 
leaves less room for the metadata itself.
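The threshold behavior discussed above (4 KB by default, to be made tunable) can be sketched as a simple size check; the names below are illustrative and are not Ceph's actual API:

```python
# Hypothetical sketch of the inline-data decision, assuming a tunable
# size threshold as discussed in this thread. Function and constant
# names are made up for illustration; they are not from Ceph's code.
INLINE_THRESHOLD = 4096  # bytes; the 4 KB default mentioned above


def should_inline(size: int, threshold: int = INLINE_THRESHOLD) -> bool:
    """Keep the file's data inline with its metadata only if it fits
    under the threshold; larger files go out to the OSDs as usual."""
    return size <= threshold


print(should_inline(1024))   # 1 KB file -> True
print(should_inline(65536))  # 64 KB file -> False
```

Under such a scheme, reads of files below the threshold need only the MDS round trip, which is consistent with the roughly 2x speedup reported below for 1 KB files.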

Cheers,
Li Wang

On 10/01/2013 10:29 PM, Mark Nelson wrote:
> Great.  I'd be curious to know what the right limits are. :)
>
> Mark
>
> On 10/01/2013 09:21 AM, Li Wang wrote:
>> Currently it is 4KB, but we will implement it as a tunable parameter.
>>
>> Cheers,
>> Li Wang
>>
>> On 09/30/2013 08:39 PM, Mark Nelson wrote:
>>> On 09/30/2013 03:34 AM, Li Wang wrote:
>>>> Hi,
>>>>    We did a performance test on inline data support, the Ceph
>>>> cluster is
>>>> composed of 1 MDS, 1 MON, 6 OSD on a HPC cluster. The program is
>>>> simple,
>>>> there are 1000 - 3000 files with each being 1KB. The program repeated
>>>> the following processes on each file: open(), read(), close(). The
>>>> total
>>>> time is measured with/without inline data support. The results are as
>>>> follows (seconds),
>>>>
>>>> #files  without    with
>>>> 1000    17.3674      8.7186
>>>> 2000    35.4848      17.7646
>>>> 3000    53.2164      26.4374
>>>
>>> Excellent job!  Looks like this could make a big difference for certain
>>> workloads.  How much data can it store before it switches away from
>>> inlining the data?
>>>
>>>>
>>>> Cheers,
>>>> Li Wang
>>>
>>>
>
>
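The benchmark described in the quoted message (repeating open(), read(), close() over 1000-3000 files of 1 KB each and measuring total time) could be sketched roughly as follows; the paths and setup here are illustrative, and this runs against a local directory rather than a mounted CephFS tree:

```python
# Minimal sketch of the benchmark described above: time open/read/close
# over N small files. For a real measurement the files would live on a
# CephFS mount; here a temporary local directory stands in for it.
import os
import tempfile
import time


def run_benchmark(nfiles: int, filesize: int = 1024) -> float:
    """Create nfiles files of filesize bytes, then time reading them all."""
    with tempfile.TemporaryDirectory() as d:
        for i in range(nfiles):
            with open(os.path.join(d, f"f{i}"), "wb") as f:
                f.write(b"x" * filesize)
        start = time.perf_counter()
        for i in range(nfiles):
            with open(os.path.join(d, f"f{i}"), "rb") as f:
                data = f.read()
            assert len(data) == filesize
        return time.perf_counter() - start


elapsed = run_benchmark(1000)
print(f"read 1000 files in {elapsed:.4f}s")
```

Running this once with inline data enabled and once without would reproduce the "with"/"without" columns in the table above.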


Thread overview: 5+ messages
2013-09-30  8:34 Performance results on inline data support Li Wang
2013-09-30 12:39 ` Mark Nelson
2013-10-01 14:21   ` Li Wang
2013-10-01 14:29     ` Mark Nelson
2013-10-01 14:40       ` Li Wang [this message]
