* Performance results on inline data support
From: Li Wang @ 2013-09-30 8:34 UTC
To: ceph-devel@vger.kernel.org; +Cc: Sage Weil
Hi,
We did a performance test of inline data support. The Ceph cluster is
composed of 1 MDS, 1 MON, and 6 OSDs on an HPC cluster. The test program is
simple: there are 1000-3000 files, each 1 KB in size, and it repeats the
following operations on each file: open(), read(), close(). The total
time is measured with and without inline data support. The results are as
follows (in seconds):
#files    without       with
  1000    17.3674     8.7186
  2000    35.4848    17.7646
  3000    53.2164    26.4374
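Dividing the totals by the file count gives roughly 17.4-17.7 ms per
open/read/close without inline data versus about 8.7-8.9 ms with it, close
to a 2x per-file speedup. A minimal sketch of such a timing loop is shown
below (the file names, error handling, and output format are illustrative
assumptions, not the actual test program):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define NFILES 1000        /* the test used 1000-3000 files */
#define BUFSZ  1024        /* each file is 1 KB */

int main(void)
{
    char buf[BUFSZ];
    char path[64];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < NFILES; i++) {
        /* hypothetical file naming; the thread does not give it */
        snprintf(path, sizeof(path), "file%04d", i);
        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            perror("open");
            exit(1);
        }
        if (read(fd, buf, sizeof(buf)) < 0) {
            perror("read");
            exit(1);
        }
        close(fd);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) +
                  (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d files: %.4f s total, %.3f ms per file\n",
           NFILES, secs, secs * 1000.0 / NFILES);
    return 0;
}

Compile with gcc -O2 (add -lrt on older glibc) and run it in a directory
pre-populated with the 1 KB files, once with inline data enabled and once
without.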
Cheers,
Li Wang
* Re: Performance results on inline data support
From: Mark Nelson @ 2013-09-30 12:39 UTC
To: Li Wang; +Cc: ceph-devel@vger.kernel.org, Sage Weil
On 09/30/2013 03:34 AM, Li Wang wrote:
> Hi,
> We did a performance test of inline data support. The Ceph cluster is
> composed of 1 MDS, 1 MON, and 6 OSDs on an HPC cluster. The test program is
> simple: there are 1000-3000 files, each 1 KB in size, and it repeats the
> following operations on each file: open(), read(), close(). The total
> time is measured with and without inline data support. The results are as
> follows (in seconds):
>
> #files    without       with
>   1000    17.3674     8.7186
>   2000    35.4848    17.7646
>   3000    53.2164    26.4374
Excellent job! Looks like this could make a big difference for certain
workloads. How much data can it store before it switches away from
inlining the data?
>
> Cheers,
> Li Wang
* Re: Performance results on inline data support
From: Li Wang @ 2013-10-01 14:21 UTC
To: Mark Nelson; +Cc: ceph-devel@vger.kernel.org, Sage Weil
Currently it is 4KB, but we will implement it as a tunable parameter.
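For illustration, a rough sketch of what the client read path can do once
inline data is present: if the data came back inline with the inode and the
file fits under the threshold, the read is served from that copy and no OSD
request is needed. The structure, field names, and constant below are
hypothetical simplifications, not the actual kernel or userspace client code:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Hypothetical, simplified inode state for illustration only. */
struct inode_info {
    uint64_t size;
    int      data_is_inline;    /* set when the MDS returned inline data */
    char     inline_buf[4096];  /* holds up to the inline limit */
};

#define INLINE_THRESHOLD 4096   /* the current 4 KB limit, to become tunable */

/* Serve a read from the inline copy when possible; otherwise the caller
 * would fall back to a normal object read from the OSDs. */
ssize_t do_read(struct inode_info *in, char *out, size_t len, off_t off)
{
    if (in->data_is_inline && in->size <= INLINE_THRESHOLD) {
        if ((uint64_t)off >= in->size)
            return 0;                        /* read past EOF */
        if ((uint64_t)off + len > in->size)
            len = in->size - (uint64_t)off;
        memcpy(out, in->inline_buf + off, len);
        return (ssize_t)len;                 /* no OSD round trip */
    }
    return -1;   /* placeholder: issue the usual OSD read */
}

int main(void)
{
    struct inode_info in = { .size = 11, .data_is_inline = 1 };
    char out[16] = { 0 };

    memcpy(in.inline_buf, "hello world", 11);
    ssize_t n = do_read(&in, out, sizeof(out), 0);
    printf("read %zd bytes: %s\n", n, out);
    return 0;
}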
Cheers,
Li Wang
On 09/30/2013 08:39 PM, Mark Nelson wrote:
> On 09/30/2013 03:34 AM, Li Wang wrote:
>> Hi,
>> We did a performance test of inline data support. The Ceph cluster is
>> composed of 1 MDS, 1 MON, and 6 OSDs on an HPC cluster. The test program is
>> simple: there are 1000-3000 files, each 1 KB in size, and it repeats the
>> following operations on each file: open(), read(), close(). The total
>> time is measured with and without inline data support. The results are as
>> follows (in seconds):
>>
>> #files    without       with
>>   1000    17.3674     8.7186
>>   2000    35.4848    17.7646
>>   3000    53.2164    26.4374
>
> Excellent job! Looks like this could make a big difference for certain
> workloads. How much data can it store before it switches away from
> inlining the data?
>
>>
>> Cheers,
>> Li Wang
>
>
* Re: Performance results on inline data support
From: Mark Nelson @ 2013-10-01 14:29 UTC
To: Li Wang; +Cc: ceph-devel@vger.kernel.org, Sage Weil
Great. I'd be curious to know what the right limits are. :)
Mark
On 10/01/2013 09:21 AM, Li Wang wrote:
> Currently it is 4KB, but we will implement it as a tunable parameter.
>
> Cheers,
> Li Wang
>
> On 09/30/2013 08:39 PM, Mark Nelson wrote:
>> On 09/30/2013 03:34 AM, Li Wang wrote:
>>> Hi,
>>> We did a performance test of inline data support. The Ceph cluster is
>>> composed of 1 MDS, 1 MON, and 6 OSDs on an HPC cluster. The test program is
>>> simple: there are 1000-3000 files, each 1 KB in size, and it repeats the
>>> following operations on each file: open(), read(), close(). The total
>>> time is measured with and without inline data support. The results are as
>>> follows (in seconds):
>>>
>>> #files    without       with
>>>   1000    17.3674     8.7186
>>>   2000    35.4848    17.7646
>>>   3000    53.2164    26.4374
>>
>> Excellent job! Looks like this could make a big difference for certain
>> workloads. How much data can it store before it switches away from
>> inlining the data?
>>
>>>
>>> Cheers,
>>> Li Wang
>>
>>
* Re: Performance results on inline data support
From: Li Wang @ 2013-10-01 14:40 UTC
To: Mark Nelson; +Cc: ceph-devel@vger.kernel.org, Sage Weil
True. 'How small is small' seems to be a common issue for small-file
optimization. I guess there is an optimal threshold, probably depending on
the workload and perhaps even on the number of MDSs. Inlining is clearly
advantageous for read-only applications, but it comes at the cost of
polluting the MDS's metadata cache, or at least leaving less room for
metadata.
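As a back-of-the-envelope illustration of that cache cost (the per-inode
metadata footprint used here is an assumed round number, not a measured
Ceph figure):

#include <stdio.h>

int main(void)
{
    /* Assumed figures, for illustration only. */
    const double meta_kb   = 2.0;      /* assumed metadata per cached inode */
    const double inline_kb = 4.0;      /* inline limit discussed in this thread */
    const double cache_kb  = 1024.0 * 1024.0;   /* a 1 GB MDS cache */

    printf("inodes cacheable without inline data: %.0f\n",
           cache_kb / meta_kb);
    printf("inodes cacheable with 4 KB inline data: %.0f\n",
           cache_kb / (meta_kb + inline_kb));
    return 0;
}

With these assumptions a 1 GB cache goes from roughly 524K cacheable inodes
to about 175K if every cached inode also carried a full 4 KB of inline data
(a worst case; small files only carry their actual size), which is why the
threshold, and which files get inlined, matters.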
Cheers,
Li Wang
On 10/01/2013 10:29 PM, Mark Nelson wrote:
> Great. I'd be curious to know what the right limits are. :)
>
> Mark
>
> On 10/01/2013 09:21 AM, Li Wang wrote:
>> Currently it is 4KB, but we will implement it as a tunable parameter.
>>
>> Cheers,
>> Li Wang
>>
>> On 09/30/2013 08:39 PM, Mark Nelson wrote:
>>> On 09/30/2013 03:34 AM, Li Wang wrote:
>>>> Hi,
>>>> We did a performance test of inline data support. The Ceph cluster is
>>>> composed of 1 MDS, 1 MON, and 6 OSDs on an HPC cluster. The test
>>>> program is simple: there are 1000-3000 files, each 1 KB in size, and it
>>>> repeats the following operations on each file: open(), read(), close().
>>>> The total time is measured with and without inline data support. The
>>>> results are as follows (in seconds):
>>>>
>>>> #files    without       with
>>>>   1000    17.3674     8.7186
>>>>   2000    35.4848    17.7646
>>>>   3000    53.2164    26.4374
>>>
>>> Excellent job! Looks like this could make a big difference for certain
>>> workloads. How much data can it store before it switches away from
>>> inlining the data?
>>>
>>>>
>>>> Cheers,
>>>> Li Wang
>>>
>>>
>
>