* Re: [GSoC]: ceph-mgr: Smarter Reweight-by-Utilization
From: Kefu Chai @ 2017-03-28 13:33 UTC
To: Methuku Karthik; +Cc: ceph-devel, mynaramana
+ ceph-devel
----- Original Message -----
> From: "Methuku Karthik" <kmeth@seas.upenn.edu>
> To: tchaikov@gmail.com, ceph-devel@vger.kernel.org, kchai@redhat.com
> Cc: mynaramana@gmail.com
> Sent: Tuesday, March 28, 2017 4:17:52 AM
> Subject: [GSoC]: ceph-mgr: Smarter Reweight-by-Utilization
>
> Hi Everyone,
>
> My name is Karthik. I am a first-year graduate student in Embedded
> Systems at the University of Pennsylvania. I am an avid C, C++, and
> Python programmer, and I have four years of work experience as a
> software developer at Airbus.
>
> I have been working as a research assistant in the PRECISE lab at the
> University of Pennsylvania, evaluating the performance of Xen's RTDS
> scheduler.
>
> Currently, I am taking a course on distributed systems. As part of
> that course, I am building a small cloud platform using gRPC
> (Google's high-performance, open-source RPC framework) with the
> following features:
>
> (1) A webmail service (SMTP & POP3) to send, receive, and forward
> mail.
> (2) A fault-tolerant backend server that employs a key-value store
> similar to Google's Bigtable.
> (3) The entire Bigtable is distributed across multiple backend
> servers.
> (4) A frontend HTTP server to process requests from a browser,
> retrieve the appropriate data from the backend servers, and construct
> the HTTP response for the GUI.
> (5) A storage service (similar to Google Drive) with support for
> navigating directories, creating folders, and uploading and
> downloading any file type.
> (6) Fault tolerance via quorum-based causal replication across
> multiple nodes, with load balancing done by dynamically distributing
> users among different groups (see the sketch after this list).
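>
> To make (6) concrete, here is a rough, untested Python sketch of the
> quorum write/read idea; the names are placeholders, not code from my
> actual project:
>
>     import time
>
>     N, W, R = 3, 2, 2   # replicas, write quorum, read quorum (W+R > N)
>
>     class Replica:
>         def __init__(self):
>             self.store = {}   # key -> (timestamp, value)
>
>         def put(self, key, ts, value):
>             # last-writer-wins, ordered by timestamp
>             if key not in self.store or self.store[key][0] < ts:
>                 self.store[key] = (ts, value)
>             return True
>
>         def get(self, key):
>             return self.store.get(key, (0, None))
>
>     def quorum_put(replicas, key, value):
>         ts = time.time()
>         acks = sum(1 for r in replicas if r.put(key, ts, value))
>         return acks >= W
>
>     def quorum_get(replicas, key):
>         # ask R replicas, return the newest version seen
>         replies = [r.get(key) for r in replicas[:R]]
>         return max(replies)[1]
>
>     replicas = [Replica() for _ in range(N)]
>     assert quorum_put(replicas, "user42", "hello")
>     print(quorum_get(replicas, "user42"))   # -> hello
>
> Because W + R > N, any read quorum overlaps any write quorum, so a
> read always sees the latest completed write.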
>
> I compiled and hosted a small cluster to observe how Ceph stores data
> and how the distribution of data is maintained while ensuring fault
> tolerance. With the help of my friend Myna (cc'ed), I was able to
> come up to speed, and I performed a few experiments to observe how
> data is shuffled after bringing one OSD down or adding one.
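>
> For those experiments I used a small helper along the following lines
> to watch per-OSD utilization (a sketch; the exact JSON field names
> may differ between Ceph releases):
>
>     import json, subprocess
>
>     def osd_utilization():
>         # "ceph osd df" reports per-OSD usage; parse the JSON form
>         out = subprocess.check_output(
>             ["ceph", "osd", "df", "--format", "json"])
>         return {n["id"]: n["utilization"]
>                 for n in json.loads(out)["nodes"]}
>
>     util = osd_utilization()
>     avg = sum(util.values()) / len(util)
>     for osd, u in sorted(util.items()):
>         print("osd.%d  %5.1f%%  (%+.1f%% vs avg)" % (osd, u, u - avg))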
>
> I am currently reviewing the literature on the CRUSH algorithm and
> studying the Ceph architecture.
>
> It would be exciting to work on the project "ceph-mgr: Smarter
> Reweight-by-Utilization".
>
> Can you point me to any resources that explain how to evaluate the
> performance of a storage system?
I think the focus of "smarter reweight-by-utilization" would be to
achieve a better-balanced distribution of data in the cluster. There
has been a lot of related discussion on our mailing list recently.
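
For reference, the existing behavior is roughly captured by the
following simplified Python sketch (an illustration only, not the
actual Ceph implementation in the monitor): OSDs whose utilization
exceeds the cluster average by more than an oversubscription threshold
get their reweight scaled down, with the step size capped per round.

    def reweight_by_utilization(util, weights, oload=1.20,
                                max_change=0.05):
        """util: osd -> used fraction; weights: osd -> reweight."""
        avg = sum(util.values()) / len(util)
        new_weights = dict(weights)
        for osd, u in util.items():
            if u > avg * oload:
                # scale toward the average, limiting the step size
                w = weights[osd] * (avg / u)
                new_weights[osd] = max(w, weights[osd] - max_change)
        return new_weights

    util = {0: 0.70, 1: 0.45, 2: 0.45}
    print(reweight_by_utilization(util, {0: 1.0, 1: 1.0, 2: 1.0}))
    # -> {0: 0.95, 1: 1.0, 2: 1.0}

A smarter version might, for example, also raise the reweight of
under-utilized OSDs rather than only penalizing overloaded ones.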
>
> What kind of factors should one consider when evaluating the
> performance of a storage system?
Latency and throughput, availability, cost, flexibility, etc. There
are lots of factors one could consider, but it depends on the use
case.
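
As a toy illustration of the latency/throughput side, something along
these lines (real tools such as fio or "rados bench" do this
properly):

    import time

    def benchmark(op, n=1000):
        latencies = []
        start = time.time()
        for _ in range(n):
            t0 = time.time()
            op()                      # the operation under test
            latencies.append(time.time() - t0)
        elapsed = time.time() - start
        latencies.sort()
        return {"ops_per_sec": n / elapsed,
                "p50_ms": latencies[n // 2] * 1000,
                "p99_ms": latencies[int(n * 0.99)] * 1000}

    # stand-in for a real read/write against the cluster
    print(benchmark(lambda: sum(range(10000))))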
> I can think of the response time for reading, writing, and deleting
> a file, how quickly a node is configured into a cluster, or how
> quickly the cluster heals after a node dies.
>
> Please suggest some existing simple beginner bugs that would give me
> a chance to explore the code.
>
I think it's important for you to find one at http://tracker.ceph.com
or, better still, to identify a bug yourself by using Ceph.
> I'm very much interested in Ceph. I want to become a Ceph contributor in
> the near future.
>
> Thank you very much for your help!
>
> Best,
> Karthik
>
* Re: [GSoC]: ceph-mgr: Smarter Reweight-by-Utilization
From: Methuku Karthik @ 2017-03-28 23:54 UTC
To: Kefu Chai; +Cc: ceph-devel, Myna V
Hi Kefu Chai,
Thanks for the response.
On Tue, Mar 28, 2017 at 9:33 AM, Kefu Chai <kchai@redhat.com> wrote:
> [snip]
>
>> Please suggest some existing simple beginner bugs that would give
>> me a chance to explore the code.
>>
>
> I think it's important for you to find one at http://tracker.ceph.com
> or, better still, to identify a bug yourself by using Ceph.
>
I looked into the bugs filed against ceph-mgr and found Bug #17453:
"ceph-mgr doesn't forget about MDS daemons that have gone away".
Do you think it would be a good start?
Best,
Karthik