From: Filippos Giannakos <philipgian-Sqt7GMbKoOQ@public.gmane.org>
To: Ian Colle <icolle-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org,
ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Subject: Re: Experiences with Ceph at the June'14 issue of USENIX ;login:
Date: Wed, 4 Jun 2014 17:22:35 +0300 [thread overview]
Message-ID: <20140604142235.GI17479@philipgian-mac> (raw)
In-Reply-To: <1235448490.9762058.1401748668812.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Hello Ian,
Thanks for your interest.
On Mon, Jun 02, 2014 at 06:37:48PM -0400, Ian Colle wrote:
> Thanks, Filippos! Very interesting reading.
>
> Are you comfortable enough yet to remove the RAID-1 from your architecture and
> get all that space back?
Actually, we are not ready to do that yet. There are three major things to
consider.
First, to get rid of the RAID-1 setup we would need to increase the
replication level to at least 3x, so the space gain is not that great to begin
with.
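To make the space math concrete, here is a rough sketch of the usable-capacity ratios. It assumes the current layout is 2x RADOS replication on top of RAID-1 mirrors (an assumption; the email only implies replication below 3x today), and compares it with plain disks under 3x replication:

```python
# Usable-capacity comparison (ratios of raw disk capacity only).
raw = 1.0

# Assumed current layout: each object stored on 2 OSDs (2x replication),
# each OSD backed by a RAID-1 mirror (another 2x) -> 4 physical copies.
usable_current = raw / (2 * 2)   # 0.25 of raw capacity

# Proposed layout: no RAID-1, replication raised to 3x -> 3 copies.
usable_proposed = raw / 3        # ~0.33 of raw capacity

# Relative gain in usable space from making the switch.
gain = usable_proposed / usable_current - 1
print(f"usable now: {usable_current:.2f}, "
      f"after: {usable_proposed:.2f}, gain: {gain:.0%}")
```

Under these assumptions the switch frees only about a third more usable space, which matches the point that the gain "is not that great".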
Second, according to our calculations and previous experience, this operation
could take about a month at our scale. During that period of increased I/O we
might see peaks of performance degradation. Moreover, we do not currently have
the hardware needed to increase the replication level before getting rid of
the RAID setup.
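The month-long estimate can be sketched as a back-of-envelope calculation. All figures below are hypothetical placeholders, not the actual cluster's numbers:

```python
# Back-of-envelope rebalance-duration estimate (all inputs assumed).
data_to_move_tb = 200   # hypothetical amount of data to re-replicate
per_osd_mb_s = 50       # hypothetical sustained recovery rate per OSD
num_osds = 20           # hypothetical OSDs participating in recovery
throttle = 0.1          # hypothetical fraction of bandwidth left for
                        # recovery, to protect client (VM) traffic

aggregate_mb_s = per_osd_mb_s * num_osds * throttle
seconds = data_to_move_tb * 1024 * 1024 / aggregate_mb_s
days = seconds / 86400
print(f"estimated rebalance time: ~{days:.0f} days")
```

With these assumed figures the estimate comes out to roughly three to four weeks; the throttle factor is the key knob, since recovery bandwidth taken too aggressively is exactly what causes the performance-degradation peaks mentioned above.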
Third, we see a few disk failures per month. The RAID-1 setup has allowed us to
replace them seamlessly, without any hiccup or even a clue to the end user that
something went wrong. We could certainly rely on RADOS to avoid data loss, but
relying on RADOS for recovery might introduce some (minor) performance
degradation, especially for the VM I/O traffic.
Kind Regards,
--
Filippos
Thread overview (7+ messages):
2014-06-02 18:32 Experiences with Ceph at the June'14 issue of USENIX ;login: Filippos Giannakos
2014-06-02 18:51 ` [ceph-users] " Patrick McGarry
2014-06-02 21:40 ` Experiences with Ceph at the June'14 issue of USENIX ;login: Robin H. Johnson
2014-06-03 9:12 ` Constantinos Venetsanopoulos
2014-06-02 22:37 ` [ceph-users] Experiences with Ceph at the June'14 issue of USENIX ;login: Ian Colle
[not found] ` <1235448490.9762058.1401748668812.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2014-06-04 14:22 ` Filippos Giannakos [this message]
2014-06-05 6:59 ` Christian Balzer