From: Bryant Ng <bryantng@gmail.com>
To: ceph-devel@vger.kernel.org
Subject: Re: Crush and Monitor questions
Date: Tue, 11 Dec 2012 17:15:36 -0800	[thread overview]
Message-ID: <ka8lrn$o20$1@ger.gmane.org> (raw)
In-Reply-To: <ka84oo$sa3$1@ger.gmane.org>

Sorry, I misread the Hardware Configuration section of the Ceph 
documentation.  I thought one of the Dells was a configuration for the 
monitors, but both the Dell R510 and R515 are OSD configurations.

I had another question about the monitors, though.  What kind of load 
(requests/second) can a monitor node handle?  My understanding is that 
it just returns the cluster map to the client requesting it.  The 
documentation mentions 3 to 5 monitors in a Ceph cluster, but what 
request rate is expected on each of these monitors?
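For context, the crushtool workflow I was referring to looks roughly like this (a sketch assuming a running cluster with the standard Ceph CLI; the file names are just placeholders):

```shell
# Fetch the current (binary) CRUSH map from the monitors
ceph osd getcrushmap -o crush.bin

# Decompile it to an editable text form
crushtool -d crush.bin -o crush.txt

# ... edit crush.txt ...

# Recompile the edited map
crushtool -c crush.txt -o crush.new

# Inject the updated map back into the cluster
ceph osd setcrushmap -i crush.new
```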

thanks.
Bryant


Bryant Ng wrote:
> Hi,
>
> I'm pretty new to Ceph and am just learning about it.
>
> Where are the CRUSH maps stored in Ceph? In the documentation I see you
> use the 'crushtool' to compile and decompile the crush map.  I
> understand that if a single monitor comes online, it can talk to the
> other existing monitors to get the cluster map but how does it work on
> initial startup?  Or what if the entire Ceph cluster goes down because
> of a power failure or something?
>
> What is the recommended hardware configuration for monitors?  In the
> Hardware Recommendation page it says "A monitor requires approximately
> 10GB of storage space per daemon instance."  per daemon instance is
> talking about the monitor daemon, not the osd daemons?
>
> Also further down on that page, it lists some hardware examples where it
> mentions a lighter configuration for monitors.  I am assuming that is the
> Dell PE R510, which contains 8 x 2 TB drives.  Why does the monitor need
> so much space if it's "10GB of storage space per daemon instance"?
>
> -Bryant
>
>


