linux-raid.vger.kernel.org archive mirror
From: Phillip Susi <psusi@cfl.rr.com>
To: Dragon <Sunghost@gmx.de>
Cc: linux-raid@vger.kernel.org
Subject: Mail etiquette
Date: Tue, 21 Jun 2011 10:46:25 -0400
Message-ID: <4E00AEC1.9010107@cfl.rr.com>
In-Reply-To: <20110618203954.129920@gmx.net>

You keep creating new threads with no subject when you reply.  This is 
rude and annoying.  When replying to a message you should use your 
mail client's reply function, rather than starting a new message and pasting 
in your quotations.  The reply function preserves the subject line and adds the 
proper In-Reply-To: header so that the message is properly sorted into 
the existing thread.  If you have been using your mail client's reply 
function instead of composing a new message, then your mail client is 
broken, so please use another one.
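
For example, a correct reply to this very message would carry headers along 
these lines (the Message-ID is this message's own; References is the 
companion header most clients add for threading):

  Subject: Re: Mail etiquette
  In-Reply-To: <4E00AEC1.9010107@cfl.rr.com>
  References: <4E00AEC1.9010107@cfl.rr.com>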

On 6/18/2011 4:39 PM, Dragon wrote:
> Monitor your background reshape with "cat /proc/mdstat".
>
> When the reshape is complete, the extra disk will be marked "spare".
>
> Then you can use "mdadm --remove".
> -->After a few days the reshape was done and I took the disk out of the raid ->  many thanks for that
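
For anyone following along, that sequence as a minimal sketch (the array and 
disk names here are placeholders, not taken from this thread):

  cat /proc/mdstat                  # watch the reshape progress
  mdadm --detail /dev/md0           # when done, the freed disk shows as "spare"
  mdadm /dev/md0 --remove /dev/sdX  # then detach the spare from the array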
>
>> At this point I think I'll take the disk out of the raid, because I need the
>> space on the disk.
>
> Understood, but you are living on the edge.  You have no backup, and only one drive
> of redundancy.  If one of your drives does fail, the odds of losing the whole array
> while replacing it are significant.  Your Samsung drives claim a non-recoverable read
> error rate of 1 per 1x10^15 bits.  Your eleven data disks contain 1.32x10^14 bits,
> all of which must be read during rebuild.  That means a _13%_ chance of total
> failure while replacing a failed drive.
>
> I hope your 16T of data is not terribly important to you, or is otherwise replaceable.
> -->  Nice calculation; where did you get the data from?
> -->  Most of it is important; I will look for a better solution.
>
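
For the record, the arithmetic behind that figure, assuming eleven 1.5TB data 
disks (which is what 1.32x10^14 bits works out to): the chance of at least one 
unrecoverable read error during a full rebuild is 
1 - (1 - 1x10^-15)^(1.32x10^14), which is about 1 - e^(-0.132), or roughly 
12.4%, rounded up to 13% above.  A quick way to check it from a shell:

  awk 'BEGIN { bits = 11 * 1.5e12 * 8;        # 1.32e14 bits read on rebuild
               p = 1 - exp(-bits / 1e15);     # URE rate: 1 per 1e15 bits
               printf "%.1f%%\n", 100 * p }'  # prints 12.4%
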
>> I need another piece of advice from you. The computer is currently built with
>> 13 disks, I will get more data in the coming months, and the limit of power
>> supply connectors has been reached, so I am looking for another solution. One
>> possibility is to build a better computer with more SATA and SAS connectors
>> and add further RAID controller cards. Another idea is to build a kind of
>> cluster or DFS with two and later 3, 4... computers. I read something about
>> gluster.org. Do you have a tip for me, or experience with this?
>
> Unfortunately, no.  Although I skirt the edges in my engineering work, I'm primarily
> an end-user.  Both personal and work projects have relatively modest needs.  From
> the engineering side, I do recommend you spend extra on power supplies & UPS.
>
> Phil
> -->  And then, ext4's max size is currently 16TB; what should I do?
> -->  For an end-user you have a lot of knowledge about swraid ;)
> sunny


Thread overview: 3 messages
2011-06-18 20:39 (unknown) Dragon
2011-06-19 18:40 ` Phil Turmel
2011-06-21 14:46 ` Phillip Susi [this message]
