From: David Brown <david.brown@hesbynett.no>
To: linux-raid@vger.kernel.org
Subject: Re: Question regarding --backup-file
Date: Mon, 02 May 2011 21:05:28 +0200	[thread overview]
Message-ID: <ipmv9o$kcv$1@dough.gmane.org> (raw)
In-Reply-To: <001501cc08ef$f60cc620$e2265260$@priv.hu>

On 02/05/11 19:39, Peter Kovari wrote:
>>> Hi all,
>>>
>>> I understand that a change from RAID5 to RAID6 by adding a single disk -
>>> i.e. keeping the number of data disks - requires a backup file throughout
>>> the whole reshape process.  For a larger, multi-TB array this means
>>> millions of writes to the backup file, which - if I'm correct - means
>>> millions of writes to the same physical sectors of the disk that holds
>>> the backup file.  Is this not problematic?  How many write operations
>>> can a typical drive tolerate nowadays? (on the same sectors)
>
>> Lots, where Lots >= 1 and Lots < infinity.
>
>> I've never seen rotating media specify any form of limitation to writes.
>> Have you?
>
> No, that's why I'm asking.
>
> IMHO, in typical usage, write cycle counts on a given sector may not be
> that high, even on a database server.  I doubt it ever goes over a few
> hundred thousand during the life cycle of the hard disk.  On the other
> hand, a single reshape on a larger array can trigger tens of millions of
> write cycles on certain sectors.  Sectors do fail eventually, so I'm
> wondering whether the "no limit" is truly no limit, or manufacturers just
> won't state this info because in "normal" usage customers will never
> reach that limit.
>
> Btw, I'm sure SSDs are not meant to take such pressure.
>
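
For concreteness, the reshape being discussed would look something like
the following - a sketch only, with hypothetical device names, assuming a
4-disk RAID5 /dev/md0 being converted to a 5-disk RAID6 after adding one
new disk:

   mdadm /dev/md0 --add /dev/sde
   mdadm --grow /dev/md0 --level=6 --raid-devices=5 \
         --backup-file=/root/md0-reshape.backup

Since the number of data disks doesn't change, there is no spare room on
the member disks to reshape into, so each stripe section is copied to the
backup file before being rewritten in place - which is where the steady
stream of writes to the same small file comes from.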

Good large SSDs can be written to continuously for /years/ before they 
wear out.  It can be a different matter for smaller and cheaper drives, 
but it's not an issue for good disks now.  Suppose you have a 128 GB 
disk with SLC flash.  Each sector is good for roughly 100,000 
erase/re-write cycles (or more, if you are kind to the disk and keep it 
cool).  Since wear-leveling spreads the writes around the disk, you can 
write 100,000 x 128 GB of data - at 200 MB/s continuously, that would 
take 2 years without a pause for breath.  Even if the wear-leveling 
isn't perfect, and even if you substitute a cheaper MLC SSD (with 10,000 
cycles), the effort of being the backup file for a RAID reshape is not 
going to be a challenge.
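
To spell out that arithmetic: 100,000 cycles x 128 GB = 12.8 PB of total 
writes.  At 200 MB/s, that is 12.8e15 / 200e6 = 64 million seconds, or 
roughly 740 days - a little over two years of non-stop writing.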

Also, some SSDs have supercapacitor-backed RAM caches - writes can be 
safely buffered before being committed.  If you overwrite the same sector 
fast enough, it will never actually be written to the flash (until the 
final write, of course).

For hard disks, sectors do wear out, but they tolerate a lot of writes 
first.  And the hard disk firmware will relocate a worn-out sector 
transparently.



Thread overview: 4+ messages
2011-05-02 15:13 Question regarding --backup-file Peter Kovari
2011-05-02 15:34 ` Brad Campbell
2011-05-02 17:39   ` Peter Kovari
2011-05-02 19:05     ` David Brown [this message]
