linux-raid.vger.kernel.org archive mirror
* SSD + Rust as raid1
@ 2013-05-30 21:23 Dominic Raferd
  2013-05-31  0:22 ` Mathias Burén
  2013-05-31  7:30 ` Roman Mamedov
  0 siblings, 2 replies; 18+ messages in thread
From: Dominic Raferd @ 2013-05-30 21:23 UTC (permalink / raw)
  To: linux-raid

I want to (re)set up my raid1 array with 1 SSD partition 
(/dev/sda2) and 1 'rust' drive partition (/dev/sdb1), both 240GB. The SSD 
will also have a very small ext2 partition mounted as /mnt/sda1 (to hold 
the write-intent bitmap file).

To get the full speed of the SSD for reads and to maximise speed for 
writes I plan to use these settings:

mdadm -C /dev/md0 --level=raid1 -n 2 /dev/sda2 --write-mostly 
--write-behind=16384 --bitmap=/mnt/sda1/write-intent-bitmap.file 
--bitmap-chunk=256M /dev/sdb1

Will this work, and is it optimal? I am not too bothered about the 
write-intent bitmap file, but I don't think it is optional, hence I set a 
large chunk size - could or should it be even larger (or smaller)? 
Similarly I have tried to set a high write-behind to speed up writes as 
much as possible - is 16384 the maximum allowed?

My system has mdadm 3.2.3 and kernel 3.2.26.

Any suggestions gratefully received.

Dominic
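[Editor's note: for copy-paste, the same plan as a single invocation. This is an untested sketch, not a verified recipe: it assumes /mnt/sda1 is already mounted when the array is created, and relies on mdadm applying --write-mostly/--write-behind to the devices listed after them.]

```shell
# Untested sketch of the planned array; devices as in the post above.
# --write-mostly and --write-behind apply to the devices that follow them,
# so only the HDD (/dev/sdb1) is marked write-mostly.
mdadm --create /dev/md0 --level=raid1 --raid-devices=2 \
      --bitmap=/mnt/sda1/write-intent-bitmap.file --bitmap-chunk=256M \
      /dev/sda2 \
      --write-mostly --write-behind=16384 /dev/sdb1
```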

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-05-30 21:23 SSD + Rust as raid1 Dominic Raferd
@ 2013-05-31  0:22 ` Mathias Burén
  2013-05-31  7:02   ` Dominic Raferd
  2013-05-31  7:30 ` Roman Mamedov
  1 sibling, 1 reply; 18+ messages in thread
From: Mathias Burén @ 2013-05-31  0:22 UTC (permalink / raw)
  To: Dominic Raferd; +Cc: Linux-RAID

On 30 May 2013 22:23, Dominic Raferd <dominic@timedicer.co.uk> wrote:
> I am wanting to (re)set up my raid1 array with 1 SSD partition (/dev/sda2)
> and 1 'rust' drive partition (/dev/sdb1) , both 240GB. SSD will also have a
> very small ext2 partition mounted as /mnt/sda1 (to hold the write-intent
> bitmap file).
>
> To get the full speed of the SSD for reads and to maximise speed for writes
> I plan to use these settings:
>
> mdadm -C /dev/md0 --level=raid1 -n 2 /dev/sda2 --write-mostly
> --write-behind=16384 --bitmap=/mnt/sda1/write-intent-bitmap.file
> --bitmap-chunk=256M dev/sdb1
>
> Will this work and be optimal? I am not too bothered about the write-intent
> bitmap file but it is not optional I think, hence I set a large chunk size,
> could or should it be even larger (or smaller)? Similarly I have tried to
> set a high write-behind to speed up writes as much as possible - is 16384
> the maximum allowed?
>
> My system has mdadm 3.2.3 and kernel 3.2.26.
>
> Any suggestions gratefully received.
>
> Dominic
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


Quick question: Why not a 2 drive "RAID10"?

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-05-31  0:22 ` Mathias Burén
@ 2013-05-31  7:02   ` Dominic Raferd
  0 siblings, 0 replies; 18+ messages in thread
From: Dominic Raferd @ 2013-05-31  7:02 UTC (permalink / raw)
  To: Linux-RAID

On 31/05/13 01:22, Mathias Burén wrote:
> On 30 May 2013 22:23, Dominic Raferd <dominic@timedicer.co.uk> wrote:
>> I am wanting to (re)set up my raid1 array with 1 SSD partition (/dev/sda2)
>> and 1 'rust' drive partition (/dev/sdb1) , both 240GB. SSD will also have a
>> very small ext2 partition mounted as /mnt/sda1 (to hold the write-intent
>> bitmap file).
>>
>> To get the full speed of the SSD for reads and to maximise speed for writes
>> I plan to use these settings:
>>
>> mdadm -C /dev/md0 --level=raid1 -n 2 /dev/sda2 --write-mostly
>> --write-behind=16384 --bitmap=/mnt/sda1/write-intent-bitmap.file
>> --bitmap-chunk=256M dev/sdb1
>>
>> Will this work and be optimal? I am not too bothered about the write-intent
>> bitmap file but it is not optional I think, hence I set a large chunk size,
>> could or should it be even larger (or smaller)? Similarly I have tried to
>> set a high write-behind to speed up writes as much as possible - is 16384
>> the maximum allowed?
>>
>> My system has mdadm 3.2.3 and kernel 3.2.26.
>>
>> Any suggestions gratefully received.
>>
>> Dominic
>>
>
> Quick question: Why not a 2 drive "RAID10"?
>

Thanks for your reply Mathias. I am not familiar enough with RAID10 to 
know if or how this would work, nor am I sure if it would work OK with 
my OS (Devil-Linux), whereas I have used mdadm with RAID1 for many years 
with conventional drives. My idea is to upgrade one of my existing 
drives to SSD and to get all the read-speed advantages of SSD and as 
much write-speed improvement as possible. If RAID10 can do this better 
than RAID1 then please tell me more!
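[Editor's note: for the record, md's RAID10 does work with only two devices. A sketch of what a two-drive RAID10 in the "far 2" layout might look like with the same partitions - untested, and whether it actually helps a mixed SSD+HDD pair is exactly the open question in this thread:]

```shell
# Two-device RAID10, "far 2" layout: data is striped across both devices,
# with a second rotated copy in the far half of each. Untested sketch;
# device names taken from the thread.
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
      /dev/sda2 /dev/sdb1
```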

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-05-30 21:23 SSD + Rust as raid1 Dominic Raferd
  2013-05-31  0:22 ` Mathias Burén
@ 2013-05-31  7:30 ` Roman Mamedov
  2013-05-31  7:47   ` Dominic Raferd
  2013-06-01  0:25   ` Stan Hoeppner
  1 sibling, 2 replies; 18+ messages in thread
From: Roman Mamedov @ 2013-05-31  7:30 UTC (permalink / raw)
  To: Dominic Raferd; +Cc: linux-raid

[-- Attachment #1: Type: text/plain, Size: 719 bytes --]

On Thu, 30 May 2013 22:23:59 +0100
Dominic Raferd <dominic@timedicer.co.uk> wrote:

> I am wanting to (re)set up my raid1 array with 1 SSD partition 
> (/dev/sda2) and 1 'rust' drive partition (/dev/sdb1) , both 240GB. SSD 

Sorry, but what is a "rust drive partition"? At first I thought it was some
software layer like 'bcache' or some other non-regular block device. Then I
think I got it, but can you please leave your own petty cutesy terminology
at home when asking a serious question on a serious mailing list?

And yes, I have used an HDD+SSD RAID1 for a very long time; to get some
improvement (though it's debatable how much) you can also mark the HDD as
"write-mostly" (see man md).
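[Editor's note: if I remember the md sysfs interface correctly, write-mostly can also be toggled on a member of an already-running array, without recreating it. The paths below assume the array is md0 and the HDD member is sdb1; treat this as an untested sketch.]

```shell
# Mark the HDD member write-mostly at runtime (untested sketch):
echo writemostly > /sys/block/md0/md/dev-sdb1/state
# ...and clear the flag again:
echo -writemostly > /sys/block/md0/md/dev-sdb1/state
# Inspect the current per-member state:
cat /sys/block/md0/md/dev-sdb1/state
```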

-- 
With respect,
Roman

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-05-31  7:30 ` Roman Mamedov
@ 2013-05-31  7:47   ` Dominic Raferd
  2013-05-31  7:54     ` Roman Mamedov
  2013-06-01  0:25   ` Stan Hoeppner
  1 sibling, 1 reply; 18+ messages in thread
From: Dominic Raferd @ 2013-05-31  7:47 UTC (permalink / raw)
  To: linux-raid

On 31/05/2013 08:30, Roman Mamedov wrote:
> On Thu, 30 May 2013 22:23:59 +0100
> Dominic Raferd <dominic@timedicer.co.uk> wrote:
>
>> I am wanting to (re)set up my raid1 array with 1 SSD partition
>> (/dev/sda2) and 1 'rust' drive partition (/dev/sdb1) , both 240GB. SSD
> Sorry but what is "Rust drive partition"? At first I thought it's some
> software layer like 'bcache' or some other non-regular block device. Then I
> think I got it, but can you please leave at home your own petty cutesy
> terminology when asking a serious question on a serious mailing list?

Sorry no offence intended! I just mean a conventional spinning platter 
hard disk drive.

> And yes I use a HDD+SSD RAID1 for a very long time, to get some improvement
> (though debatable how much) you can also mark the HDD as "write-mostly"
> (see man md).

This is my idea too (see my OP), but I am concerned about optimisation 
(--write-behind, --bitmap and --bitmap-chunk settings) especially for 
writes.


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-05-31  7:47   ` Dominic Raferd
@ 2013-05-31  7:54     ` Roman Mamedov
  2013-05-31  8:52       ` Dominic Raferd
  0 siblings, 1 reply; 18+ messages in thread
From: Roman Mamedov @ 2013-05-31  7:54 UTC (permalink / raw)
  To: Dominic Raferd; +Cc: linux-raid

[-- Attachment #1: Type: text/plain, Size: 523 bytes --]

On Fri, 31 May 2013 08:47:00 +0100
Dominic Raferd <dominic@timedicer.co.uk> wrote:

> This is my idea too (see my OP), but I am concerned about optimisation 
> (--write-behind, --bitmap and --bitmap-chunk settings) especially for 
> writes.

> --write-behind=16384

I think this will not work, you will have to use 16383.

> --bitmap=/mnt/sda1/write-intent-bitmap.file 

Save yourself lots of maintenance headache, just use --bitmap=internal

> --bitmap-chunk=256M

Looks OK.
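[Editor's note: if memory serves, the 16383 limit comes from md's bitmap counters: each is 16 bits wide with the top two bits reserved as flags, leaving a 14-bit count of outstanding write-behind requests. That reading of the bitmap code is an assumption, but the arithmetic matches Roman's figure:]

```shell
# 16-bit bitmap counter, 2 flag bits => 14 usable bits
# (my reading of the md bitmap code; an assumption, not gospel)
echo $(( (1 << 14) - 1 ))   # prints 16383
```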

-- 
With respect,
Roman

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-05-31  7:54     ` Roman Mamedov
@ 2013-05-31  8:52       ` Dominic Raferd
  2013-06-04  8:13         ` Dominic Raferd
  0 siblings, 1 reply; 18+ messages in thread
From: Dominic Raferd @ 2013-05-31  8:52 UTC (permalink / raw)
  To: linux-raid

On 31/05/2013 08:54, Roman Mamedov wrote:
> On Fri, 31 May 2013 08:47:00 +0100
> Dominic Raferd <dominic@timedicer.co.uk> wrote:
>
>> This is my idea too (see my OP), but I am concerned about optimisation
>> (--write-behind, --bitmap and --bitmap-chunk settings) especially for
>> writes.
>> --write-behind=16384
> I think this will not work, you will have to use 16383.

Oh, OK, so 16383 is the maximum then?

>> --bitmap=/mnt/sda1/write-intent-bitmap.file
> Save yourself lots of maintenance headache, just use --bitmap=internal
>
>> --bitmap-chunk=256M
> Looks OK.
>

Thanks Roman, but the problem with using --bitmap=internal is that, as 
Neil Brown posted here on another topic a while ago, it requires a 
synchronous write to both devices, and the use-case for which write-behind 
was developed involved an external bitmap. Hence my plan to use an external 
bitmap file on a fast (SSD-based) separate partition - it minimises any 
slow-down caused by having to maintain the write-intent bitmap file.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-05-31  7:30 ` Roman Mamedov
  2013-05-31  7:47   ` Dominic Raferd
@ 2013-06-01  0:25   ` Stan Hoeppner
  2013-06-01  1:19     ` Keith Keller
  2013-06-01  1:30     ` Sam Bingner
  1 sibling, 2 replies; 18+ messages in thread
From: Stan Hoeppner @ 2013-06-01  0:25 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: Dominic Raferd, linux-raid

On 5/31/2013 2:30 AM, Roman Mamedov wrote:

> Sorry but what is "Rust drive partition"? At first I thought it's some
> software layer like 'bcache' or some other non-regular block device. Then I
> think I got it, but can you please leave at home your own petty cutesy
> terminology when asking a serious question on a serious mailing list?

Mechanical disk drive platters have always had a ferrous film (iron
oxide - rust) coating, which is the magnetic recording surface.  If you
look at the old 12" drives the platters are red, because back then they
literally used iron oxide as the platter coating.  Today platters are a
shiny chrome colour because the coatings contain other elements as well
and are much denser.

Before SSDs one could say "HDD" and everyone knew what this meant.
Today "HDD" can mean either mechanical hard disk drive or SSD. To easily
differentiate, it has become commonplace to refer to mechanical hard
drives simply as "rust" because it's quicker to type and universally
understood.  If this is the first time you're seeing this term that's a
bit surprising.

-- 
Stan


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-06-01  0:25   ` Stan Hoeppner
@ 2013-06-01  1:19     ` Keith Keller
  2013-06-01  4:37       ` Stan Hoeppner
  2013-06-07 22:16       ` Bill Davidsen
  2013-06-01  1:30     ` Sam Bingner
  1 sibling, 2 replies; 18+ messages in thread
From: Keith Keller @ 2013-06-01  1:19 UTC (permalink / raw)
  To: linux-raid

On 2013-06-01, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>
> Today "HDD" can mean either mechanical hard disk drive or SSD. To easily
> differentiate, it has become commonplace to refer to mechanical hard
> drives simply as "rust" because it's quicker to type and universally
> understood.  If this is the first time you're seeing this term that's a
> bit surprising.

I think this is the only list I've ever seen "rust" used in this way.

--keith

-- 
kkeller@wombat.san-francisco.ca.us



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-06-01  0:25   ` Stan Hoeppner
  2013-06-01  1:19     ` Keith Keller
@ 2013-06-01  1:30     ` Sam Bingner
  1 sibling, 0 replies; 18+ messages in thread
From: Sam Bingner @ 2013-06-01  1:30 UTC (permalink / raw)
  To: <stan@hardwarefreak.com>
  Cc: Roman Mamedov, Dominic Raferd, <linux-raid@vger.kernel.org>

On May 31, 2013, at 2:25 PM, Stan Hoeppner <stan@hardwarefreak.com> wrote:

> On 5/31/2013 2:30 AM, Roman Mamedov wrote:
> 
>> Sorry but what is "Rust drive partition"? At first I thought it's some
>> software layer like 'bcache' or some other non-regular block device. Then I
>> think I got it, but can you please leave at home your own petty cutesy
>> terminology when asking a serious question on a serious mailing list?
> 
> Mechanical disk drive platters have always possessed a ferrous film
> (Iron oxide--rust) coating layer which is the magnetic recording
> surface.  If you look at the old 12" drives the platters are red because
> back then they literally used iron oxide as the platter coating.  Today
> platters are a shiny chrome color because the coatings contain other
> elements as well and the coating is much more dense.
> 
> Before SSDs one could say "HDD" and everyone knew what this meant.
> Today "HDD" can mean either mechanical hard disk drive or SSD. To easily
> differentiate, it has become commonplace to refer to mechanical hard
> drives simply as "rust" because it's quicker to type and universally
> understood.  If this is the first time you're seeing this term that's a
> bit surprising.
> 

It would seem rather misleading to call an SSD a Hard Disk Drive since it has no disks... it should be quite understandable to say HDD+SSD.

I have also never heard of this "rust" terminology; it seems rather counterintuitive to create new names for things that already have distinctive names.  A Hard Drive can refer to either, yes... but not a Hard DISK Drive.

disk  
/disk/

Noun
	• A flat, thin, round object: "heavy metal disks the size of hockey pucks"; "onion soup ladled over a disk of cheese".
	• An information storage device for a computer in the shape of a round flat plate that can be rotated to give access to all parts of the disk.

v/r
Sam


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-06-01  1:19     ` Keith Keller
@ 2013-06-01  4:37       ` Stan Hoeppner
  2013-06-07 22:16       ` Bill Davidsen
  1 sibling, 0 replies; 18+ messages in thread
From: Stan Hoeppner @ 2013-06-01  4:37 UTC (permalink / raw)
  To: Keith Keller; +Cc: linux-raid

On 5/31/2013 8:19 PM, Keith Keller wrote:
> On 2013-06-01, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>>
>> Today "HDD" can mean either mechanical hard disk drive or SSD. To easily
>> differentiate, it has become commonplace to refer to mechanical hard
>> drives simply as "rust" because it's quicker to type and universally
>> understood.  If this is the first time you're seeing this term that's a
>> bit surprising.
> 
> I think this is the only list I've ever seen "rust" used in this way.

Then I'd guess you're not subbed to other storage specific lists, such
as linux-scsi, linux-ide, xfs, etc.  Obviously not everyone uses this
jargon, but it's becoming much more common.  I've been using it myself
for quite some time.  Some people use "SRD" as well meaning "spinning
rusty disk".  You'll commonly see "array of rust", "rusty RAID6", "SRD
array", etc.  "rust" is here to stay.  Get used to it.

-- 
Stan


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-05-31  8:52       ` Dominic Raferd
@ 2013-06-04  8:13         ` Dominic Raferd
  2013-06-07 22:23           ` Bill Davidsen
  0 siblings, 1 reply; 18+ messages in thread
From: Dominic Raferd @ 2013-06-04  8:13 UTC (permalink / raw)
  To: linux-raid


On 31/05/2013 09:52, Dominic Raferd wrote:
> On 31/05/2013 08:54, Roman Mamedov wrote:
>> On Fri, 31 May 2013 08:47:00 +0100
>> Dominic Raferd <dominic@timedicer.co.uk> wrote:
>>
>>> This is my idea too (see my OP), but I am concerned about optimisation
>>> (--write-behind, --bitmap and --bitmap-chunk settings) especially for
>>> writes.
>>> --write-behind=16384
>> I think this will not work, you will have to use 16383.
> Oh, OK, so 16383 is the maximum then?
>
>>> --bitmap=/mnt/sda1/write-intent-bitmap.file
>> Save yourself lots of maintenance headache, just use --bitmap=internal
>>
>>> --bitmap-chunk=256M
>> Looks OK.
>>
> Thanks Roman, but the problem with using --bitmap=internal is that, as
> Neil Brown posted here on another topic a while ago, this requires a
> synch write to both devices, and the use-case for which write-behind was
> developed involved an external bitmap. Hence my plan to use external
> bitmap file on a fast (SSD-based) separate partition - minimises any
> slow-down caused by having to maintain the write-intent bitmap file.
>

I would be very grateful if someone could confirm whether, if I set up 
RAID1 and specify --write-mostly --write-behind=n for one of the drives, 
the maximum 'n' is 16383, and also whether it is permitted in this 
configuration to set --bitmap=none and thus avoid the overhead of 
maintaining a write-intent bitmap file. (My thinking is that for my needs 
the extra safety provided by the bitmap file is overkill, and the slowing 
effect (and life-shortening of my SSD) might be more significant.) If I 
have to have a bitmap file, it is presumably faster to have a larger 
chunk size - is the maximum permitted 256M?
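[Editor's note: one data point from the man pages, as I read them: write-behind is only honoured while a write-intent bitmap is present, so --bitmap=none would most likely disable write-behind as well, rather than merely drop the bitmap overhead. Also, if the bitmap later proves unwanted, it can be removed (and re-added) on a live array rather than only at creation time; an untested sketch:]

```shell
# Untested sketch: mdadm --grow can remove or (re)create a bitmap in place
# on an assembled array, without recreating it.
mdadm --grow /dev/md0 --bitmap=none
mdadm --grow /dev/md0 --bitmap=internal --bitmap-chunk=256M
```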

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-06-01  1:19     ` Keith Keller
  2013-06-01  4:37       ` Stan Hoeppner
@ 2013-06-07 22:16       ` Bill Davidsen
  1 sibling, 0 replies; 18+ messages in thread
From: Bill Davidsen @ 2013-06-07 22:16 UTC (permalink / raw)
  To: Keith Keller; +Cc: linux-raid

Keith Keller wrote:
> On 2013-06-01, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>> Today "HDD" can mean either mechanical hard disk drive or SSD. To easily
>> differentiate, it has become commonplace to refer to mechanical hard
>> drives simply as "rust" because it's quicker to type and universally
>> understood.  If this is the first time you're seeing this term that's a
>> bit surprising.
> I think this is the only list I've ever seen "rust" used in this way.
>
>

I have seen "rotating rust" used elsewhere, not particularly on a list.

-- 
Bill Davidsen <davidsen@tmr.com>
   We are not out of the woods yet, but we know the direction and have
taken the first step. The steps are many, but finite in number, and if
we persevere we will reach our destination.  -me, 2010



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-06-04  8:13         ` Dominic Raferd
@ 2013-06-07 22:23           ` Bill Davidsen
  2013-06-08 10:22             ` Roman Mamedov
  0 siblings, 1 reply; 18+ messages in thread
From: Bill Davidsen @ 2013-06-07 22:23 UTC (permalink / raw)
  To: Dominic Raferd; +Cc: linux-raid

Dominic Raferd wrote:
>
> On 31/05/2013 09:52, Dominic Raferd wrote:
>> On 31/05/2013 08:54, Roman Mamedov wrote:
>>> On Fri, 31 May 2013 08:47:00 +0100
>>> Dominic Raferd <dominic@timedicer.co.uk> wrote:
>>>
>>>> This is my idea too (see my OP), but I am concerned about optimisation
>>>> (--write-behind, --bitmap and --bitmap-chunk settings) especially for
>>>> writes.
>>>> --write-behind=16384
>>> I think this will not work, you will have to use 16383.
>> Oh, OK, so 16383 is the maximum then?
>>
>>>> --bitmap=/mnt/sda1/write-intent-bitmap.file
>>> Save yourself lots of maintenance headache, just use --bitmap=internal
>>>
>>>> --bitmap-chunk=256M
>>> Looks OK.
>>>
>> Thanks Roman, but the problem with using --bitmap=internal is that, as
>> Neil Brown posted here on another topic a while ago, this requires a
>> synch write to both devices, and the use-case for which write-behind was
>> developed involved an external bitmap. Hence my plan to use external
>> bitmap file on a fast (SSD-based) separate partition - minimises any
>> slow-down caused by having to maintain the write-intent bitmap file.
>>
>
> I would be very grateful if someone could confirm whether, if I set up RAID1 
> and with one of the drives specify --write-mostly --write-behind=n, that 
> maximum 'n' is 16383, and also whether it is permitted in this configuration 
> to set --bitmap=none and thus avoid the overhead of maintaining a write-intent 
> bitmap file? (My thinking is that for my  needs the extra safety provided by 
> the bitmap file is overkill and the slowing effect (and life-shortening of my 
> SSD) might be more significant.) If I have to have a bitmap file, it is 
> presumably faster to have a larger chunk size, is the maximum permitted 256M?

If you want performance, I think too big a chunk size will hurt you. And as I 
understand the way repair on a RAID1 is done, without the bitmap you have a 
chance of the older data on the HDD being used to "correct" the likely better 
data on the SSD.

-- 
Bill Davidsen <davidsen@tmr.com>
   We are not out of the woods yet, but we know the direction and have
taken the first step. The steps are many, but finite in number, and if
we persevere we will reach our destination.  -me, 2010



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-06-07 22:23           ` Bill Davidsen
@ 2013-06-08 10:22             ` Roman Mamedov
  2013-06-08 17:11               ` Bill Davidsen
  0 siblings, 1 reply; 18+ messages in thread
From: Roman Mamedov @ 2013-06-08 10:22 UTC (permalink / raw)
  To: Bill Davidsen; +Cc: Dominic Raferd, linux-raid

[-- Attachment #1: Type: text/plain, Size: 491 bytes --]

On Fri, 07 Jun 2013 18:23:56 -0400
Bill Davidsen <davidsen@tmr.com> wrote:

> > If I have to have a bitmap file, it is 
> > presumably faster to have a larger chunk size
> 
> If you want performance I think a too big chunk size will hurt you.

You are confusing the general array chunk size with the bitmap chunk size. For
the latter, a larger size will universally give better performance (up to some
value, where it will perhaps start to flatten out).
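[Editor's note: a quick way to see why a large bitmap chunk is cheap here: the bitmap keeps one counter per chunk, so for the 240GB array in question a 256M chunk means under a thousand chunks to track.]

```shell
array_mb=$(( 240 * 1024 ))   # 240 GB array, in MB
chunk_mb=256                 # --bitmap-chunk=256M
# Number of bitmap chunks (rounding up):
echo $(( (array_mb + chunk_mb - 1) / chunk_mb ))   # prints 960
```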

-- 
With respect,
Roman

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-06-08 10:22             ` Roman Mamedov
@ 2013-06-08 17:11               ` Bill Davidsen
  2013-06-08 21:58                 ` Roberto Spadim
  0 siblings, 1 reply; 18+ messages in thread
From: Bill Davidsen @ 2013-06-08 17:11 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: linux-raid

Roman Mamedov wrote:
> On Fri, 07 Jun 2013 18:23:56 -0400
> Bill Davidsen <davidsen@tmr.com> wrote:
>
>>> If I have to have a bitmap file, it is
>>> presumably faster to have a larger chunk size
>> If you want performance I think a too big chunk size will hurt you.
> You are confusing the general array chunk sizes, and the bitmap chunk size. In
> the latter, a larger size will universally give better performance (up to some
> value, where it will perhaps start to flat out).
>
If that were the case, then a bitmap would not be used: you could just have a 
single bit and make the chunk size the size of everything. Since that would 
pretty much make the bitmap worthless, it suggests to me that there is in fact a 
limit to the optimal chunk size, short of what the implementation allows. I 
think the optimal size is determined by usage, but there is probably some way to 
measure it. I find that when an "if some is good, more is better" approach 
seems to fit a problem, I like to check whether taking it to the limit would 
make sense. The Latin, IIRC, is 'reductio ad absurdum', and it has saved me 
from occasional wretched excess.

-- 
Bill Davidsen <davidsen@tmr.com>
   We are not out of the woods yet, but we know the direction and have
taken the first step. The steps are many, but finite in number, and if
we persevere we will reach our destination.  -me, 2010



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-06-08 17:11               ` Bill Davidsen
@ 2013-06-08 21:58                 ` Roberto Spadim
  2013-06-10  8:57                   ` Dominic Raferd
  0 siblings, 1 reply; 18+ messages in thread
From: Roberto Spadim @ 2013-06-08 21:58 UTC (permalink / raw)
  To: Bill Davidsen; +Cc: Roman Mamedov, Linux-RAID

check this, maybe it could help to improve performance (it's not a raid1
solution, it's a cache solution, in kernel 3.9):

http://kernelnewbies.org/Linux_3.9#head-3dbc54b9324d21f06f55299b0a30d6cb06403529

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/plain/Documentation/device-mapper/cache.txt?id=c6b4fcbad044e6fffcc75bba160e720eb8d67d17
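[Editor's note: from a skim of the cache.txt document linked above, the dm-cache target is assembled with dmsetup roughly as below. Every device path here is a placeholder and none of this has been tested; it only illustrates the shape of the table line.]

```shell
ORIGIN=/dev/sdb1   # slow HDD partition (placeholder)
FAST=/dev/sda2     # SSD partition used as the cache (placeholder)
META=/dev/sda3     # small SSD partition for cache metadata (placeholder)
SIZE=$(blockdev --getsz "$ORIGIN")   # origin size in 512-byte sectors

# Table format per Documentation/device-mapper/cache.txt:
# cache <metadata dev> <cache dev> <origin dev> <block size>
#       <#feature args> [features] <policy> <#policy args>
dmsetup create cached-disk --table \
  "0 $SIZE cache $META $FAST $ORIGIN 512 1 writeback default 0"
```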

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: SSD + Rust as raid1
  2013-06-08 21:58                 ` Roberto Spadim
@ 2013-06-10  8:57                   ` Dominic Raferd
  0 siblings, 0 replies; 18+ messages in thread
From: Dominic Raferd @ 2013-06-10  8:57 UTC (permalink / raw)
  To: Linux-RAID

On 08/06/2013 22:58, Roberto Spadim wrote:
> check this, maybe could help to improve performace (it's not a raid1
> solution, it's a cache solution, in kernel 3.9):
>
> http://kernelnewbies.org/Linux_3.9#head-3dbc54b9324d21f06f55299b0a30d6cb06403529
>
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/plain/Documentation/device-mapper/cache.txt?id=c6b4fcbad044e6fffcc75bba160e720eb8d67d17
>

Thanks to all for your input. I have abandoned the attempt to make this 
configuration work - I hit various error messages (latterly: 'Failed to 
RUN_ARRAY /dev/md0 invalid argument'), so I am now running without RAID. 
I will look into the cache solution, but more likely I will just rsync 
the SSD to the conventional platter disk at regular intervals. (I also 
run rdiff-backup daily to another machine.)

Dominic
-- 
*TimeDicer* <http://www.timedicer.co.uk>: Free File Recovery from Whenever

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2013-06-10  8:57 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-05-30 21:23 SSD + Rust as raid1 Dominic Raferd
2013-05-31  0:22 ` Mathias Burén
2013-05-31  7:02   ` Dominic Raferd
2013-05-31  7:30 ` Roman Mamedov
2013-05-31  7:47   ` Dominic Raferd
2013-05-31  7:54     ` Roman Mamedov
2013-05-31  8:52       ` Dominic Raferd
2013-06-04  8:13         ` Dominic Raferd
2013-06-07 22:23           ` Bill Davidsen
2013-06-08 10:22             ` Roman Mamedov
2013-06-08 17:11               ` Bill Davidsen
2013-06-08 21:58                 ` Roberto Spadim
2013-06-10  8:57                   ` Dominic Raferd
2013-06-01  0:25   ` Stan Hoeppner
2013-06-01  1:19     ` Keith Keller
2013-06-01  4:37       ` Stan Hoeppner
2013-06-07 22:16       ` Bill Davidsen
2013-06-01  1:30     ` Sam Bingner
