* Is partition alignment needed for RAID partitions ?
From: Pieter De Wit @ 2013-12-29 21:04 UTC
To: linux-raid
Hi List,
I am taking advantage of the holiday season by testing some NAS devices
for work. This now allows me to rebuild my home storage.
My current setup is as follows:
/dev/sda 250 gig drive (/boot is mounted here since I had no other use
for the drive)
/dev/sdb 2TB drive
/dev/sdc 2TB drive
/dev/sdb and sdc have two MD devices on them, a RAID1 and a RAID0 device.
/dev/sdb and sdc were taken out of a Mac server, so the partition tables
are a mess; I want to rebuild them.
All of these house LVM PVs.
The RAID1 device contains all my "critical" data ( root device, logs,
photos etc) while the RAID0 device contains all data I have other
sources for.
So my question is, do I need to align the partitions for the raid devices ?
These are desktop grade drives, but for the RAID0 device I saw quite low
throughput (15meg/sec moving data to the NAS via gig connection). I just
created a RAID1 device between /dev/sda and an iSCSI target on the NAS,
and it synced at 48meg/sec, moving data at 30meg/sec - double that of
the RAID0 device. I would have expected the RAID0 device to easily get
up to the 60meg/sec mark ?
Cheers,
Pieter
* Re: Is partition alignment needed for RAID partitions ?
From: Stan Hoeppner @ 2013-12-30 6:56 UTC
To: Pieter De Wit, linux-raid
On 12/29/2013 3:04 PM, Pieter De Wit wrote:
> Hi List,
>
> I am taking advantage of the holiday season by testing some NAS devices
> for work. This now allows me to rebuild my home storage.
>
> My current setup is as follow:
>
> /dev/sda 250 gig drive (/boot is mounted here since I had no other use
> for the drive)
> /dev/sdb 2TB drive
> /dev/sdc 2TB drive
>
> /dev/sdb and sdc has two MD devices on them, a RAID1 and a RAID0 device.
> /dev/sdb and sdc was taken out of a MAC server, so the partition table
> is a mess, I want to rebuild them.
>
> All of these house LVM PV's
>
> The RAID1 device contains all my "critical" data ( root device, logs,
> photos etc) while the RAID0 device contains all data I have other
> sources for.
>
> So my question is, do I need to align the partitions for the raid devices ?
You're asking a question without providing sufficient background
information, and in a manner which suggests the answers need to be
lengthy and in "teaching" mode instead of short and concise. In other
words, I cannot provide a yes/no answer to your yes/no question.
Are these 2TB Advanced Format drives? If so your partitions need to
align to 4KiB boundaries, otherwise you'll have RMW within each drive
which can cut your write throughput by 30-50%. So it's a good habit to
always align to 4KiB regardless, as most new drives are Advanced Format.
LBA sectors are 512B in length. It's a good habit to start your first
partition at LBA sector 2048, which is the 1MiB boundary. By adopting
this standard your first partition will be aligned whether the disk
drive is a standard 512B/sector unit, a 4096B/sector AF drive, or an SSD.
So your first partition starts at sector 2048. You should size it, and
all subsequent partitions, such that it is evenly divisible by your
chunk size. Your first array, a RAID1, has no chunk size so this is not
relevant. However, you may as well follow the standard anyway for
consistency. Your second array is a RAID0, and let's assume you'll use a
512KB chunk. If you want two 930GiB (999GB) partitions on each disk
drive, resulting in a ~1TB RAID1 and ~2TB RAID0, then you'll want
something like this, using 512B sectors, if my math is correct:
            Start sector   End sector
/dev/sdb1           2048   1950355455
/dev/sdb2     1950355456   3900708863
/dev/sdc1           2048   1950355455
/dev/sdc2     1950355456   3900708863
Partition length is 1904642 chunks. This is an aligned partition layout
and should work for your drives. I intentionally undersized these
partitions a bit so we wouldn't spill over on a given drive model. Not
all 2TB drives have exactly or over 2,000,000,000,000 bytes capacity.
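For reference, a layout like that could be created with something along
these lines (illustrative only - verify the sector numbers against your
actual drives before writing anything, and note mkpart syntax varies a
little between parted versions):

# parted -s /dev/sdb mklabel gpt
# parted -s /dev/sdb mkpart md0 2048s 1950355455s
# parted -s /dev/sdb mkpart md1 1950355456s 3900708863s
# parted -s /dev/sdb set 1 raid on
# parted -s /dev/sdb set 2 raid on

Repeat for /dev/sdc, then build the arrays on the partition pairs.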
You're comparing apples to oranges to grapes below, and your description
lacks any level of technical detail. How are we supposed to analyze this?
> These are desktop grade drives, but for the RAID0 device I saw quite low
> throughput (15meg/sec moving data to the NAS via gig connection). I just
"15meg/sec moving data" means what, a bulk file transfer from a local
filesystem to a remote filesystem? What types of files? Lots of small
ones? Of course throughput will be low. Is the local filesystem
fragmented? Even slower.
> created a RAID1 device between /dev/sda and an iSCSI target on the NAS,
> and it synced at 48meg/sec, moving data at 30meg/sec - double that of
> the RAID0 device.
This is block device data movement. There is no filesystem overhead, no
fragmentation causing excess seeks, and no NFS/CIFS overhead on either
end. Of course it will be faster.
> I would have expected the RAID0 device to easily get
> up to the 60meg/sec mark ?
As the source disk of a bulk file copy over NFS/CIFS? As a point of
reference, I have a workstation that maxes 50MB/s FTP and only 24MB/s
CIFS to/from a server. Both hosts have far in excess of 100MB/s disk
throughput. The 50MB/s limitation is due to the cheap Realtek mobo NIC,
and the 24MB/s is a Samba limit. I've spent dozens of hours attempting
to tweak Samba to greater throughput but it simply isn't capable on that
machine.
Your throughput issues are with your network, not your RAID. Learn and
use FIO to see what your RAID/disks can do. For now a really simple
test is to time cat of a large file and pipe to /dev/null. Divide the
file size by the elapsed time. Or simply do a large read with dd. This
will be much more informative than "moving data to a NAS", where your
throughput is network limited, not disk.
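For example, something along these lines (the path is a placeholder;
drop the page cache first so you're measuring the disks, not RAM):

# echo 3 > /proc/sys/vm/drop_caches
# time cat /path/to/large/file > /dev/null
# dd if=/dev/md1 of=/dev/null bs=1M count=4096 iflag=direct

dd reports its own throughput; for the cat test divide the file size by
the elapsed time.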
--
Stan
* Re: Is partition alignment needed for RAID partitions ?
From: Pieter De Wit @ 2013-12-30 8:32 UTC
To: stan, linux-raid
Hi Stan,
Thanks for the long email (I didn't know about Advanced Format, for
one) - please see my answers inline.
On 30/12/2013 19:56, Stan Hoeppner wrote:
> On 12/29/2013 3:04 PM, Pieter De Wit wrote:
>> <snip>
>> So my question is, do I need to align the partitions for the raid devices ?
> <snip>
> Are these 2TB Advanced Format drives? If so your partitions need to
> align to 4KiB boundaries, otherwise you'll have RMW within each drive
> which can cut your write throughput by 30-50%.
Yes - these drives are, parted printed:
Model: ATA WDC WD20EARX-008 (scsi)
Disk /dev/sdb: 3907029168s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Number Start End Size File system Name Flags
1 2048s 500000767s 499998720s raid
2 500000768s 3907028991s 3407028224s raid
> <snip>
So given your comments then, the start of partition 1 is correct. The
start of partition 2 is also correct (not sure if this is needed), but
the size of partition 2 is incorrect, it should be 3406823424s ?
>
> You're comparing apples to oranges to grapes below, and your description
> lacks any level of technical detail. How are we supposed to analyze this?
>
>> These are desktop grade drives, but for the RAID0 device I saw quite low
>> throughput (15meg/sec moving data to the NAS via gig connection). I just
> "15meg/sec moving data" means what, a bulk file transfer from a local
> filesystem to a remote filesystem? What types of files? Lots of small
> ones? Of course throughput will be low. Is the local filesystem
> fragmented? Even slower.
It's all done with pvmove, which moves 4meg chunks.
>
>> created a RAID1 device between /dev/sda and an iSCSI target on the NAS,
>> and it synced at 48meg/sec, moving data at 30meg/sec - double that of
>> the RAID0 device.
> This is block device data movement. There is no filesystem overhead, no
> fragmentation causing excess seeks, and no NFS/CIFS overhead on either
> end. Of course it will be faster.
It was all done with pvmove :)
>
>> I would have expected the RAID0 device to easily get
>> up to the 60meg/sec mark ?
> As the source disk of a bulk file copy over NFS/CIFS? As a point of
> reference, I have a workstation that maxes 50MB/s FTP and only 24MB/s
> CIFS to/from a server. Both hosts have far in excess of 100MB/s disk
> throughput. The 50MB/s limitation is due to the cheap Realtek mobo NIC,
> and the 24MB/s is a Samba limit. I've spent dozens of hours attempting
> to tweak Samba to greater throughput but it simply isn't capable on that
> machine.
>
> Your throughput issues are with your network, not your RAID. Learn and
> use FIO to see what your RAID/disks can do. For now a really simple
> test is to time cat of a large file and pipe to /dev/null. Divide the
> file size by the elapsed time. Or simply do a large read with dd. This
> will be much more informative than "moving data to a NAS", where your
> throughput is network limited, not disk.
>
The system is using a server grade NIC; I will run a dd/network test
shortly after the copy is done. (I am shifting all the data back to the
NAS, in case I mucked up the partitions :) ) I do recall that this
system was able to fill a gig pipe...
Thanks,
Pieter
* Re: Is partition alignment needed for RAID partitions ?
From: Stan Hoeppner @ 2013-12-30 10:49 UTC
To: Pieter De Wit, linux-raid
On 12/30/2013 2:32 AM, Pieter De Wit wrote:
> Hi Stan,
>
> Thanks for the long email (I didn't know about advance formatting for
> one) - please see my answers inline.
>
> On 30/12/2013 19:56, Stan Hoeppner wrote:
>> On 12/29/2013 3:04 PM, Pieter De Wit wrote:
>>> <snip>
>>> So my question is, do I need to align the partitions for the raid
>>> devices ?
>> <snip>
>> Are these 2TB Advanced Format drives? If so your partitions need to
>> align to 4KiB boundaries, otherwise you'll have RMW within each drive
>> which can cut your write throughput by 30-50%.
> Yes - these drives are, parted printed:
>
> Model: ATA WDC WD20EARX-008 (scsi)
> Disk /dev/sdb: 3907029168s
> Sector size (logical/physical): 512B/4096B
> Partition Table: gpt
>
> Number Start End Size File system Name Flags
> 1 2048s 500000767s 499998720s raid
> 2 500000768s 3907028991s 3407028224s raid
>
>> <snip>
> So given your comments then, the start of partition 1 is correct. The
> start of partition 2 is also correct (not sure if this is needed), but
> the size of partition 2 is incorrect, it should be 3406823424s ?
Size is incorrect in what way? If your RAID0 chunk is 512KiB, then
3407028224 sectors is 3327176 chunks, evenly divisible, so this
partition is fully aligned. Whether the capacity is correct is
something only you can determine. Partition 2 is 1.587 TiB.
>> You're comparing apples to oranges to grapes below, and your description
>> lacks any level of technical detail. How are we supposed to analyze
>> this?
>>
>>> These are desktop grade drives, but for the RAID0 device I saw quite low
>>> throughput (15meg/sec moving data to the NAS via gig connection). I just
>> "15meg/sec moving data" means what, a bulk file transfer from a local
>> filesystem to a remote filesystem? What types of files? Lots of small
>> ones? Of course throughput will be low. Is the local filesystem
>> fragmented? Even slower.
> It's all done with pvmove, which moves 4meg chunks.
I'm not intending to be a jerk, but this is a technical mailing list. You
need to be precise so others understand EXACTLY what you're stating.
Your choice of words above suggests you -first- used NFS or CIFS and it
was slow at 15 'meg'/sec (please use MB or MiB appropriately). "NAS" is
Network Attached Storage. The two protocols nearly exclusively used to
communicate with a NAS device are NFS and CIFS.
What you typed may make perfect sense to YOU, but to your audience it is
thoroughly misleading.
>>> created a RAID1 device between /dev/sda and an iSCSI target on the NAS,
>>> and it synced at 48meg/sec, moving data at 30meg/sec - double that of
>>> the RAID0 device.
>> This is block device data movement. There is no filesystem overhead, no
>> fragmentation causing excess seeks, and no NFS/CIFS overhead on either
>> end. Of course it will be faster.
> It was all done with pvmove :)
-Second- you explicitly state here that you then created a RAID1 between
sda and an iSCSI target and achieved 3x the throughput, suggesting that
this is different than the case above.
Again, what you typed may make perfect sense to YOU, but to your
audience it is misleading, because you didn't clearly state the
configuration the first statement describes.
So all of this was done over iSCSI, correct?
Without further data I can only make a wild ass guess as to why the
RAID0 device was slower than the single disk during this -single
operation- you described that involves a network. You didn't post
throughput numbers for the RAID0 doing a local operation so there's
nothing to compare to. It could be due to a dozen different things. A few?
1. Concurrent disk access at the host
2. Concurrent disk access/load at the NAS box
3. One/both of the host EARX drives is flaky causing high latency
4. Flaky GbE HBA or switch port
etc
Show your partition table for sdc. Even if the partitions on it are not
aligned, reads shouldn't be adversely affected by it. Show
$ mdadm --detail
for the RAID0 array. md itself, especially in RAID0 personality, is
simply not going to be the -cause- of low performance. The problem lies
somewhere else. Given the track record of Western Digital's Green
series of drives I'm leaning toward that cause. Post output from
$ smartctl -A /dev/sdb
$ smartctl -A /dev/sdc
>>> I would have expected the RAID0 device to easily get
>>> up to the 60meg/sec mark ?
>> As the source disk of a bulk file copy over NFS/CIFS? As a point of
>> reference, I have a workstation that maxes 50MB/s FTP and only 24MB/s
>> CIFS to/from a server. Both hosts have far in excess of 100MB/s disk
>> throughput. The 50MB/s limitation is due to the cheap Realtek mobo NIC,
>> and the 24MB/s is a Samba limit. I've spent dozens of hours attempting
>> to tweak Samba to greater throughput but it simply isn't capable on that
>> machine.
>>
>> Your throughput issues are with your network, not your RAID. Learn and
>> use FIO to see what your RAID/disks can do. For now a really simple
>> test is to time cat of a large file and pipe to /dev/null. Divide the
>> file size by the elapsed time. Or simply do a large read with dd. This
>> will be much more informative than "moving data to a NAS", where your
>> throughput is network limited, not disk.
>>
> The system is using a server grade NIC, I will run a dd/network test
> shortly after the copy is done. (I am shifting all the data back to the
> NAS, incase I mucked up the partitions :) ), I do recall that this
> system was able to fill a gig pipe...
Now that you've made it clear the first scenario was over iSCSI same as
the 2nd scenario, and not NFS/CIFS, I doubt the TCP stack is the
problem. Assume the network is fine for now and concentrate on the disk
drives in the host. That seems the most likely cause of the problem
at this point.
BTW, you didn't state the throughput of the RAID1 device on sdb/sdc.
The RAID0 device is on the same disks, yes? RAID0 was 15 MB/s. What
was the RAID1?
--
Stan
* Re: Is partition alignment needed for RAID partitions ?
From: Pieter De Wit @ 2013-12-30 12:10 UTC
To: stan, linux-raid
Hi Stan,
> Size is incorrect in what way? If your RAID0 chunk is 512KiB, then
> 3407028224 sectors is 3327176 chunks, evenly divisible, so this
> partition is fully aligned. Whether the capacity is correct is
> something only you can determine. Partition 2 is 1.587 TiB.
Would you mind showing me the calc you did to get there?
3407028224/3327176=1024 - I don't understand how the 512KiB came into play.
> I'm not intending to be jerk, but this is a technical mailing list.
Understood - here is the complete layout:
/dev/sda - 250 gig disk
/dev/sdb - 2TB disk
/dev/sdc - 2TB disk
/dev/sdd - 256gig iSCSI target on QNAP NAS (block allocated, not thin
prov'ed)
/dev/sde - 2TB iSCSI target on QNAP NAS (block allocated, not thin prov'ed)
> Show your partition table for sdc. Even if the partitions on it are not
> aligned, reads shouldn't be adversely affected by it. Show
>
> $ mdadm --detail
# parted /dev/sdb unit s print
Model: ATA WDC WD20EARX-008 (scsi)
Disk /dev/sdb: 3907029168s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Number Start End Size File system Name Flags
1 2048s 500000767s 499998720s raid
2 500000768s 3907028991s 3407028224s raid
# parted /dev/sdc unit s print
Model: ATA WDC WD20EARX-008 (scsi)
Disk /dev/sdc: 3907029168s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Number Start End Size File system Name Flags
1 2048s 500000767s 499998720s raid
2 500000768s 3907028991s 3407028224s raid
# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Dec 30 12:33:43 2013
Raid Level : raid1
Array Size : 249868096 (238.29 GiB 255.86 GB)
Used Dev Size : 249868096 (238.29 GiB 255.86 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Dec 31 01:01:42 2013
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : srv01:0 (local to host srv01)
UUID : 45d71ef8:9a1115cb:8ed0c4d9:95d56df4
Events : 25
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
# mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Mon Dec 30 12:33:56 2013
Raid Level : raid0
Array Size : 3407027200 (3249.19 GiB 3488.80 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Dec 30 12:33:56 2013
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Name : srv01:1 (local to host srv01)
UUID : abfdcb5e:804fa119:9c4a8d88:fa2f08a7
Events : 0
Number Major Minor RaidDevice State
0 8 18 0 active sync /dev/sdb2
1 8 34 1 active sync /dev/sdc2
>
> for the RAID0 array. md itself, especially in RAID0 personality, is
> simply not going to be the -cause- of low performance. The problem lay
> somewhere else. Given the track record of Western Digital's Green
> series of drives I'm leaning toward that cause. Post output from
>
> $ smartctl -A /dev/sdb
> $ smartctl -A /dev/sdc
# smartctl -A /dev/sdb
smartctl 6.2 2013-04-20 r3812 [i686-linux-3.11.0-14-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME           FLAG    VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate      0x002f  200   200   051    Pre-fail Always   -           0
  3 Spin_Up_Time             0x0027  217   186   021    Pre-fail Always   -           4141
  4 Start_Stop_Count         0x0032  100   100   000    Old_age  Always   -           102
  5 Reallocated_Sector_Ct    0x0033  200   200   140    Pre-fail Always   -           0
  7 Seek_Error_Rate          0x002e  200   200   000    Old_age  Always   -           0
  9 Power_On_Hours           0x0032  089   089   000    Old_age  Always   -           8263
 10 Spin_Retry_Count         0x0032  100   100   000    Old_age  Always   -           0
 11 Calibration_Retry_Count  0x0032  100   100   000    Old_age  Always   -           0
 12 Power_Cycle_Count        0x0032  100   100   000    Old_age  Always   -           102
192 Power-Off_Retract_Count  0x0032  200   200   000    Old_age  Always   -           88
193 Load_Cycle_Count         0x0032  155   155   000    Old_age  Always   -           135985
194 Temperature_Celsius      0x0022  121   108   000    Old_age  Always   -           29
196 Reallocated_Event_Count  0x0032  200   200   000    Old_age  Always   -           0
197 Current_Pending_Sector   0x0032  200   200   000    Old_age  Always   -           0
198 Offline_Uncorrectable    0x0030  200   200   000    Old_age  Offline  -           0
199 UDMA_CRC_Error_Count     0x0032  200   200   000    Old_age  Always   -           0
200 Multi_Zone_Error_Rate    0x0008  200   200   000    Old_age  Offline  -           0
# smartctl -A /dev/sdc
smartctl 6.2 2013-04-20 r3812 [i686-linux-3.11.0-14-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME           FLAG    VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate      0x002f  200   200   051    Pre-fail Always   -           0
  3 Spin_Up_Time             0x0027  217   186   021    Pre-fail Always   -           4141
  4 Start_Stop_Count         0x0032  100   100   000    Old_age  Always   -           100
  5 Reallocated_Sector_Ct    0x0033  200   200   140    Pre-fail Always   -           0
  7 Seek_Error_Rate          0x002e  200   200   000    Old_age  Always   -           0
  9 Power_On_Hours           0x0032  089   089   000    Old_age  Always   -           8263
 10 Spin_Retry_Count         0x0032  100   253   000    Old_age  Always   -           0
 11 Calibration_Retry_Count  0x0032  100   253   000    Old_age  Always   -           0
 12 Power_Cycle_Count        0x0032  100   100   000    Old_age  Always   -           100
192 Power-Off_Retract_Count  0x0032  200   200   000    Old_age  Always   -           86
193 Load_Cycle_Count         0x0032  156   156   000    Old_age  Always   -           134976
194 Temperature_Celsius      0x0022  122   109   000    Old_age  Always   -           28
196 Reallocated_Event_Count  0x0032  200   200   000    Old_age  Always   -           0
197 Current_Pending_Sector   0x0032  200   200   000    Old_age  Always   -           0
198 Offline_Uncorrectable    0x0030  200   200   000    Old_age  Offline  -           0
199 UDMA_CRC_Error_Count     0x0032  200   200   000    Old_age  Always   -           0
200 Multi_Zone_Error_Rate    0x0008  200   200   000    Old_age  Offline  -           0
>>>> I would have expected the RAID0 device to easily get
>>>> up to the 60meg/sec mark ?
>>> As the source disk of a bulk file copy over NFS/CIFS? As a point of
>>> reference, I have a workstation that maxes 50MB/s FTP and only 24MB/s
>>> CIFS to/from a server. Both hosts have far in excess of 100MB/s disk
>>> throughput. The 50MB/s limitation is due to the cheap Realtek mobo NIC,
>>> and the 24MB/s is a Samba limit. I've spent dozens of hours attempting
>>> to tweak Samba to greater throughput but it simply isn't capable on that
>>> machine.
>>>
>>> Your throughput issues are with your network, not your RAID. Learn and
>>> use FIO to see what your RAID/disks can do. For now a really simple
>>> test is to time cat of a large file and pipe to /dev/null. Divide the
>>> file size by the elapsed time. Or simply do a large read with dd. This
>>> will be much more informative than "moving data to a NAS", where your
>>> throughput is network limited, not disk.
>>>
>> The system is using a server grade NIC, I will run a dd/network test
>> shortly after the copy is done. (I am shifting all the data back to the
>> NAS, incase I mucked up the partitions :) ), I do recall that this
>> system was able to fill a gig pipe...
> Now that you've made it clear the first scenario was over iSCSI same as
> the 2nd scenario, and not NFS/CIFS, I doubt the TCP stack is the
> problem. Assume the network is fine for now and concentrate on the disk
> drives in the host. That's seems the most likely cause of the problem
> at this point.
>
> BTW, you didn't state the throughput of the RAID1 device on sdb/sdc.
> The RAID0 device is on the same disks, yes? RAID0 was 15 MB/s. What
> was the RAID1?
>
ATM, the data is still moving back to the NAS (from the RAID1 device).
According to iostat, this is reading at +30000 kB/s (all of my numbers
are from iostat -x)
Also, there is no other disk usage in the system. All the data is
currently on the NAS (except system "stuff" for a quiet firewall)
I just spotted another thing, the two drives are on the same SATA
controller, from rescan-scsi-bus:
Scanning for device 3 0 0 0 ...
OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: WDC WD20EARX-008 Rev: 51.0
Type: Direct-Access ANSI SCSI revision: 05
Scanning for device 3 0 1 0 ...
OLD: Host: scsi3 Channel: 00 Id: 01 Lun: 00
Vendor: ATA Model: WDC WD20EARX-008 Rev: 51.0
Type: Direct-Access ANSI SCSI revision: 05
Would it be better to move these apart ? I remember IDE used to have
this issue, but I also recall SATA "fixed" that.
Thanks again,
Pieter
* Re: Is partition alignment needed for RAID partitions ?
From: Stan Hoeppner @ 2013-12-30 17:10 UTC
To: Pieter De Wit, linux-raid
On 12/30/2013 6:10 AM, Pieter De Wit wrote:
> Hi Stan,
>> Size is incorrect in what way? If your RAID0 chunk is 512KiB, then
>> 3407028224 sectors is 3327176 chunks, evenly divisible, so this
>> partition is fully aligned. Whether the capacity is correct is
>> something only you can determine. Partition 2 is 1.587 TiB.
> Would you mind showing me the calc you did to get there,
> 3407028224/3327176=1024,
(3407028224 sectors * 512 bytes per sector) / 524288 (chunk bytes) =
3327176 chunks
> I don't understand how the 512kiB came into play ?
> # mdadm --detail /dev/md1
...
> Chunk Size : 512K
One kilobyte (K,KB) is 2^10, or 1024 bytes. 512*1024 = 524288 bytes
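You can sanity check the alignment from the shell if you like - a 512KiB
chunk is 1024 of the 512B sectors, so the start offset and length of the
partition should both divide evenly by 1024:

$ echo $(( 500000768 % 1024 )) $(( 3407028224 % 1024 ))
0 0
$ echo $(( 3407028224 / 1024 ))
3327176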
>> I'm not intending to be jerk, but this is a technical mailing list.
> Understood - here is the complete layout:
>
> /dev/sda - 250 gig disk
> /dev/sdb - 2TB disk
> /dev/sdc - 2TB disk
> /dev/sdd - 256gig iSCSI target on QNAP NAS (block allocated, not thin
> prov'ed)
> /dev/sde - 2TB iSCSI target on QNAP NAS (block allocated, not thin prov'ed)
>> Show your partition table for sdc. Even if the partitions on it are not
>> aligned, reads shouldn't be adversely affected by it. Show
>>
>> $ mdadm --detail
> # parted /dev/sdb unit s print
> Model: ATA WDC WD20EARX-008 (scsi)
> Disk /dev/sdb: 3907029168s
> Sector size (logical/physical): 512B/4096B
> Partition Table: gpt
>
> Number Start End Size File system Name Flags
> 1 2048s 500000767s 499998720s raid
> 2 500000768s 3907028991s 3407028224s raid
>
> # parted /dev/sdc unit s print
> Model: ATA WDC WD20EARX-008 (scsi)
> Disk /dev/sdc: 3907029168s
> Sector size (logical/physical): 512B/4096B
> Partition Table: gpt
>
> Number Start End Size File system Name Flags
> 1 2048s 500000767s 499998720s raid
> 2 500000768s 3907028991s 3407028224s raid
These partitions are all aligned and the same sizes. No problems here.
>
> # mdadm --detail /dev/md0
> /dev/md0:
> Version : 1.2
> Creation Time : Mon Dec 30 12:33:43 2013
> Raid Level : raid1
> Array Size : 249868096 (238.29 GiB 255.86 GB)
> Used Dev Size : 249868096 (238.29 GiB 255.86 GB)
> Raid Devices : 2
> Total Devices : 2
> Persistence : Superblock is persistent
>
> Update Time : Tue Dec 31 01:01:42 2013
> State : clean
> Active Devices : 2
> Working Devices : 2
> Failed Devices : 0
> Spare Devices : 0
>
> Name : srv01:0 (local to host srv01)
> UUID : 45d71ef8:9a1115cb:8ed0c4d9:95d56df4
> Events : 25
>
> Number Major Minor RaidDevice State
> 0 8 17 0 active sync /dev/sdb1
> 1 8 33 1 active sync /dev/sdc1
>
> # mdadm --detail /dev/md1
> /dev/md1:
> Version : 1.2
> Creation Time : Mon Dec 30 12:33:56 2013
> Raid Level : raid0
> Array Size : 3407027200 (3249.19 GiB 3488.80 GB)
> Raid Devices : 2
> Total Devices : 2
> Persistence : Superblock is persistent
>
> Update Time : Mon Dec 30 12:33:56 2013
> State : clean
> Active Devices : 2
> Working Devices : 2
> Failed Devices : 0
> Spare Devices : 0
>
> Chunk Size : 512K
>
> Name : srv01:1 (local to host srv01)
> UUID : abfdcb5e:804fa119:9c4a8d88:fa2f08a7
> Events : 0
>
> Number Major Minor RaidDevice State
> 0 8 18 0 active sync /dev/sdb2
> 1 8 34 1 active sync /dev/sdc2
>
>>
>> for the RAID0 array. md itself, especially in RAID0 personality, is
>> simply not going to be the -cause- of low performance. The problem lay
>> somewhere else. Given the track record of Western Digital's Green
>> series of drives I'm leaning toward that cause. Post output from
>>
>> $ smartctl -A /dev/sdb
>> $ smartctl -A /dev/sdc
> # smartctl -A /dev/sdb
> <snip>
>
> # smartctl -A /dev/sdc
> <snip>
smartctl data indicates there are no problems with the drives.
>>>>> I would have expected the RAID0 device to easily get
>>>>> up to the 60meg/sec mark ?
>>>> As the source disk of a bulk file copy over NFS/CIFS? As a point of
>>>> reference, I have a workstation that maxes 50MB/s FTP and only 24MB/s
>>>> CIFS to/from a server. Both hosts have far in excess of 100MB/s disk
>>>> throughput. The 50MB/s limitation is due to the cheap Realtek mobo
>>>> NIC,
>>>> and the 24MB/s is a Samba limit. I've spent dozens of hours attempting
>>>> to tweak Samba to greater throughput but it simply isn't capable on
>>>> that
>>>> machine.
>>>>
>>>> Your throughput issues are with your network, not your RAID. Learn and
>>>> use FIO to see what your RAID/disks can do. For now a really simple
>>>> test is to time cat of a large file and pipe to /dev/null. Divide the
>>>> file size by the elapsed time. Or simply do a large read with dd.
>>>> This
>>>> will be much more informative than "moving data to a NAS", where your
>>>> throughput is network limited, not disk.
>>>>
>>> The system is using a server grade NIC, I will run a dd/network test
>>> shortly after the copy is done. (I am shifting all the data back to the
>>> NAS, incase I mucked up the partitions :) ), I do recall that this
>>> system was able to fill a gig pipe...
>> Now that you've made it clear the first scenario was over iSCSI same as
>> the 2nd scenario, and not NFS/CIFS, I doubt the TCP stack is the
>> problem. Assume the network is fine for now and concentrate on the disk
>> drives in the host. That's seems the most likely cause of the problem
>> at this point.
>>
>> BTW, you didn't state the throughput of the RAID1 device on sdb/sdc.
>> The RAID0 device is on the same disks, yes? RAID0 was 15 MB/s. What
>> was the RAID1?
>>
> ATM, the data is still moving back to the NAS (from the RAID1 device).
> According to iostat, this is reading at +30000 kB/s (all of my numbers
> are from iostat -x)
Please show the exact iostat command line you are using and the output.
> Also, there is no other disk usage in the system. All the data is
> currently on the NAS (except system "stuff" for a quite firewall)
>
> I just spotted another thing, the two drives are on the same SATA
> controller, from rescan-scsi-bus:
>
> Scanning for device 3 0 0 0 ...
> OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 00
> Vendor: ATA Model: WDC WD20EARX-008 Rev: 51.0
> Type: Direct-Access ANSI SCSI revision: 05
> Scanning for device 3 0 1 0 ...
> OLD: Host: scsi3 Channel: 00 Id: 01 Lun: 00
> Vendor: ATA Model: WDC WD20EARX-008 Rev: 51.0
> Type: Direct-Access ANSI SCSI revision: 05
>
> Would it be better to move these apart ? I remember IDE used to have
> this issue, but I also recall SATA "fixed" that.
This isn't the problem. Even if both drives were connected via a plain
old 33MHz 132MB/s PCI SATA card you'd still be capable of 120MB/s
throughput, 60MB/s per drive.
> Thanks again,
You're welcome. Eventually you get to the bottom of this.
--
Stan
* Re: Is partition alignment needed for RAID partitions ?
From: Pieter De Wit @ 2013-12-30 18:32 UTC
To: stan, linux-raid
Hi Stan,
> (3407028224 sectors * 512 bytes per sector) / 524288 (chunk bytes) =
>
> 3327176 chunks
Right - more for clarity, these are the 512 byte sectors, not the 4k
ones (otherwise I would have had a 12 TB drive :) )
> Please show the exact iostat command line you are using and the output.
iostat -x 1
>> Also, there is no other disk usage in the system. All the data is
>> currently on the NAS (except system "stuff" for a quite firewall)
>>
>> I just spotted another thing, the two drives are on the same SATA
>> controller, from rescan-scsi-bus:
>>
>> Scanning for device 3 0 0 0 ...
>> OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 00
>> Vendor: ATA Model: WDC WD20EARX-008 Rev: 51.0
>> Type: Direct-Access ANSI SCSI revision: 05
>> Scanning for device 3 0 1 0 ...
>> OLD: Host: scsi3 Channel: 00 Id: 01 Lun: 00
>> Vendor: ATA Model: WDC WD20EARX-008 Rev: 51.0
>> Type: Direct-Access ANSI SCSI revision: 05
>>
>> Would it be better to move these apart ? I remember IDE used to have
>> this issue, but I also recall SATA "fixed" that.
> This isn't the problem. Even if both drives were connected via a plain
> old 33MHz 132MB/s PCI SATA card you'd still be capable of 120MB/s
> throughput, 60MB/s per drive.
>
>> Thanks again,
> You're welcome. Eventually you get to the bottom of this.
And the email :) I now have the drives with no data on them, so I can
even run write tests. I am going to start with the usual "dd" tests - any
others that you would like to see?
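I was planning something along these lines - a sequential write then
read straight against the RAID0 block device (destructive, so only while
the arrays hold no data):

# dd if=/dev/zero of=/dev/md1 bs=1M count=8192 oflag=direct conv=fdatasync
# dd if=/dev/md1 of=/dev/null bs=1M count=8192 iflag=direct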
Cheers,
Pieter
* Re: Is partition alignment needed for RAID partitions ?
From: Pieter De Wit @ 2013-12-31 1:05 UTC
To: stan, linux-raid
Hi Stan,
> You're welcome. Eventually you get to the bottom of this.
>
I think this moment has arrived :)
I did some fio tests from all levels, raw disk, raw md device, raw LV,
filesystem (ext4) on top of LV. All read and write tests came back with
+- 120MB/s (mostly 128, but let's tone it down just for stats)
I did a read,write and mixed test against the NAS:
The read and write tests came back at 60MB/s, the mixed came in at 11MB/s.
I ran a network test from my desktop to the server, using iperf and it
got up to 80MB/s, add to that the sync traffic of 10-15MB and you have a
pretty full gig pipe. The desktop nic is a crappy onboard one, so I am
quite happy with those stats. Just "FYI" - they are all linked with a
Cisco 3750G, the NAS has 2xgig ports in an etherchannel
This leaves me only two conclusions:
1) pvmove isn't as fast as I think - it might be due to some checksums
or some other process (I know it creates a small mirror of a PE and then
breaks it - rinse repeat for all PEs)
2) That is just the limit for this system, "it is what it is"
Either way - I think I have taken up enough of the list's time, so thank
you very much for the in-depth answers! I still have access to the NAS
until at least 13/01/2014 if you want to do more checks/tests.
Have a good 2014!
Cheers,
Pieter
* Re: Is partition alignment needed for RAID partitions ?
From: Stan Hoeppner @ 2013-12-31 14:21 UTC
To: Pieter De Wit, linux-raid
On 12/30/2013 12:32 PM, Pieter De Wit wrote:
> Hi Stan,
>
>> (3407028224 sectors * 512 bytes per sector) / 524288 (chunk bytes) =
>>
>> 3327176 chunks
> Right - more for clarity, these are the 512 byte sectors, not the 4k
> ones (otherwise I would have had a 12 TB drive :) )
Linux works only with 512B sectors at this time, thus all the
partitioning tools use 512B sectors. At some point in the future we may
see native 4K/sector devices and native Linux support for those.
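If you want to see both values for a drive, blockdev reports the logical
size (what the tools address) and the physical size (what the platters
actually use), e.g. for your sdb:

# blockdev --getss --getpbsz /dev/sdb
512
4096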
>> Please show the exact iostat command line you are using and the output.
> iostat -x 1
You're polling once every second so the 15 MB/s isn't due to averaging.
>>> Also, there is no other disk usage in the system. All the data is
>>> currently on the NAS (except system "stuff" for a quite firewall)
>>>
>>> I just spotted another thing, the two drives are on the same SATA
>>> controller, from rescan-scsi-bus:
>>>
>>> Scanning for device 3 0 0 0 ...
>>> OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 00
>>> Vendor: ATA Model: WDC WD20EARX-008 Rev: 51.0
>>> Type: Direct-Access ANSI SCSI revision: 05
>>> Scanning for device 3 0 1 0 ...
>>> OLD: Host: scsi3 Channel: 00 Id: 01 Lun: 00
>>> Vendor: ATA Model: WDC WD20EARX-008 Rev: 51.0
>>> Type: Direct-Access ANSI SCSI revision: 05
>>>
>>> Would it be better to move these apart ? I remember IDE used to have
>>> this issue, but I also recall SATA "fixed" that.
>>
>> This isn't the problem. Even if both drives were connected via a plain
>> old 33MHz 132MB/s PCI SATA card you'd still be capable of 120MB/s
>> throughput, 60MB/s per drive.
>>
>>> Thanks again,
>> You're welcome. Eventually you get to the bottom of this.
>>
> And the email :) I now have the drive with no data on them, so I can
> even run write tests. I am going to start with the usual "dd" tests, any
> other that you would like to see ?
dd will give you a rough idea of the single streaming throughput of the
array, which will be lower than its maximum throughput due to
serialization and latency. If you want to see the max the array can do
use FIO as it can do both parallel and async IO.
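Something along these lines is a reasonable starting point, assuming fio
and the libaio engine are available (read-only, so safe on a live array;
tune numjobs/iodepth to taste):

# fio --name=seqread --filename=/dev/md1 --rw=read --bs=1M \
      --direct=1 --ioengine=libaio --iodepth=16 --numjobs=2 \
      --runtime=30 --time_based --group_reporting

Swap --rw=read for --rw=write only while the array holds no data you
care about.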
--
Stan
* Re: Is partition alignment needed for RAID partitions ?
From: Stan Hoeppner @ 2013-12-31 14:38 UTC
To: Pieter De Wit, linux-raid
On 12/30/2013 7:05 PM, Pieter De Wit wrote:
> Hi Stan,
>> You're welcome. Eventually you get to the bottom of this.
>>
> I think this moment has arrived :)
>
> I did some fio tests from all levels, raw disk, raw md device, raw LV,
> filesystem (ext4) on top of LV. All read and write tests came back with
> +- 120MB/s (mostly 128, but let's tone it down just for stats)
>
> I did a read,write and mixed test against the NAS:
>
> The read and write tests came back at 60MB/s, the mixed came in at 11MB/s.
It seems clear the QNAP device is your weak link.
> I ran a network test from my desktop to the server, using iperf and it
> got up to 80MB/s, add to that the sync traffic of 10-15MB and you have a
> pretty full gig pipe. The desktop nic is a crappy onboard one, so I am
> quite happy with those stats. Just "FYI" - they are all linked with a
> Cisco 3750G, the NAS has 2xgig ports in an etherchannel
Etherchannel is proprietary to Cisco. I'd be surprised if the QNAP
supports it. 802.3ad maybe, but not Etherchannel. In any case it's
not relevant here as your pvmove operations are a single TCP stream.
Single streams always transmit over a single link, unless you have two
Linux hosts with 2 links each doing mode balance-rr with source based
routing.
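You can see this for yourself with iperf if you're curious - a single
stream will never exceed one physical link, while parallel streams may
spread across the bundle (the address is a placeholder):

$ iperf -c <qnap-ip> -t 30
$ iperf -c <qnap-ip> -t 30 -P 4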
> This leaves me only two conclusions:
>
> 1) pvmove isn't as fast as I think - it might be due to some checksums
> or some other process (I know it creates a small mirror of a PE and then
> breaks it - rinse repeat for all PEs)
Try it between different disks within the host. It should be much quicker.
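For example, moving one LV between the two local md PVs (assuming they
are in the same VG, which pvmove requires; <some-lv> and the device
names are placeholders for your layout):

# pvmove -i 10 -n <some-lv> /dev/md0 /dev/md1

That takes the QNAP and the network out of the picture entirely.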
> 2) That is just the limit for this system, "it is what it is"
I don't think the limit is your host but the QNAP.
> Either way - I think I have taken up enough of the list time, so thank
> you very much for the in-depth answers ! I still have access to the NAS
> until at least 13/01/2014 if you want to do more checks/tests.
More tests on the QNAP aren't necessary. We already know it's slow.
> Have a good 2014!
You too.
--
Stan
* Re: Is partition alignment needed for RAID partitions ?
From: Phillip Susi @ 2014-01-02 19:49 UTC
To: Pieter De Wit, stan, linux-raid
On 12/30/2013 8:05 PM, Pieter De Wit wrote:
> 1) pvmove isn't as fast as I think - it might be due to some
> checksums or some other process (I know it creates a small mirror
> of a PE and then breaks it - rinse repeat for all PEs)
It doesn't actually work this way. The man page makes it sound like
it does ( though not for every individual PE -- it uses the nebulous
term "checkpoint" ), but it turns out that in practice, there's no
such thing and it just makes one big mirror to sync as many contiguous
PEs as it can. This can be rather annoying when you decide to reboot
85% of the way through a 1 TB pvmove and it starts over at the beginning.