public inbox for linux-xfs@vger.kernel.org
 help / color / mirror / Atom feed
* Re: RAID5 created by 8 disks works with xfs
       [not found]                       ` <CACwgYDOtCoVF-p+KKqPYxHhA4vWF78Ueecx9hcVWLoyxFWzV9Q@mail.gmail.com>
@ 2012-04-05 21:01                         ` Stan Hoeppner
  2012-04-06  0:25                           ` daobang wang
  0 siblings, 1 reply; 10+ messages in thread
From: Stan Hoeppner @ 2012-04-05 21:01 UTC (permalink / raw)
  To: daobang wang
  Cc: Marcus Sorensen, linux-raid, Mathias Burén,
	xfs@oss.sgi.com

On 4/5/2012 1:48 AM, daobang wang wrote:
> Hi Stan,
> 
>      I reproduced the input/output error issue. For the detailed
> operations and logs, please see the attachments. Is there any way to
> fix this? Thanks!
> 
> Best Regards,
> Daobang Wang.


These filesystem issues have nothing to do with linux-raid.  I'm
copying the XFS mailing list which is where this discussion should be
taking place from this point forward.  Please reply-to-all, and paste
the output you previously attached, but inline this time.

Also, since the XFS folks are unfamiliar with what you're doing up to
this point, please provide a basic description of your hardware/storage
setup, kernel version, mdraid configuration, xfs_info output as well as
your fstab XFS mount options, and a description of your workload.

My best guess at this point as to the du and ls errors is that your
application is not behaving properly, or you're still running with XFS
barriers disabled, which, I say _loudly_ for the 2nd time, you should
NOT do in the absence of BBWC, which you stated you do not have.

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: RAID5 created by 8 disks works with xfs
  2012-04-05 21:01                         ` RAID5 created by 8 disks works with xfs Stan Hoeppner
@ 2012-04-06  0:25                           ` daobang wang
  2012-04-06  2:33                             ` daobang wang
  0 siblings, 1 reply; 10+ messages in thread
From: daobang wang @ 2012-04-06  0:25 UTC (permalink / raw)
  To: stan; +Cc: Marcus Sorensen, linux-raid, Mathias Burén, xfs@oss.sgi.com

Hi All,

    I have found the solution: I updated xfsprogs from 2.10.1 to
3.1.5 and was able to repair the filesystem. Thanks.
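For context, a typical xfs_repair sequence looks roughly like the sketch below. The device path is a hypothetical example (not from this thread), and the commands are echoed for review rather than executed, since repair must run against an unmounted filesystem:

```shell
DEV=/dev/vg00/lv0000                 # hypothetical device path

# Dry run first: report problems without modifying anything.
dry_run="xfs_repair -n $DEV"
# Then the actual repair, only after the filesystem is unmounted.
repair="xfs_repair $DEV"

echo "$dry_run"
echo "$repair"
```

Newer xfsprogs releases fix many repair bugs, which is consistent with the observation above that upgrading from 2.10.1 to 3.1.5 made the repair succeed.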

Best Wishes,
Daobang Wang.

On 4/6/12, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> On 4/5/2012 1:48 AM, daobang wang wrote:
>> Hi Stan,
>>
>>      I reproduced the input/output error issue. For the detailed
>> operations and logs, please see the attachments. Is there any way to
>> fix this? Thanks!
>>
>> Best Regards,
>> Daobang Wang.
>
>
> These filesystem issues have nothing to do with linux-raid.  I'm
> copying the XFS mailing list which is where this discussion should be
> taking place from this point forward.  Please reply-to-all, and paste
> the output you previously attached, but inline this time.
>
> Also, since the XFS folks are unfamiliar with what you're doing up to
> this point, please provide a basic description of your hardware/storage
> setup, kernel version, mdraid configuration, xfs_info output as well as
> your fstab XFS mount options, and a description of your workload.
>
> My best guess at this point as to the du and ls errors is that your
> application is not behaving properly, or you're still running with XFS
> barriers disabled, which, I say _loudly_ for the 2nd time, you should
> NOT do in the absence of BBWC, which you stated you do not have.
>
> --
> Stan
>


* Re: RAID5 created by 8 disks works with xfs
  2012-04-06  0:25                           ` daobang wang
@ 2012-04-06  2:33                             ` daobang wang
  2012-04-06  6:00                               ` Jack Wang
  0 siblings, 1 reply; 10+ messages in thread
From: daobang wang @ 2012-04-06  2:33 UTC (permalink / raw)
  To: stan; +Cc: Marcus Sorensen, linux-raid, Mathias Burén, xfs@oss.sgi.com

There is another issue: I updated xfsprogs from 2.10.1 to 3.1.5, and
found I could not make an XFS filesystem when the logical volume size
is larger than 8TB. The command mkfs.xfs -f -i size=512
/dev/vg+vg00+20120406101850/lv+nxx+lv0000 seems to hang; it
did not return for a long time. Is there any parameter I should
adjust?

Thank you very much

Best Regards,
Daobang Wang.

On 4/6/12, daobang wang <wangdb1981@gmail.com> wrote:
> Hi All,
>
>     I have found the solution: I updated xfsprogs from 2.10.1 to
> 3.1.5 and was able to repair the filesystem. Thanks.
>
> Best Wishes,
> Daobang Wang.
>
> On 4/6/12, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>> On 4/5/2012 1:48 AM, daobang wang wrote:
>>> Hi Stan,
>>>
>>>      I reproduced the input/output error issue. For the detailed
>>> operations and logs, please see the attachments. Is there any way to
>>> fix this? Thanks!
>>>
>>> Best Regards,
>>> Daobang Wang.
>>
>>
>> These filesystem issues have nothing to do with linux-raid.  I'm
>> copying the XFS mailing list which is where this discussion should be
>> taking place from this point forward.  Please reply-to-all, and paste
>> the output you previously attached, but inline this time.
>>
>> Also, since the XFS folks are unfamiliar with what you're doing up to
>> this point, please provide a basic description of your hardware/storage
>> setup, kernel version, mdraid configuration, xfs_info output as well as
>> your fstab XFS mount options, and a description of your workload.
>>
>> My best guess at this point as to the du and ls errors is that your
>> application is not behaving properly, or you're still running with XFS
>> barriers disabled, which, I say _loudly_ for the 2nd time, you should
>> NOT do in the absence of BBWC, which you stated you do not have.
>>
>> --
>> Stan
>>
>


* Re: RAID5 created by 8 disks works with xfs
  2012-04-06  2:33                             ` daobang wang
@ 2012-04-06  6:00                               ` Jack Wang
  2012-04-06  6:45                                 ` daobang wang
  0 siblings, 1 reply; 10+ messages in thread
From: Jack Wang @ 2012-04-06  6:00 UTC (permalink / raw)
  To: daobang wang
  Cc: Marcus Sorensen, linux-raid, Mathias Burén, stan,
	xfs@oss.sgi.com

Does using a bigger inode size or inode64 help?

So your environment is NVR software running on Linux, with 100+ D1
streams writing directly to a filesystem on top of 16 SATA disks (with
RAID5 and a volume group)?
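Jack's two suggestions can be sketched as follows. The device path and mount point are hypothetical examples; the commands are only echoed so they can be reviewed before being run against a real logical volume:

```shell
DEV=/dev/vg00/lv0000                 # hypothetical device path

# A larger inode size must be chosen at mkfs time:
mkfs_cmd="mkfs.xfs -f -i size=1024 $DEV"

# inode64 is a mount option: it lets XFS allocate inodes anywhere on
# the volume instead of keeping them in the low blocks.
mount_cmd="mount -o inode64 $DEV /data"

echo "$mkfs_cmd"
echo "$mount_cmd"
```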

2012/4/6 daobang wang <wangdb1981@gmail.com>:
> There is another issue: I updated xfsprogs from 2.10.1 to 3.1.5, and
> found I could not make an XFS filesystem when the logical volume size
> is larger than 8TB. The command mkfs.xfs -f -i size=512
> /dev/vg+vg00+20120406101850/lv+nxx+lv0000 seems to hang; it
> did not return for a long time. Is there any parameter I should
> adjust?
>
> Thank you very much
>
> Best Regards,
> Daobang Wang.
>
> On 4/6/12, daobang wang <wangdb1981@gmail.com> wrote:
>> Hi All,
>>
>>     I have found the solution: I updated xfsprogs from 2.10.1 to
>> 3.1.5 and was able to repair the filesystem. Thanks.
>>
>> Best Wishes,
>> Daobang Wang.
>>
>> On 4/6/12, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>>> On 4/5/2012 1:48 AM, daobang wang wrote:
>>>> Hi Stan,
>>>>
>>>>      I reproduced the input/output error issue. For the detailed
>>>> operations and logs, please see the attachments. Is there any way to
>>>> fix this? Thanks!
>>>>
>>>> Best Regards,
>>>> Daobang Wang.
>>>
>>>
>>> These filesystem issues have nothing to do with linux-raid.  I'm
>>> copying the XFS mailing list which is where this discussion should be
>>> taking place from this point forward.  Please reply-to-all, and paste
>>> the output you previously attached, but inline this time.
>>>
>>> Also, since the XFS folks are unfamiliar with what you're doing up to
>>> this point, please provide a basic description of your hardware/storage
>>> setup, kernel version, mdraid configuration, xfs_info output as well as
>>> your fstab XFS mount options, and a description of your workload.
>>>
>>> My best guess at this point as to the du and ls errors is that your
>>> application is not behaving properly, or you're still running with XFS
>>> barriers disabled, which, I say _loudly_ for the 2nd time, you should
>>> NOT do in the absence of BBWC, which you stated you do not have.
>>>
>>> --
>>> Stan
>>>
>>


* Re: RAID5 created by 8 disks works with xfs
  2012-04-06  6:00                               ` Jack Wang
@ 2012-04-06  6:45                                 ` daobang wang
  2012-04-06  6:49                                   ` daobang wang
  0 siblings, 1 reply; 10+ messages in thread
From: daobang wang @ 2012-04-06  6:45 UTC (permalink / raw)
  To: Jack Wang
  Cc: Marcus Sorensen, linux-raid, Mathias Burén, stan,
	xfs@oss.sgi.com

Hi Jacky,

     Yes, the environment is as you describe; I will try
your suggestion. Thanks a lot.

On 4/6/12, Jack Wang <jack.wang.usish@gmail.com> wrote:
> Does using a bigger inode size or inode64 help?
>
> So your environment is NVR software running on Linux, with 100+ D1
> streams writing directly to a filesystem on top of 16 SATA disks (with
> RAID5 and a volume group)?
>
> 2012/4/6 daobang wang <wangdb1981@gmail.com>:
>> There is another issue: I updated xfsprogs from 2.10.1 to 3.1.5, and
>> found I could not make an XFS filesystem when the logical volume size
>> is larger than 8TB. The command mkfs.xfs -f -i size=512
>> /dev/vg+vg00+20120406101850/lv+nxx+lv0000 seems to hang; it
>> did not return for a long time. Is there any parameter I should
>> adjust?
>>
>> Thank you very much
>>
>> Best Regards,
>> Daobang Wang.
>>
>> On 4/6/12, daobang wang <wangdb1981@gmail.com> wrote:
>>> Hi All,
>>>
>>>     I have found the solution: I updated xfsprogs from 2.10.1 to
>>> 3.1.5 and was able to repair the filesystem. Thanks.
>>>
>>> Best Wishes,
>>> Daobang Wang.
>>>
>>> On 4/6/12, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>>>> On 4/5/2012 1:48 AM, daobang wang wrote:
>>>>> Hi Stan,
>>>>>
>>>>>      I reproduced the input/output error issue. For the detailed
>>>>> operations and logs, please see the attachments. Is there any way to
>>>>> fix this? Thanks!
>>>>>
>>>>> Best Regards,
>>>>> Daobang Wang.
>>>>
>>>>
>>>> These filesystem issues have nothing to do with linux-raid.  I'm
>>>> copying the XFS mailing list which is where this discussion should be
>>>> taking place from this point forward.  Please reply-to-all, and paste
>>>> the output you previously attached, but inline this time.
>>>>
>>>> Also, since the XFS folks are unfamiliar with what you're doing up to
>>>> this point, please provide a basic description of your hardware/storage
>>>> setup, kernel version, mdraid configuration, xfs_info output as well as
>>>> your fstab XFS mount options, and a description of your workload.
>>>>
>>>> My best guess at this point as to the du and ls errors is that your
>>>> application is not behaving properly, or you're still running with XFS
>>>> barriers disabled, which, I say _loudly_ for the 2nd time, you should
>>>> NOT do in the absence of BBWC, which you stated you do not have.
>>>>
>>>> --
>>>> Stan
>>>>
>>>
>


* Re: RAID5 created by 8 disks works with xfs
  2012-04-06  6:45                                 ` daobang wang
@ 2012-04-06  6:49                                   ` daobang wang
  2012-04-06  8:18                                     ` Stan Hoeppner
  0 siblings, 1 reply; 10+ messages in thread
From: daobang wang @ 2012-04-06  6:49 UTC (permalink / raw)
  To: Jack Wang
  Cc: Marcus Sorensen, linux-raid, Mathias Burén, stan,
	xfs@oss.sgi.com

I did more tests and found that mkfs.xfs did not actually hang.

I tried to make XFS on a 7TB logical volume; with both 2.10.1 and
3.1.5, mkfs.xfs -f <dev> completed in less than one minute.

But when I tried to make XFS on an 8TB logical volume, 2.10.1 finished
in less than 1 minute, while 3.1.5 needed 5 minutes.
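One reason creation time grows with volume size is that mkfs.xfs lays out a header for every allocation group (AG), and an AG is capped at 1 TiB, so larger volumes mean more AGs to initialize. A rough arithmetic sketch (illustrative numbers, not measurements from this thread):

```shell
vol_gib=8192                 # an 8 TiB logical volume, in GiB
max_ag_gib=1024              # XFS caps an allocation group at 1 TiB

# Round up: minimum number of AGs mkfs.xfs has to lay out.
min_agcount=$(( (vol_gib + max_ag_gib - 1) / max_ag_gib ))
echo "$min_agcount"          # prints 8
```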

On 4/6/12, daobang wang <wangdb1981@gmail.com> wrote:
> Hi Jacky,
>
>      Yes, the environment is as you describe; I will try
> your suggestion. Thanks a lot.
>
> On 4/6/12, Jack Wang <jack.wang.usish@gmail.com> wrote:
>> Does using a bigger inode size or inode64 help?
>>
>> So your environment is NVR software running on Linux, with 100+ D1
>> streams writing directly to a filesystem on top of 16 SATA disks (with
>> RAID5 and a volume group)?
>>
>> 2012/4/6 daobang wang <wangdb1981@gmail.com>:
>>> There is another issue: I updated xfsprogs from 2.10.1 to 3.1.5, and
>>> found I could not make an XFS filesystem when the logical volume size
>>> is larger than 8TB. The command mkfs.xfs -f -i size=512
>>> /dev/vg+vg00+20120406101850/lv+nxx+lv0000 seems to hang; it
>>> did not return for a long time. Is there any parameter I should
>>> adjust?
>>>
>>> Thank you very much
>>>
>>> Best Regards,
>>> Daobang Wang.
>>>
>>> On 4/6/12, daobang wang <wangdb1981@gmail.com> wrote:
>>>> Hi All,
>>>>
>>>>     I have found the solution: I updated xfsprogs from 2.10.1 to
>>>> 3.1.5 and was able to repair the filesystem. Thanks.
>>>>
>>>> Best Wishes,
>>>> Daobang Wang.
>>>>
>>>> On 4/6/12, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>>>>> On 4/5/2012 1:48 AM, daobang wang wrote:
>>>>>> Hi Stan,
>>>>>>
>>>>>>      I reproduced the input/output error issue. For the detailed
>>>>>> operations and logs, please see the attachments. Is there any way to
>>>>>> fix this? Thanks!
>>>>>>
>>>>>> Best Regards,
>>>>>> Daobang Wang.
>>>>>
>>>>>
>>>>> These filesystem issues have nothing to do with linux-raid.  I'm
>>>>> copying the XFS mailing list which is where this discussion should be
>>>>> taking place from this point forward.  Please reply-to-all, and paste
>>>>> the output you previously attached, but inline this time.
>>>>>
>>>>> Also, since the XFS folks are unfamiliar with what you're doing up to
>>>>> this point, please provide a basic description of your
>>>>> hardware/storage
>>>>> setup, kernel version, mdraid configuration, xfs_info output as well
>>>>> as
>>>>> your fstab XFS mount options, and a description of your workload.
>>>>>
>>>>> My best guess at this point as to the du and ls errors is that your
>>>>> application is not behaving properly, or you're still running with XFS
>>>>> barriers disabled, which, I say _loudly_ for the 2nd time, you should
>>>>> NOT do in the absence of BBWC, which you stated you do not have.
>>>>>
>>>>> --
>>>>> Stan
>>>>>
>>>>
>>
>


* Re: RAID5 created by 8 disks works with xfs
  2012-04-06  6:49                                   ` daobang wang
@ 2012-04-06  8:18                                     ` Stan Hoeppner
  2012-04-06  8:45                                       ` daobang wang
  0 siblings, 1 reply; 10+ messages in thread
From: Stan Hoeppner @ 2012-04-06  8:18 UTC (permalink / raw)
  To: daobang wang
  Cc: Marcus Sorensen, linux-raid, Mathias Burén, Jack Wang,
	xfs@oss.sgi.com

On 4/6/2012 1:49 AM, daobang wang wrote:
> I did more tests and found that mkfs.xfs did not actually hang.
> 
> I tried to make XFS on a 7TB logical volume; with both 2.10.1 and
> 3.1.5, mkfs.xfs -f <dev> completed in less than one minute.
> 
> But when I tried to make XFS on an 8TB logical volume, 2.10.1 finished
> in less than 1 minute, while 3.1.5 needed 5 minutes.

Once you provide the information I requested I'd be glad to help you
further.  Without that information I'm wasting my time.

-- 
Stan


* Re: RAID5 created by 8 disks works with xfs
  2012-04-06  8:18                                     ` Stan Hoeppner
@ 2012-04-06  8:45                                       ` daobang wang
  2012-04-06 11:12                                         ` Stan Hoeppner
  0 siblings, 1 reply; 10+ messages in thread
From: daobang wang @ 2012-04-06  8:45 UTC (permalink / raw)
  To: stan
  Cc: Marcus Sorensen, linux-raid, Mathias Burén, Jack Wang,
	xfs@oss.sgi.com

Hi Stan,

    Thank you for your reply. The environment is the same as I
mentioned before. The user can tolerate losing cached data, so I did
not disable the barriers; they just demand that the filesystem can be
repaired after the system restarts. We have not dealt with the fstab
yet; that is the next step.

    And about the slow mkfs.xfs issue: we found the log size was too
large, and that is what made creating the XFS filesystem slow.
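If the auto-selected log size is indeed the bottleneck (zeroing a large internal log takes time), the log size can be pinned explicitly at mkfs time. A sketch; the device path and the 128 MiB figure are illustrative, and the command is echoed for review rather than executed:

```shell
DEV=/dev/vg00/lv0000                         # hypothetical device path
# Pin the internal log instead of letting mkfs scale it with volume size:
mkfs_cmd="mkfs.xfs -f -i size=512 -l size=128m $DEV"
echo "$mkfs_cmd"
```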

    Thanks again.

Have a good day.
Daobang Wang.

On 4/6/12, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> On 4/6/2012 1:49 AM, daobang wang wrote:
>> I did more tests and found that mkfs.xfs did not actually hang.
>>
>> I tried to make XFS on a 7TB logical volume; with both 2.10.1 and
>> 3.1.5, mkfs.xfs -f <dev> completed in less than one minute.
>>
>> But when I tried to make XFS on an 8TB logical volume, 2.10.1 finished
>> in less than 1 minute, while 3.1.5 needed 5 minutes.
>
> Once you provide the information I requested I'd be glad to help you
> further.  Without that information I'm wasting my time.
>
> --
> Stan
>


* Re: RAID5 created by 8 disks works with xfs
  2012-04-06  8:45                                       ` daobang wang
@ 2012-04-06 11:12                                         ` Stan Hoeppner
  2012-04-18  2:23                                           ` daobang wang
  0 siblings, 1 reply; 10+ messages in thread
From: Stan Hoeppner @ 2012-04-06 11:12 UTC (permalink / raw)
  To: daobang wang
  Cc: Marcus Sorensen, linux-raid, Mathias Burén, Jack Wang,
	xfs@oss.sgi.com

On 4/6/2012 3:45 AM, daobang wang wrote:
> Hi Stan,
> 
>     Thank you for your reply. The environment is the same as I
> mentioned before.  [...]

There seems to be some kind of communication failure here.  I'll make
this really simple.  Please fill in the blanks.

$ uname -a

$ xfs_info [device]

$ cat /etc/fstab|grep xfs

$ mdadm --detail /dev/mdX


Thanks.

-- 
Stan


* Re: RAID5 created by 8 disks works with xfs
  2012-04-06 11:12                                         ` Stan Hoeppner
@ 2012-04-18  2:23                                           ` daobang wang
  0 siblings, 0 replies; 10+ messages in thread
From: daobang wang @ 2012-04-18  2:23 UTC (permalink / raw)
  To: stan
  Cc: Marcus Sorensen, linux-raid, Mathias Burén, Jack Wang,
	xfs@oss.sgi.com

Hi Stan,

    So sorry to reply this late; the original environment no longer
exists, so I built an identical one. Please see the detailed output.

1. uname -a
Linux nsspioneer 2.6.36.4-v64 #1 SMP Fri Apr 13 10:16:12 CST 2012
x86_64 x86_64 x86_64 GNU/Linux

2.xfs_info /dev/vg+vg00+20120418101725/lv+nxx+lv0000
meta-data=/dev/vg+vg00+20120418101725/lv+nxx+lv0000 isize=256
agcount=13, agsize=268435440 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=3418374144, imaxpct=5
         =                       sunit=16     swidth=128 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

3. the command "cat /etc/fstab | grep xfs" has no output, we did not
save it in fstab

4. mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Wed Apr 18 10:02:26 2012
     Raid Level : raid5
     Array Size : 13673525376 (13040.09 GiB 14001.69 GB)
  Used Dev Size : 1953360768 (1862.87 GiB 2000.24 GB)
   Raid Devices : 8
  Total Devices : 8
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Apr 18 10:12:22 2012
          State : online
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : md+LD00+120418100226
           UUID : bbddf00e:779ca156:bb4f7d10:285e4fcf
         Events : 195

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
       4       8       80        4      active sync   /dev/sdf
       5       8       96        5      active sync   /dev/sdg
       6       8      112        6      active sync   /dev/sdh
       8       8      128        7      active sync   /dev/sdi
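As a quick cross-check of the report above, the stripe geometry from the xfs_info output can be compared with the md chunk size; sunit and swidth are reported in filesystem blocks of bsize bytes each. The values below are copied from the output above:

```shell
bsize=4096      # XFS block size, bytes (from xfs_info)
sunit=16        # stripe unit, in filesystem blocks (from xfs_info)
swidth=128      # stripe width, in filesystem blocks (from xfs_info)

echo "$(( sunit * bsize / 1024 )) KiB"   # 64 KiB, matching the 64K md chunk
echo "$(( swidth / sunit )) stripe units per full stripe"
```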

On 4/6/12, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> On 4/6/2012 3:45 AM, daobang wang wrote:
>> Hi Stan,
>>
>>     Thank you for your reply. The environment is the same as I
>> mentioned before.  [...]
>
> There seems to be some kind of communication failure here.  I'll make
> this really simple.  Please fill in the blanks.
>
> $ uname -a
>
> $ xfs_info [device]
>
> $ cat /etc/fstab|grep xfs
>
> $ mdadm --detail /dev/mdX
>
>
> Thanks.
>
> --
> Stan
>
>


end of thread, other threads:[~2012-04-18  2:23 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CACwgYDPQBNa7z7WcyUjV2Rx8Qf47r7eaRCowNKxjLBz+fYcfvQ@mail.gmail.com>
     [not found] ` <CADNH=7H_rgiY4fkB0SHo0yPhSBib7Gq-E_fu18yuJx=enn-eGg@mail.gmail.com>
     [not found]   ` <4F776492.4070600@hardwarefreak.com>
     [not found]     ` <CACwgYDOSrEYUps2VLpFSZ-irH7Mn_-BWrOYYDgXS=ULrmVuEPw@mail.gmail.com>
     [not found]       ` <4F77D0B2.8000809@hardwarefreak.com>
     [not found]         ` <CACwgYDMx6nQF-OwYj-BA+sZivUK=kmv2tPukgf5JGwA1vMTGrA@mail.gmail.com>
     [not found]           ` <4F77EA55.6090004@hardwarefreak.com>
     [not found]             ` <CACwgYDMVHT5DFjJztX9JvsVQJ+uOjPrfcs4+0aGXotDvf6tymQ@mail.gmail.com>
     [not found]               ` <CACwgYDMXF9WEgzW9vL_0=GdRc_t+67WAkRTKNoKOMEMCvjujVw@mail.gmail.com>
     [not found]                 ` <CALFpzo5aKM2y_stDR14PNYMHEcq5CEptuFiXrvcCpXTzSQmAxw@mail.gmail.com>
     [not found]                   ` <4F792154.5020006@hardwarefreak.com>
     [not found]                     ` <CACwgYDPsZSEhuqNP8YgAksEq9BB6YOS1Q8jGx2J7DCrrOh_JQw@mail.gmail.com>
     [not found]                       ` <CACwgYDOtCoVF-p+KKqPYxHhA4vWF78Ueecx9hcVWLoyxFWzV9Q@mail.gmail.com>
2012-04-05 21:01                         ` RAID5 created by 8 disks works with xfs Stan Hoeppner
2012-04-06  0:25                           ` daobang wang
2012-04-06  2:33                             ` daobang wang
2012-04-06  6:00                               ` Jack Wang
2012-04-06  6:45                                 ` daobang wang
2012-04-06  6:49                                   ` daobang wang
2012-04-06  8:18                                     ` Stan Hoeppner
2012-04-06  8:45                                       ` daobang wang
2012-04-06 11:12                                         ` Stan Hoeppner
2012-04-18  2:23                                           ` daobang wang

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox