linux-raid.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* (unknown), 
@ 2010-03-08  1:37 Leslie Rhorer
  2010-03-08  1:53 ` Neil Brown
  0 siblings, 1 reply; 63+ messages in thread
From: Leslie Rhorer @ 2010-03-08  1:37 UTC (permalink / raw)
  To: linux-raid

I am running mdadm 2.6.7.2-1, and 2.6.7.2-3 is available under my distro.
Do either of these versions support reshaping an array from RAID5 to RAID6?
Does any later version?


^ permalink raw reply	[flat|nested] 63+ messages in thread
* Re:
@ 2020-08-12 10:54 Alex Anadi
  0 siblings, 0 replies; 63+ messages in thread
From: Alex Anadi @ 2020-08-12 10:54 UTC (permalink / raw)


Attention: Sir/Madam,

Compliments of the season.

I am Mr Alex Anadi a senior staff of Computer Telex Dept of central
bank of Nigeria.

I decided to contact you because of the prevailing security report
reaching my office and the intense nature of polity in Nigeria.

This is to inform you about the recent plan of federal government of
Nigeria to send your fund to you via diplomatic immunity CASH DELIVERY
SYSTEM valued at $10.6 Million United states dollars only, contact me
for further details.

Regards,
Mr Alex Anadi.

^ permalink raw reply	[flat|nested] 63+ messages in thread
* Re;
@ 2020-06-24 13:54 test02
  0 siblings, 0 replies; 63+ messages in thread
From: test02 @ 2020-06-24 13:54 UTC (permalink / raw)
  To: Recipients

Congratulations!!!


As part of my humanitarian individual support during this hard times of fighting the Corona Virus (Convid-19); your email account was selected for a Donation of $1,000,000.00 USD for charity and community medical support in your area. 
Please contact us for more information on charles_jackson001@yahoo.com.com


Send Your Response To: charles_jackson001@yahoo.com


Best Regards,

Charles .W. Jackson Jr

-- 
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus

^ permalink raw reply	[flat|nested] 63+ messages in thread
* Re:
@ 2017-11-13 14:55 Amos Kalonzo
  0 siblings, 0 replies; 63+ messages in thread
From: Amos Kalonzo @ 2017-11-13 14:55 UTC (permalink / raw)


Attn:

I am wondering why You haven't respond to my email for some days now.
reference to my client's contract balance payment of (11.7M,USD)
Kindly get back to me for more details.

Best Regards

Amos Kalonzo

^ permalink raw reply	[flat|nested] 63+ messages in thread
* Re:
@ 2017-05-03  6:23 H.A
  0 siblings, 0 replies; 63+ messages in thread
From: H.A @ 2017-05-03  6:23 UTC (permalink / raw)
  To: Recipients

With profound love in my heart, I Kindly Oblige your interest to very important proposal.. It is Truly Divine and require your utmost attention..........

S hlubokou láskou v mém srdci, Laskave jsem prinutit svuj zájem k návrhu .. Je velmi duležité, skutecne Divine a vyžadují vaši nejvyšší pozornost.

  Kontaktujte me prímo pres: helenaroberts99@gmail.com pro úplné podrobnosti.complete.


HELINA .A ROBERTS

---
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus


^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown), 
@ 2017-04-13 15:58 Scott Ellentuch
       [not found] ` <CAK2H+efb3iKA5P3yd7uRqJomci6ENvrB1JRBBmtQEpEvyPMe7w@mail.gmail.com>
  0 siblings, 1 reply; 63+ messages in thread
From: Scott Ellentuch @ 2017-04-13 15:58 UTC (permalink / raw)
  To: linux-raid

for disk in a b c d g h i j k l m n
do

  disklist="${disklist} /dev/sd${disk}1"

done

mdadm --create --verbose /dev/md2 --level=5 --raid=devices=12  ${disklist}

But its telling me :

mdadm: invalid number of raid devices: devices=12


I can't find any definition of a limit anywhere.

Thank you, Tuc

^ permalink raw reply	[flat|nested] 63+ messages in thread
* RE:
@ 2017-02-23 15:09 Qin's Yanjun
  0 siblings, 0 replies; 63+ messages in thread
From: Qin's Yanjun @ 2017-02-23 15:09 UTC (permalink / raw)



How are you today and your family? I require your attention and honest
co-operation about some issues which i will really want to discuss with you
which.  Looking forward to read from you soon.  

Qin's


______________________________

Sky Silk, http://aknet.kz


^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown), 
@ 2016-11-06 21:00 Dennis Dataopslag
  2016-11-07 16:50 ` Wols Lists
  2016-11-17 20:33 ` Re: Dennis Dataopslag
  0 siblings, 2 replies; 63+ messages in thread
From: Dennis Dataopslag @ 2016-11-06 21:00 UTC (permalink / raw)
  To: linux-raid

Help wanted very much!

My setup:
Thecus N5550 NAS with 5 1TB drives installed.

MD0: RAID 5 config of 4 drives (SD[ABCD]2)
MD10: RAID 1 config of all 5 drives (SD..1), system generated array
MD50: RAID 1 config of 4 drives (SD[ABCD]3), system generated array

1 drive (SDE) set as global hot spare.


What happened:
This weekend I thought it might be a good idea to do a SMART test for
the drives in my NAS.
I started the test on 1 drive and after it ran for a while I started
the other ones.
While the test was running drive 3 failed. I got a message the RAID
was degraded and started rebuilding. (My assumption is that at this
moment the global hot spare will automatically be added to the array)

I stopped the SMART tests of all drives at this moment since it seemed
logical to me the SMART test (or the outcomes) made the drive fail.
In stopping the tests, drive 1 also failed!!
I let it for a little but the admin interface kept telling me it was
degraded, did not seem to take any actions to start rebuilding.
At this point I started googling and found I should remove and reseat
the drives. This is also what I did but nothing seemd to happen.
The turned up as new drives in the admin interface and I re-added them
to the array, they were added as spares.
Even after adding them the array didn't start rebuilding.
I checked stat in mdadm and it told me clean FAILED opposed to the
degraded in the admin interface.

I rebooted the NAS since it didn't seem to be doing anything I might interrupt.
after rebooting it seemed as if the entire array had disappeared!!
I started looking for options in MDADM and tried every "normal"option
to rebuild the array (--assemble --scan for example)
Unfortunately I cannot produce a complete list since I cannot find how
to get it from the logging.

Finally I mdadm --create a new array with the original 4 drives with
all the right settings. (Got them from 1 of the original volumes)
The creation worked but after creation it doesn't seem to have a valid
partition table. This is the point where I realized I probably fucked
it up big-time and should call in the help squad!!!
What I think went wrong is that I re-created an array with the
original 4 drives from before the first failure but the hot-spare was
already added?

The most important data from the array is saved in an offline backup
luckily but I would very much like it if there is any way I could
restore the data from the array.

Is there any way I could get it back online?

^ permalink raw reply	[flat|nested] 63+ messages in thread
* RE:
@ 2015-09-30 12:06 Apple-Free-Lotto
  0 siblings, 0 replies; 63+ messages in thread
From: Apple-Free-Lotto @ 2015-09-30 12:06 UTC (permalink / raw)
  To: Recipients

You have won 760,889:00 GBP in Apple Free Lotto, without the sale of any tickets! Send. Full Name:. Mobile Number and Alternative Email Address. for details and instructions please contact Mr. Gilly Mann: Email: app.freeloto@foxmail.com

^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown), 
@ 2014-11-26 18:38 Travis Williams
  2014-11-26 20:49 ` NeilBrown
  0 siblings, 1 reply; 63+ messages in thread
From: Travis Williams @ 2014-11-26 18:38 UTC (permalink / raw)
  To: linux-raid

Hello all,

I feel as though I must be missing something that I have had no luck
finding all morning.

When setting up arrays with spares in a spare-group, I'm having no
luck finding a way to get that information from mdadm or mdstat. This
becomes an issue when trying to write out configs and the like, or
simply trying to get a feel for how arrays are setup on a system.

Many tutorials/documentation/etc etc list using `mdadm --scan --detail
>> /etc/mdadm/mdadm.conf` as a way to write out the running config for
initialization at reboot.  There is never any of the spare-group
information listed in that output. Is there another way to see what
spare-group is included in a currently running array?

It also isn't listed in `mdadm --scan`, or by `cat /proc/mdstat`

I've primarily noticed this with Debian 7, with mdadm v3.2.5 - 18th
May 2012. kernel 3.2.0-4.

When I modify the mdadm.conf myself and add the 'spare-group' setting
myself, the arrays work as expected, but I haven't been able to find a
way to KNOW that they are currently running that way without failing
drives out to see. This nearly burned me after a restart in one
instance that I caught out of dumb luck before anything of value was
lost.

Thanks,

-Travis

^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown), 
@ 2012-12-25  0:12 bobzer
  2012-12-25  5:38 ` Phil Turmel
  0 siblings, 1 reply; 63+ messages in thread
From: bobzer @ 2012-12-25  0:12 UTC (permalink / raw)
  To: linux-raid

Hi everyone,

i don't understand what happend (like i did nothing)
the file look like there are here, i can browse, but can't read or copy

i'm sure the problem is obvious :

mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Mar  4 22:49:14 2012
     Raid Level : raid5
     Array Size : 3907021568 (3726.03 GiB 4000.79 GB)
  Used Dev Size : 1953510784 (1863.01 GiB 2000.40 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Dec 24 18:51:53 2012
          State : clean, FAILED
 Active Devices : 1
Working Devices : 1
 Failed Devices : 2
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 128K

           Name : debian:0  (local to host debian)
           UUID : bf3c605b:9699aa55:d45119a2:7ba58d56
         Events : 409

    Number   Major   Minor   RaidDevice State
       3       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed
       2       0        0        2      removed

       1       8       33        -      faulty spare   /dev/sdc1
       2       8       49        -      faulty spare   /dev/sdd1

ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda5  /dev/sda6  /dev/sda7
/dev/sdb  /dev/sdb1  /dev/sdc  /dev/sdc1  /dev/sdd  /dev/sdd1

i thought about :
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[bcd]1

but i don't know what i should do :-(
thank you for your help

merry christmas

mathieu

^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown), 
@ 2012-12-17  0:59 Maik Purwin
  2012-12-17  3:55 ` Phil Turmel
  0 siblings, 1 reply; 63+ messages in thread
From: Maik Purwin @ 2012-12-17  0:59 UTC (permalink / raw)
  To: linux-raid

Hello,
i make a misstake and disconnected 2 of my 6 disk in a software raid 5 on
debian squeeze. After that the two disks reported as missing and spare so
i have 4 on 4 in raid5.

after that i tried to add and re-add but without no efforts. Then i do this:

mdadm --assemble /dev/md2 --scan --force
mdadm: failed to add /dev/sdd4 to /dev/md2: Device or resource busy
mdadm: /dev/md2 assembled from 4 drives and 1 spare - not enough to start
the array.

and now i didnt know to go on. i have fear to setup the raid new. I hope
you can help.

Many thx.


^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown), 
@ 2011-09-26  4:23 Kenn
  2011-09-26  4:52 ` NeilBrown
  0 siblings, 1 reply; 63+ messages in thread
From: Kenn @ 2011-09-26  4:23 UTC (permalink / raw)
  To: linux-raid; +Cc: neilb

I have a raid5 array that had a drive drop out, and resilvered the wrong
drive when I put it back in, corrupting and destroying the raid.  I
stopped the array at less than 1% resilvering and I'm in the process of
making a dd-copy of the drive to recover the files.

(1) Is there anything diagnostic I can contribute to add more
wrong-drive-resilvering protection to mdadm?  I have the command history
showing everything I did, I have the five drives available for reading
sectors, I haven't touched anything yet.

(2) Can I suggest improvements into resilvering?  Can I contribute code to
implement them?  Such as resilver from the end of the drive back to the
front, so if you notice the wrong drive resilvering, you can stop and not
lose the MBR and the directory format structure that's stored in the first
few sectors?  I'd also like to take a look at adding a raid mode where
there's checksum in every stripe block so the system can detect corrupted
disks and not resilver.  I'd also like to add a raid option where a
resilvering need will be reported by email and needs to be started
manually.  All to prevent what happened to me from happening again.

Thanks for your time.

Kenn Frank

P.S.  Setup:

# uname -a
Linux teresa 2.6.26-2-686 #1 SMP Sat Jun 11 14:54:10 UTC 2011 i686 GNU/Linux

# mdadm --version
mdadm - v2.6.7.2 - 14th November 2008

# mdadm --detail /dev/md3
/dev/md3:
        Version : 00.90
  Creation Time : Thu Sep 22 16:23:50 2011
     Raid Level : raid5
     Array Size : 2930287616 (2794.54 GiB 3000.61 GB)
  Used Dev Size : 732571904 (698.64 GiB 750.15 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Thu Sep 22 20:19:09 2011
          State : clean, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : ed1e6357:74e32684:47f7b12e:9c2b2218 (local to host teresa)
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0      33        1        0      active sync   /dev/hde1
       1      56        1        1      active sync   /dev/hdi1
       2       0        0        2      removed
       3      57        1        3      active sync   /dev/hdk1
       4      34        1        4      active sync   /dev/hdg1




^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown)
@ 2011-06-18 20:39 Dragon
  2011-06-19 18:40 ` Phil Turmel
  0 siblings, 1 reply; 63+ messages in thread
From: Dragon @ 2011-06-18 20:39 UTC (permalink / raw)
  To: philip; +Cc: linux-raid

Monitor your background reshape with "cat /proc/mdstat".

When the reshape is complete, the extra disk will be marked "spare".

Then you can use "mdadm --remove".
-->after a view days the reshape was done and i take the disk out of the raid -> many thx for that

> at this point i think i take the disk out of the raid, because i need the space of
the disk.

Understood, but you are living on the edge.  You have no backup, and only one drive
of redundancy.  If one of your drives does fail, the odds of losing the whole array
while replacing it is significant.  Your Samsung drives claim a non-recoverable read
error rate of 1 per 1x10^15 bits.  Your eleven data disks contain 1.32x10^14 bits,
all of which must be read during rebuild.  That means a _13%_ chance of total
failure while replacing a failed drive.

I hope your 16T of data is not terribly important to you, or is otherwise replaceable.
--> nice calculation, where do you have the data from?
--> most of it is important, i will look for a better solution

> I need another advise of you. While the computer is actualy build with 13 disk and
i will become more data in the next month and the limit of power supply
connecotors is reached i am looking forward to another solution. one possibility
is to build up a better computer with more sata and sas connectors and add further
raid-controller-cards. an other idea is to build a kind of cluster or dfs with two
and later 3,4... computer. i read something about gluster.org. do you have a tip
for me or experience in this?

Unfortunately, no.  Although I skirt the edges in my engineering work, I'm primarily
an end-user.  Both personal and work projects have relatively modest needs.  From
the engineering side, I do recommend you spend extra on power supplies & UPS.

Phil
--> and than, ext4 max size is actually 16TB, what should i do?
--> for an end-user you have many knowledge about swraid ;)
sunny
-- 
NEU: FreePhone - kostenlos mobil telefonieren!			
Jetzt informieren: http://www.gmx.net/de/go/freephone

^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown)
@ 2011-06-10 20:26 Dragon
  2011-06-11  2:06 ` Phil Turmel
  0 siblings, 1 reply; 63+ messages in thread
From: Dragon @ 2011-06-10 20:26 UTC (permalink / raw)
  To: philip; +Cc: linux-raid

"No, it must be "Used Device Size" * 11 = 16116523456.  Try it without the 'k'."
-> was better:
mdadm /dev/md0 --grow --array-size=16116523456
mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Fri Jun 10 14:19:24 2011
     Raid Level : raid5
     Array Size : 16116523456 (15369.91 GiB 16503.32 GB)
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
   Raid Devices : 13
  Total Devices : 13
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Jun 10 16:49:37 2011
          State : clean
 Active Devices : 13
Working Devices : 13
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 8c4d8438:42aa49f9:a6d866f6:b6ea6b93 (local to host nassrv01)
         Events : 0.2

    Number   Major   Minor   RaidDevice State
       0       8      160        0      active sync   /dev/sdk
       1       8      208        1      active sync   /dev/sdn
       2       8      176        2      active sync   /dev/sdl
       3       8      192        3      active sync   /dev/sdm
       4       8        0        4      active sync   /dev/sda
       5       8       16        5      active sync   /dev/sdb
       6       8       64        6      active sync   /dev/sde
       7       8       48        7      active sync   /dev/sdd
       8       8       80        8      active sync   /dev/sdf
       9       8       96        9      active sync   /dev/sdg
      10       8      112       10      active sync   /dev/sdh
      11       8      128       11      active sync   /dev/sdi
      12       8      144       12      active sync   /dev/sdj

->fsck -n /dev/md0, was ok
->now:mdadm /dev/md0 --grow -n 12 --backup-file=/reshape.bak
->and after that, how become the disk out of the raid?
--

at this point i think i take the disk out of the raid, because i need the space of the disk.

I need another advise of you. While the computer is actualy build with 13 disk and i will become more data in the next month and the limit of power supply connecotors is reached i am looking forward to another solution. one possibility is to build up a better computer with more sata and sas connectors and add further raid-controller-cards. an other idea is to build a kind of cluster or dfs with two and later 3,4... computer. i read something about gluster.org. do you have a tip for me or experience in this?
-- 
NEU: FreePhone - kostenlos mobil telefonieren!			
Jetzt informieren: http://www.gmx.net/de/go/freephone


-- 
NEU: FreePhone - kostenlos mobil telefonieren!			
Jetzt informieren: http://www.gmx.net/de/go/freephone

^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown)
@ 2011-06-09 12:16 Dragon
  2011-06-09 13:39 ` Phil Turmel
  0 siblings, 1 reply; 63+ messages in thread
From: Dragon @ 2011-06-09 12:16 UTC (permalink / raw)
  To: philip; +Cc: linux-raid

Yes if all things get back to normal i will change to raid6. that was my idea for the future too.
here the result of the script:

./lsdrv
**Warning** The following utility(ies) failed to execute:
  pvs
  lvs
Some information may be missing.

PCI [pata_atiixp] 00:14.1 IDE interface: ATI Technologies Inc SB700/SB800 IDE Controller
 ââscsi 0:0:0:0 ATA SAMSUNG HD154UI {S1XWJ1WZ401747}
 â  ââsda: [8:0] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 â     ââmd0: [9:0] Empty/Unknown 0.00k
 ââscsi 0:0:1:0 ATA SAMSUNG HD154UI {S1XWJ1WZ405098}
 â  ââsdb: [8:16] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 ââscsi 1:0:0:0 ATA SAMSUNG SV2044D {0244J1BN626842}
    ââsdc: [8:32] Partitioned (dos) 19.01g
       ââsdc1: [8:33] (ext3) 18.17g {6858fc38-9fee-4ab5-8135-029f305b9198}
       â  ââMounted as /dev/disk/by-uuid/6858fc38-9fee-4ab5-8135-029f305b9198 @ /
       ââsdc2: [8:34] Partitioned (dos) 1.00k
       ââsdc5: [8:37] (swap) 854.99m {f67c7f23-e5ac-4c05-992c-a9a494687026}
PCI [sata_mv] 02:00.0 SCSI storage controller: Marvell Technology Group Ltd. 88SX7042 PCI-e 4-port SATA-II (rev 02)
 ââscsi 2:0:0:0 ATA SAMSUNG HD154UI {S1XWJD2Z907626}
 â  ââsdd: [8:48] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 ââscsi 4:0:0:0 ATA SAMSUNG HD154UI {S1XWJ90ZA03442}
 â  ââsde: [8:64] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 ââscsi 6:0:0:0 ATA SAMSUNG HD154UI {S1XWJ9AB200390}
 â  ââsdf: [8:80] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 ââscsi 8:0:0:0 ATA SAMSUNG HD154UI {61833B761A63RP}
    ââsdg: [8:96] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
PCI [sata_promise] 04:02.0 Mass storage controller: Promise Technology, Inc. PDC40718 (SATA 300 TX4) (rev 02)
 ââscsi 3:0:0:0 ATA SAMSUNG HD154UI {S1XWJD5B201174}
 â  ââsdh: [8:112] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 ââscsi 5:0:0:0 ATA SAMSUNG HD154UI {S1XWJ9CB201815}
 â  ââsdi: [8:128] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 ââscsi 7:x:x:x [Empty]
 ââscsi 9:0:0:0 ATA SAMSUNG HD154UI {A6311B761A3XPB}
    ââsdj: [8:144] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
PCI [ahci] 00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [IDE mode]
 ââscsi 10:0:0:0 ATA SAMSUNG HD154UI {S1XWJ1KS915803}
 â  ââsdk: [8:160] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 ââscsi 11:0:0:0 ATA SAMSUNG HD154UI {S1XWJ1KS915802}
 â  ââsdl: [8:176] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 ââscsi 12:0:0:0 ATA SAMSUNG HD154UI {S1XWJ1KSC08024}
 â  ââsdm: [8:192] MD raid5 (none/13) 1.36t md0 inactive spare {975d6eb2-285e-ed11-021d-f236c2d05073}
 ââscsi 13:0:0:0 ATA SAMSUNG HD154UI {S1XWJ1KS915804}
    ââsdn: [8:208] MD raid5 (13) 1.36t inactive {975d6eb2-285e-ed11-021d-f236c2d05073}

-- 
Empfehlen Sie GMX DSL Ihren Freunden und Bekannten und wir
belohnen Sie mit bis zu 50,- Euro! https://freundschaftswerbung.gmx.de
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown)
@ 2011-06-09  6:50 Dragon
  2011-06-09 12:01 ` Phil Turmel
  0 siblings, 1 reply; 63+ messages in thread
From: Dragon @ 2011-06-09  6:50 UTC (permalink / raw)
  To: philip; +Cc: linux-raid

Hi Phil,
i know that there is something odd with the raid, thats why i need help.
No i didnt scamble the report. thats what the system output. Sorry for confusing with sdo, this is my usb disk and doesnt belong to the raid. because of the size i didnt have any backup ;(

I do not let the system run 24/7 and as i started at in the morning the sequence has changed.
 fdisk -l |grep sd
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdc: 20.4 GB, 20409532416 bytes
/dev/sdc1   *           1        2372    19053058+  83  Linux
/dev/sdc2            2373        2481      875542+   5  Extended
/dev/sdc5            2373        2481      875511   82  Linux swap / Solaris
Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
Disk /dev/sde: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdg: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdf: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdh: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdi: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdj: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdk: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdl: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdm: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdn: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
Yesterday was the system on disk sdk. now its on sdc?! the system is now and up to the evening online.
here the actual data of the drives again:
mdadm -E /dev/sda
/dev/sda:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee4232 - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     4       8      176        4      active sync   /dev/sdl

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee4244 - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     5       8      192        5      active sync   /dev/sdm

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
 mdadm -E /dev/sdd
/dev/sdd:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee418e - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this    13       8        0       13      spare   /dev/sda

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sde
/dev/sde:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee4196 - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     6       8       16        6      active sync   /dev/sdb

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdf
/dev/sdf:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee41aa - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     8       8       32        8      active sync   /dev/sdc

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdg
/dev/sdg:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee41bc - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     9       8       48        9      active sync   /dev/sdd

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdh
/dev/sdh:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee41ce - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this    10       8       64       10      active sync   /dev/sde

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdi
/dev/sdi:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee41e0 - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this    11       8       80       11      active sync   /dev/sdf

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdj
/dev/sdj:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee41f2 - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this    12       8       96       12      active sync   /dev/sdg

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdk
/dev/sdk:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee41ea - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8      112        0      active sync   /dev/sdh

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdl
/dev/sdl:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee41fe - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8      128        2      active sync   /dev/sdi

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdm
/dev/sdm:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 23:47:53 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee4210 - correct
         Events : 156864

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8      144        3      active sync   /dev/sdj

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8        0       13      spare   /dev/sda
mdadm -E /dev/sdn
/dev/sdn:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 975d6eb2:285eed11:021df236:c2d05073
  Creation Time : Tue Oct 13 23:26:17 2009
     Raid Level : raid5
  Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
     Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
   Raid Devices : 13
  Total Devices : 12
Preferred Minor : 0

    Update Time : Fri Jun  3 22:49:22 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 1dee3313 - correct
         Events : 156606

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this    13       8      160       13      spare   /dev/sdk

   0     0       8      112        0      active sync   /dev/sdh
   1     1       0        0        1      faulty removed
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8      144        3      active sync   /dev/sdj
   4     4       8      176        4      active sync   /dev/sdl
   5     5       8      192        5      active sync   /dev/sdm
   6     6       8       16        6      active sync   /dev/sdb
   7     7       0        0        7      faulty removed
   8     8       8       32        8      active sync   /dev/sdc
   9     9       8       48        9      active sync   /dev/sdd
  10    10       8       64       10      active sync   /dev/sde
  11    11       8       80       11      active sync   /dev/sdf
  12    12       8       96       12      active sync   /dev/sdg
  13    13       8      160       13      spare   /dev/sdk

as far as i can see, now there is no error with a missing superblock of one disk.

how can i download lsdrv with "wget"? Yes the way backwards by shrinking lead to the actual problem.
-- 
Empfehlen Sie GMX DSL Ihren Freunden und Bekannten und wir
belohnen Sie mit bis zu 50,- Euro! https://freundschaftswerbung.gmx.de

^ permalink raw reply	[flat|nested] 63+ messages in thread
* Re:....
@ 2011-04-10  1:20 Young Chang
  0 siblings, 0 replies; 63+ messages in thread
From: Young Chang @ 2011-04-10  1:20 UTC (permalink / raw)


May I ask if you would be eligible to pursue a Business Proposal of $19.7m with me if you dont mind? Let me know if you are interested?

^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown), 
@ 2010-11-13  6:01 Mike Viau
  2010-11-13 19:36 ` Neil Brown
  0 siblings, 1 reply; 63+ messages in thread
From: Mike Viau @ 2010-11-13  6:01 UTC (permalink / raw)
  To: linux-raid; +Cc: debian-user


Hello,

I am trying to re-setup my fake-raid (RAID1) volume with LVM2 like setup previously. I had been using dmraid on a Lenny installation which gave me (from memory) a block device like /dev/mapper/isw_xxxxxxxxxxx_ and also a /dev/One1TB, but have discovered that the mdadm has replaced the older and believed to be obsolete dmraid for multiple disk/raid support.

Automatically the fake-raid LVM physical volume does not seem to be set up. I believe my data is safe as I can insert a knoppix live-cd in the system and mount the fake-raid volume (and browse the files). I am planning on perhaps purchasing another at least 1TB drive to backup the data before trying to much fancy stuff with mdadm in fear of loosing the data.

A few commands that might shed more light on the situation:


pvdisplay (showing the /dev/md/[device] not recognized yet by LVM2, note sdc another single drive with LVM)

  --- Physical volume ---
  PV Name               /dev/sdc7
  VG Name               XENSTORE-VG
  PV Size               46.56 GiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              11920
  Free PE               0
  Allocated PE          11920
  PV UUID               wRa8xM-lcGZ-GwLX-F6bA-YiCj-c9e1-eMpPdL


cat /proc/mdstat (showing what mdadm shows/discovers)

Personalities :
md127 : inactive sda[1](S) sdb[0](S)
      4514 blocks super external:imsm

unused devices: 


ls -l /dev/md/imsm0 (showing contents of /dev/md/* [currently only one file/link ])

lrwxrwxrwx 1 root root 8 Nov  7 08:07 /dev/md/imsm0 -> ../md127


ls -l /dev/md127 (showing the block device)

brw-rw---- 1 root disk 9, 127 Nov  7 08:07 /dev/md127




It looks like I can not even access the md device the system created on boot. 

Does anyone have a guide or tips to migrating from the older dmraid to mdadm for fake-raid?


fdisk -uc /dev/md127  (showing the block device is inaccessible)

Unable to read /dev/md127


dmesg (pieces of dmesg/booting)

[    4.214092] device-mapper: uevent: version 1.0.3
[    4.214495] device-mapper: ioctl: 4.15.0-ioctl (2009-04-01) initialised: dm-devel@redhat.com
[    5.509386] udev[446]: starting version 163
[    7.181418] md: md127 stopped.
[    7.183088] md: bind<sdb>
[    7.183179] md: bind<sda>



update-initramfs -u (Perhaps the most interesting error of them all, I can confirm this occurs with a few different kernels)

update-initramfs: Generating /boot/initrd.img-2.6.32-5-xen-amd64
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory


Revised my information, inital thread on Debian-users thread at:
http://lists.debian.org/debian-user/2010/11/msg01015.html

Thanks for any ones help :)

-M
 		 	   		  

^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown)
@ 2010-01-06 14:19 Lapohos Tibor
  2010-01-06 20:21 ` Michael Evans
  0 siblings, 1 reply; 63+ messages in thread
From: Lapohos Tibor @ 2010-01-06 14:19 UTC (permalink / raw)
  To: linux-raid

Hello, 

I successfully set up an Intel Matrix Raid device with a RAID1 and a RAID0 volume, each having a couple of partitions, but then I could not install GRUB2 on the RAID1 volume, which I wanted to use to boot from and mount as root. It turned out that the "IMSM" metadata is not supported in GRUB2 (v1.97.1) just yet, so I had to turn away from my original plan. 

To "imitate" the setup I originally wanded, I turned both of my drives into AHCI controlled devices in the BIOS (instead of RAID), and I partitioned them to obtain /dev/sda[12] and /dev/sdb[12]. 

Then I used /dev/sd[ab]1 to build a RAID1 set, and /dev/sd[ab]2 to create a RAID0 set using mdadm v 3.0.3: 

> mdadm -C /dev/md0 -v -e 0 -l 1 -n 2 /dev/sda1 /dev/sdb1 
> mdadm -C /dev/md1 -v -e 0 -l 0 -n 2 /dev/sda2 /dev/sdb2 

I set the metadata type to 0.90 because I would like to boot from it and allow the kernel to auto-detect the RAID devices while it's booting, in order to can get away from using an intitrd (I am building my own distribution based on CLFS x86_64 multilib). 

I used cfdisk to partition both of the /dev/md[01] devices, and I obtained /dev/md0p[123] and /dev/md1p[12]. The plan is to use /dev/md0p1 as a RAID1 root partition, and have the system boot from /dev/md0. I formatted /dev/md0p1 as 

> mk2efs -t ext4 -L OS /dev/md0p1 

To this point, things went smoothly. mdadm -D... and mdadm -E... did report back working devices as intended. Then mounted /dev/md0p1 on a directory called /root/os, and I did 

> grub-install --root-directory=/root/os /dev/md0 

or 

> grub-install --root-directory=/root/os "md0" 

and I got a warning and an error message: "Your embedding area is unusually small.  core.img won't fit in it." and "Embedding is not possible, but this is required when the root device is on a RAID array or LVM volume." 

What did I do wrong, and how do I fix it? Thanks ahead, 
Tibor



^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown), 
@ 2009-06-05  0:50 Jack Etherington
  2009-06-05  1:18 ` Roger Heflin
  0 siblings, 1 reply; 63+ messages in thread
From: Jack Etherington @ 2009-06-05  0:50 UTC (permalink / raw)
  To: linux-raid

Hello,
I am not sure whether troubleshooting messages are allowed on the mdadm
mailing list (or it is for development and bugs only) so please point me in
the right direction if this is not the right place.

Before posting here I have tried using the following resources for
information:
>Google
>Distribution IRC channel (Ubuntu)
>Linuxquestions.org

My knowledge of Linux is beginner/moderate.

My setup is:
9x1tb Hard Drives (2xhitachi and 7x Samsung HD103UJ)
Supermicro AOC-SAT2-MV8 8 Port SATA Card
1xMotherboard SATA port
Single RAID5 array created with mdadm, printout of /proc/mdstat:

root@server3:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
[raid10]
md0 : active raid5 sdj1[7] sdc1[0] sda1[8] sdg1[6] sdi1[9](F) sdd1[4]
sde1[3] sdh1[2] sdf1[10](F)
      7814079488 blocks level 5, 64k chunk, algorithm 2 [9/7] [U_UUU_UUU]


A printout of /var/messages is available here: http://pastebin.com/m6499846
as not to make this post any longer...
(The array has been down for about a month now. It is my home storage
server, non-critical, but I do not have a backup)

Also a printout of ‘mdadm --detail /dev/md0’ is available here:
http://pastebin.com/f44b6e069

I have used ‘mdadm -v -A -f /dev/md0’ to get the array online again, and can
read data (intact without errors) from the array, but it soon becomes
degraded again.

Any help on where to start would be greatly appreciated :)

Jack


--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown), 
@ 2009-04-02  4:16 Lelsie Rhorer
  2009-04-02  4:22 ` David Lethe
                   ` (2 more replies)
  0 siblings, 3 replies; 63+ messages in thread
From: Lelsie Rhorer @ 2009-04-02  4:16 UTC (permalink / raw)
  To: linux-raid

I'm having a severe problem whose root cause I cannot determine.  I have a
RAID 6 array managed by mdadm running on Debian "Lenny" with a 3.2GHz AMD
Athlon 64 x 2 processor and 8G of RAM.  There are ten 1 Terabyte SATA
drives, unpartitioned, fully allocated to the /dev/md0 device. The drive
are served by 3 Silicon Image SATA port multipliers and a Silicon Image 4
port eSATA controller.  The /dev/md0 device is also unpartitioned, and all
8T of active space is formatted as a single Reiserfs file system.  The
entire volume is mounted to /RAID.  Various directories on the volume are
shared using both NFS and SAMBA.

Performance of the RAID system is very good.  The array can read and write
at over 450 Mbps, and I don't know if the limit is the array itself or the
network, but since the performance is more than adequate I really am not
concerned which is the case.

The issue is the entire array will occasionally pause completely for about
40 seconds when a file is created.  This does not always happen, but the
situation is easily reproducible.  The frequency at which the symptom
occurs seems to be related to the transfer load on the array.  If no other
transfers are in process, then the failure seems somewhat more rare,
perhaps accompanying less than 1 file creation in 10..  During heavy file
transfer activity, sometimes the system halts with every other file
creation.  Although I have observed many dozens of these events, I have
never once observed it to happen except when a file creation occurs. 
Reading and writing existing files never triggers the event, although any
read or write occurring during the event is halted for the duration. 
(There is one cron jog which runs every half-hour that creates a tiny file;
this is the most common failure vector.)  There are other drives formatted
with other file systems on the machine, but the issue has never been seen
on any of the other drives.  When the array runs its regularly scheduled
health check, the problem is much worse.  Not only does it lock up with
almost every single file creation, but the lock-up time is much longer -
sometimes in excess of 2 minutes.

Transfers via Linux based utilities (ftp, NFS, cp, mv, rsync, etc) all
recover after the event, but SAMBA based transfers frequently fail, both
reads and writes.

How can I troubleshoot and more importantly resolve this issue?


^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown), 
@ 2008-05-14 12:53 Henry, Andrew
  2008-05-14 21:13 ` David Greaves
  0 siblings, 1 reply; 63+ messages in thread
From: Henry, Andrew @ 2008-05-14 12:53 UTC (permalink / raw)
  To: linux-raid@vger.kernel.org

I'm new to software RAID and this list.  I read a few months of archives to see if I found answers but only partly...

I set up a raid1 set using 2xWD Mybook eSATA discs on a Sil CardBus controller.  I was not aware of automount rules and it didn't work, and I want to wipe it all and start again but cannot.  I read the thread listed in my subject and it helped me quite a lot but not fully.  Perhaps someone would be kind enough to help me the rest of the way.  This is what I have done:

1. badblocks -c 10240 -s -w -t random -v /dev/sd[ab]
2. parted /dev/sdX mklabel msdos ##on both drives
3a. parted /dev/sdX mkpart primary 0 500.1GB ##on both drives
3b. parted /dev/sdX set 1 raid on ##on both drives
4. mdadm --create --verbose /dev/md0 --metadata=1.0 --raid-devices=2 --level=raid1 --name=backupArray /dev/sd[ab]1
5. mdadm --examine --scan | tee /etc/mdadm.conf and set 'DEVICES partitions' so that I don't hard code any devide names that may change on reboot.
6. mdadm --assemble --name=mdBackup /dev/md0 ##assemble is run during --create it seems and this was not needed.
7. cryptsetup --verbose --verify-passphrase luksFormat /dev/md0
8. cryptsetup luksOpen /dev/md0 raid500
9. pvcreate /dev/mapper/raid500
10. vgcreate vgbackup /dev/mapper/raid500
11. lvcreate --name lvbackup --size 450G vgbackup ## check PEs first with vgdisplay
12. mkfs.ext3 -j -m 1 -O dir_index,filetype,sparse_super /dev/vgbackup/lvbackup
13. mkdir /mnt/raid500; mount /dev/vgbackup/lvbackup /mnt/raid500"

This worked perfectly.  Did not test but everything lokked fine and I could use the mount.  Thought: lets see if everything comes up at boot (yes, I had edited fstab to mount /dev/vgbackup/lvbackup and set crypttab to start luks on raid500.
Reboot failed.  Fsck could not check raid device and would not boot.  Kernel had not autodetected md0.  I now know this is because superblock format 1.0 puts metadata at end of device and therefore kernel cannot autodetect.
I started a LiveCD, mounted my root lvm, removed entries from fstab/crypttab and rebooted.  Reboot was now OK.
Now I tried to wipe the array so I can re-create with 0.9 metadata superblock.
I ran dd on sd[ab] for a few hundred megs, which wiped partitions.  I removed /etc/mdadm.conf.  I then repartitioned and rebooted.  I then tried to recreate the array with:

mdadm --create --verbose /dev/md0 --raid-devices=2 --level=raid1 /dev/sd[ab]1

but it reports that the devices are already part of an array and do I want to continue??  I say yes and it then immedialtely  says "out of sync, resyncing existing array" (not exact words but I suppose you get the idea)
I reboot to kill sync and then dd again, repartition, etc ect then reboot.
Now when server comes up, fdisk reports (it's the two 500GB discs that are in the array):

[root@k2 ~]# fdisk -l

Disk /dev/hda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          19      152586   83  Linux
/dev/hda2              20        9729    77995575   8e  Linux LVM

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       60801   488384001   fd  Linux raid autodetect

Disk /dev/sdb: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       38913   312568641   83  Linux

Disk /dev/md0: 500.1 GB, 500105150464 bytes
2 heads, 4 sectors/track, 122095984 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table

Where previously, I had /dev/sdc that was the same as /dev/sda above (ignore the 320GB, that is separate and on boot, they sometimes come up in different order).
Now, I cannot write to sda above (500GB disc) with commands: dd, mdadm -zero-superblock etc etc.  I can write to md0 with dd but what the heck happened to sdc??  Why did it become /dev/md0??
Now I read the forum thread and ran dd on beginning and end of sda and md0 with /dev/zero using seek to skip first 490GB and deleted /dev/md0 then rebooted and now I see sda but there is no sdc or md0.
I cannot see any copy of mdadm.conf in /boot and initramfs-update does not work on CentOS, but I am more used to Debian and do not know the CentOS equivalent.  I do know that I have now completely dd'ed the first 10MB and last 2MB of sda and md0 and have deleted (with rm -f) /dev/md0, and now *only* /dev/sda (plus internal had and extra 320GB sdb) shows up in fdisk -l:  There is no md0 or sdc.

So after all that rambling, my question is:

Why did /dev/md0 appear in fdisk -l when it had previously been sda/sdb even after successfully creating my array before reboot?
How do I remove the array?  Have I now done everything to remove it?
I suppose (hope) that if I go to the server and power cycle it and the esata discs, my sdc probably will appear again ( I have not done this yet-no chance today) but why does it not appear after a soft reboot after having dd'd /dev/md0?


andrew henry
Oracle DBA

infra solutions|ao/bas|dba
Logica

^ permalink raw reply	[flat|nested] 63+ messages in thread
* RE:.
@ 2006-05-30  8:06 Jake White
  0 siblings, 0 replies; 63+ messages in thread
From: Jake White @ 2006-05-30  8:06 UTC (permalink / raw)
  To: linux-raid

Hello!
Our company Barcelo Travel Inc. seek enthusiastic, organised and alert individual to 
support our busy sales offices. If you live in germany our offer its good chance change your liife.
You must have excellent customer relations, communication and administration skills 
Successful candidates will be required to work in Main our Office for approximately 
one month.
To apply, please email CV to barcelotravinc@aol.com
regards,
Dominico Barcelo


^ permalink raw reply	[flat|nested] 63+ messages in thread
* Re: Re:
@ 2006-02-26  5:04 Norberto X. Milton
  0 siblings, 0 replies; 63+ messages in thread
From: Norberto X. Milton @ 2006-02-26  5:04 UTC (permalink / raw)
  To: linux-raid


Hey,

We have Special    0ff3rss      and some  New       Pr0ducctss.

WiDE variety of       pre_scr!p_t!0n       med!_c@-t!0ns     to choose fro=
m.

------------------------------------------------------------------------

copy the address below and paste in o your web browser:

afterdays.grindscull.net

------------------------------------------------------------------------


His workbook is wedged in the window,.
Make of my pass a road to the light=20.
It's autumn, 1991, and I'm sitting on the edge of the bed.
Of a tear that runs down an angel's face..
I had nothing further to do with them,.

Get back to you later,

Olen Labovitz

^ permalink raw reply	[flat|nested] 63+ messages in thread
* Re:
@ 2006-02-15  4:30 Hillary
  0 siblings, 0 replies; 63+ messages in thread
From: Hillary @ 2006-02-15  4:30 UTC (permalink / raw)
  To: linux-raid

Hello linux-raid@vger.kernel.org,

We sell brand-name and exact gen_ericcc equivalents.

Our prompt, courteous and disc#reet ser_vice will make you smile.

---------------------------------

copy the address below and paste in your web browser:

anorthite.techsmartjobs.com/?zz=3Dlowcost

----------------------------------



push the "Perform Currency Conversion" button..
Three hundred years in the deepest South:=20.
Gnarled, twisted, and curled, the nails yellow and claw-like, it is as if,=

^ permalink raw reply	[flat|nested] 63+ messages in thread
* (unknown)
@ 2006-01-11 14:47 bhess
  2006-01-12 11:16 ` David Greaves
  0 siblings, 1 reply; 63+ messages in thread
From: bhess @ 2006-01-11 14:47 UTC (permalink / raw)
  To: linux-raid

linux-raid@vger.kernel.org



I originally sent this to Neil Brown who suggested I sent it to you.







Any help would be appreciated.







Has anyone put an effort into building a raid 1 based on USB connected

drives under Redhat 3/4 not as the root/boot

drive. A year ago I don't think this made any sense but with the

price of drives being far less that then the equivalent tape media

and the simple USB to IDE smart cable I am evaluating an expandable

USB disk farm for two uses. A reasonable robust place to store

data until I can select what I want to put on tape. The second

is secondary storage for all of the family video tapes that I am

capturing in preparation for editing to DVD. The system does not

have to be fast just large, robust, expandable and cheep.



I currently run a Redhat sandbox with a hardware raid 5 and 4 120G

SATA drives. I have added USB drives and have them mount with

the LABEL=/PNAME option in fstab. In this manner they end up

in the right place after reboot. I do not know enough about

the Linux drive interface to know if USB attached devices will

get properly mounted into the raid at reboot and after changes

or additions of drives to the USB.



I am a retired Bell Labs Research supervisor. I was in Murray Hill

when UNIX was born and still use Intel based UNIX in the current

form of SCO Unixware both professionally and personally. Unixware

is no longer a viable product since I see no future in it and

Oracle is not supported. I know way to much about how the guts

of Unixware works thanks to a friend who was one of SCO's kernel

and storage designers. I know way to little how Linux works to

get a USB based raid up without a lot of research and tinkering.

I don't mind research and tinkering but I don't like reinventing

the wheel.



I have read The Software-RAID HOWTO by Jakob 0stergaard and

Emilio Bueso and downloaded mdadm. I have not tried it yet.





The system I have in mind uses a Intel server motherboard,

hardware raid 1 SATA root/boot/swap drive, SCSI tape drive

and a 4 port USB card. In a 2U chasses. A second 2U chassis

will contain a supply, up to 14 drives and lots of fans.

I have everything except the drives. The sole use of this system

will be a disk farm with a NFS and Samba server. It will run

under Redhat 3 or 4. I am leaning toward Redhat 4 since I

understand SCSI tape support is more stable under 4. Any

comment in this area would also be appreciated.



Can you point me in the direction of newer articles that cover

Linux raid using USB connected drives or do you have any

suggestions on the configuration of a system. My main concern

is how to get USB drives correctly put back in the raid after

boot and/or a USB change since I do not know how they are assigned

to /dev/sdxy in the first place and how USB hubs interact with

the assignments. I realize I should have other concerns and

just don't know enough. Ignorance is bliss, up to an init 6.



Thank You for your time.



Bill Hess



bhess@patmedia.net 


^ permalink raw reply	[flat|nested] 63+ messages in thread
[parent not found: <57GDJLHJLEAG07CI@vger.kernel.org>]
[parent not found: <4HCKFFJ3GIC1F340@vger.kernel.org>]
* (unknown)
@ 2002-06-04 15:47 Colonel
  2002-06-04 21:55 ` Jure Pecar
  0 siblings, 1 reply; 63+ messages in thread
From: Colonel @ 2002-06-04 15:47 UTC (permalink / raw)
  To: linux-raid, roy

From: Colonel <klink@clouddancer.com>
To: roy@karlsbakk.net
CC: linux-raid@vger.kernel.org
In-reply-to: <200206041259.g54CxuP07700@mail.pronto.tv> (message from Roy
	Sigurd Karlsbakk on Tue, 4 Jun 2002 14:59:55 +0200)
Subject: Re: SV: RAID-6 support in kernel?
Reply-to: klink@clouddancer.com
References: <2D0AFEFEE711D611923E009027D39F2B02F17E@nasexs1.meridian-data.com> <200206041259.g54CxuP07700@mail.pronto.tv>

   From: Roy Sigurd Karlsbakk <roy@karlsbakk.net>
   Organization: Pronto TV AS
   Date:	Tue, 4 Jun 2002 14:59:55 +0200
   Cc: Christian Vik <cvik@vanadis.no>, linux-kernel@vger.kernel.org,
	   linux-raid@vger.kernel.org
   Sender: linux-raid-owner@vger.kernel.org
   X-Mailing-List:	linux-raid@vger.kernel.org
   News-Group: list.kernel

   > Of course, for a 4 drive setup there's no reason to use RAID 6 at all (RAID
   > 10 will withstand any two drive failure if you only use 4 drives), but
   > that's the reasoning.  I think the best way to deal with the read-modify
   > write problem for RAID 6 is to use a small chunk size and deal with NxN
   > chunks as a unit.  But YMMV.

   RAID10 will _not_ withstand any two-drive fail in a 4-drive scenario. If D1 
   and D3 fail, you're fscked

   D1 D2
   D3 D4


True, I think that the point is that of the 5 possible 2 disk
failures, 2 of them (in striped mirrors, not mirrored stripes) kill
the array.  For RAID5, all of them kill the array.  But the fancy RAID
setups are for _large_ arrays, not 4 disks, unless you are after the
small write speed improvement (as I am).

Plus any raid metadevice made of metadevices cannot autostart, which
means tinkering during startup, which is only worth it for those large
drive arrays.


r

---
Personalities : [raid0] [raid1] 
read_ahead 1024 sectors
md0 : active raid0 md4[3] md3[2] md2[1] md1[0]
      34517312 blocks 64k chunks

^ permalink raw reply	[flat|nested] 63+ messages in thread

end of thread, other threads:[~2020-08-12 10:54 UTC | newest]

Thread overview: 63+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2010-03-08  1:37 (unknown), Leslie Rhorer
2010-03-08  1:53 ` Neil Brown
2010-03-08  2:01   ` Leslie Rhorer
2010-03-08  2:22     ` Michael Evans
2010-03-08  3:20       ` Leslie Rhorer
2010-03-08  3:27         ` RAID5 - RAID6 Leslie Rhorer
2010-03-08  4:19           ` Michael Evans
2010-03-08  3:31         ` Michael Evans
2010-03-08  8:59           ` RAID5 - RAID6 Leslie Rhorer
2010-03-08  9:09             ` Michael Evans
  -- strict thread matches above, loose matches on Subject: below --
2020-08-12 10:54 Alex Anadi
2020-06-24 13:54 Re; test02
2017-11-13 14:55 Amos Kalonzo
2017-05-03  6:23 Re: H.A
2017-04-13 15:58 (unknown), Scott Ellentuch
     [not found] ` <CAK2H+efb3iKA5P3yd7uRqJomci6ENvrB1JRBBmtQEpEvyPMe7w@mail.gmail.com>
2017-04-13 16:38   ` Scott Ellentuch
2017-02-23 15:09 Qin's Yanjun
2016-11-06 21:00 (unknown), Dennis Dataopslag
2016-11-07 16:50 ` Wols Lists
2016-11-07 17:13   ` Re: Wols Lists
2016-11-17 20:33 ` Re: Dennis Dataopslag
2016-11-17 22:12   ` Re: Wols Lists
2015-09-30 12:06 Apple-Free-Lotto
2014-11-26 18:38 (unknown), Travis Williams
2014-11-26 20:49 ` NeilBrown
2014-11-29 15:08   ` Re: Peter Grandi
2012-12-25  0:12 (unknown), bobzer
2012-12-25  5:38 ` Phil Turmel
     [not found]   ` <CADzS=ar9c7hC1Z7HT9pTUEnoPR+jeo8wdexrrsFbVfPnZ9Tbmg@mail.gmail.com>
2012-12-26  2:15     ` Re: Phil Turmel
2012-12-26 11:29       ` Re: bobzer
2012-12-17  0:59 (unknown), Maik Purwin
2012-12-17  3:55 ` Phil Turmel
2011-09-26  4:23 (unknown), Kenn
2011-09-26  4:52 ` NeilBrown
2011-09-26  7:03   ` Re: Roman Mamedov
2011-09-26 23:23     ` Re: Kenn
2011-09-26  7:42   ` Re: Kenn
2011-09-26  8:04     ` Re: NeilBrown
2011-09-26 18:04       ` Re: Kenn
2011-09-26 19:56         ` Re: David Brown
2011-06-18 20:39 (unknown) Dragon
2011-06-19 18:40 ` Phil Turmel
2011-06-10 20:26 (unknown) Dragon
2011-06-11  2:06 ` Phil Turmel
2011-06-09 12:16 (unknown) Dragon
2011-06-09 13:39 ` Phil Turmel
2011-06-09  6:50 (unknown) Dragon
2011-06-09 12:01 ` Phil Turmel
2011-04-10  1:20 Re: Young Chang
2010-11-13  6:01 (unknown), Mike Viau
2010-11-13 19:36 ` Neil Brown
2010-01-06 14:19 (unknown) Lapohos Tibor
2010-01-06 20:21 ` Michael Evans
2010-01-06 20:57   ` Re: Antonio Perez
2009-06-05  0:50 (unknown), Jack Etherington
2009-06-05  1:18 ` Roger Heflin
2009-04-02  4:16 (unknown), Lelsie Rhorer
2009-04-02  4:22 ` David Lethe
2009-04-05  0:12   ` RE: Lelsie Rhorer
2009-04-05  0:38     ` Greg Freemyer
2009-04-05  5:05       ` Lelsie Rhorer
2009-04-05 11:42         ` Greg Freemyer
2009-04-05  0:45     ` Re: Roger Heflin
2009-04-05  5:21       ` Lelsie Rhorer
2009-04-05  5:33         ` RE: David Lethe
2009-04-02  7:33 ` Peter Grandi
2009-04-02 13:35 ` Re: Andrew Burgess
2008-05-14 12:53 (unknown), Henry, Andrew
2008-05-14 21:13 ` David Greaves
2006-05-30  8:06 Jake White
2006-02-26  5:04 Norberto X. Milton
2006-02-15  4:30 Re: Hillary
2006-01-11 14:47 (unknown) bhess
2006-01-12 11:16 ` David Greaves
2006-01-12 17:20   ` Re: Ross Vandegrift
2006-01-17 12:12     ` Re: David Greaves
     [not found] <57GDJLHJLEAG07CI@vger.kernel.org>
2005-07-24 10:31 ` Re: jfire
     [not found] <4HCKFFJ3GIC1F340@vger.kernel.org>
2005-05-30  2:49 ` Re: bouche
2002-06-04 15:47 (unknown) Colonel
2002-06-04 21:55 ` Jure Pecar

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).