* USB chassis fails, fate of the software raid?
From: dfc @ 2017-10-18 19:04 UTC
To: linux-raid
I have an Orico (3559SUSJ3) external 5 bay
hard disk drive enclosure attached via
USB to my desktop computer.
It holds 5 * 4 TB disks in raid5 and
has functioned for a couple of years.
Then the power supply (or fan
or some internal component of the chassis)
failed and -- from one boot to the next --
the chassis was DOA, i.e. it did not even power
on the disks.
When last used, the drives were all active and
in good shape; I copy below the output of
mdadm --examine for each member disk, taken
just a short time before the failure.
My main question is whether inserting
the 5 disks into another JBOD chassis
is likely to work. In particular, is
mdadm "smart enough" to find all
the components, and/or do I need to do
something to help it along? In fact,
our IT person has suggested that the
data may well be lost.
Details:
raid5 (no spares)
20 TB
XFS filesystem
Debian, kernel 3.16.0-4-amd64
Also, since I first used the JBOD with software
raid I have come to realize that the USB connection
is not a good choice (though I have never had
problems). What sort of connection (at the low
price end of the market for HDD enclosures)
would be better? eSATA or something else?
Many thanks for answers/suggestions.
David
-----------------------------------------------------------
OUTPUT of mdadm --examine (one section per member disk)
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 954b2546:5c467e9c:a4eb74e3:27dad837
Name : impala:0
Creation Time : Fri May 22 15:32:31 2015
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 7813771264 (3725.90 GiB 4000.65 GB)
Array Size : 15627542528 (14903.59 GiB 16002.60 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : f0cda836:8c1c28d1:53710d20:db8d088a
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Sep 2 18:54:20 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 3d0d8085 - correct
Events : 197036
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 4
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 954b2546:5c467e9c:a4eb74e3:27dad837
Name : impala:0
Creation Time : Fri May 22 15:32:31 2015
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 7813771264 (3725.90 GiB 4000.65 GB)
Array Size : 15627542528 (14903.59 GiB 16002.60 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 2e1a57ff:f892fb23:1f698390:53dd98e3
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Sep 2 18:54:20 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 22958a72 - correct
Events : 197036
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 954b2546:5c467e9c:a4eb74e3:27dad837
Name : impala:0
Creation Time : Fri May 22 15:32:31 2015
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 7813771264 (3725.90 GiB 4000.65 GB)
Array Size : 15627542528 (14903.59 GiB 16002.60 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 3be7bbb4:4e5f07e3:f78f3c31:5bd6df6b
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Sep 2 18:54:20 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : c00643b3 - correct
Events : 197036
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdg1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 954b2546:5c467e9c:a4eb74e3:27dad837
Name : impala:0
Creation Time : Fri May 22 15:32:31 2015
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 7813771264 (3725.90 GiB 4000.65 GB)
Array Size : 15627542528 (14903.59 GiB 16002.60 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 2b09eed0:0a6ead54:48671d28:0abd1b6e
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Sep 2 18:54:20 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 46fb325e - correct
Events : 197036
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdh1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 954b2546:5c467e9c:a4eb74e3:27dad837
Name : impala:0
Creation Time : Fri May 22 15:32:31 2015
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 7813771264 (3725.90 GiB 4000.65 GB)
Array Size : 15627542528 (14903.59 GiB 16002.60 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 324b49de:233d8769:7f75afad:dddb0ec8
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Sep 2 18:54:20 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 48b57088 - correct
Events : 197036
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
* Re: USB chassis fails, fate of the software raid?
From: Rudy Zijlstra @ 2017-10-18 20:28 UTC
To: dfc, linux-raid
Hi David,
On 18/10/2017 at 21:04, dfc wrote:
> I have an Orico (3559SUSJ3) external 5 bay
> hard disk drive enclosure attached via
> USB to my desktop computer.
USB is indeed not the best choice.
> It holds 5 * 4 TB disks in raid5 and
> has functioned for a couple of years.
<snip>
> My main question is whether inserting
> the 5 disks into another JBOD chassis
> is likely to work.
Yes, this should work. With a little luck you would not even need to
force it, as with a component failure all disks should have lost
connection at the same time.
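A quick sanity check, for instance, is to compare the event counters
in the superblocks: they must all match for a clean assemble, and in
the --examine output you posted they do (Events : 197036 on all five).
Something like the following, with device names adjusted to however
the new chassis enumerates the disks:

   mdadm --examine /dev/sd[d-h]1 | grep -E '/dev/sd|Events'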
The biggest risk is that the failure caused power spikes before final
death, causing damage to the HDDs. But without knowing the actual
failure mode, that is pure conjecture.
You could test each disk separately -- as long as you only read from
them. Reading only ensures that no changes are made that would make
re-assembling the raid difficult later.
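A minimal read-only pass per disk could look like this (/dev/sdX is a
placeholder; note that badblocks is read-only in its default mode,
i.e. without -n or -w):

   smartctl -a /dev/sdX          # SMART health status and error log
   badblocks -sv /dev/sdX        # read-only surface scan
   dd if=/dev/sdX of=/dev/null bs=1M    # simple full-disk read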
> In particular, is
> mdadm "smart enough" to find all
> the components, and/or do I need to do
> something to help it along? In fact,
> our IT person has suggested that the
> data may well be lost.
Your IT person apparently either thinks the HW failure caused extensive
damage, or wants you to do something different, or... is not very
knowledgeable about this type of setup.
<snip>
> Also, since I first used the JBOD with software
> raid I came to realize that the usb connection
> is not a good choice (though I have never had
> problems). What sort of connection (at the low
> price end of the market for HDD enclosures)
> would be better? eSata or something else?
eSATA would work, but it depends on the type of enclosure you can get.
I leave that to people based in the USA, who know what is available there.
Cheers
Rudy
* Re: USB chassis fails, fate of the software raid?
From: dfc @ 2017-10-19 0:06 UTC
To: Rudy Zijlstra, linux-raid
On Wed, 2017-10-18 at 22:28 +0200, Rudy Zijlstra wrote:
> Hi David,
>
>
Hi Rudy,
> On 18/10/2017 at 21:04, dfc wrote:
> > I have an Orico (3559SUSJ3) external 5 bay
> > hard disk drive enclosure attached via
> > USB to my desktop computer.
>
> USB is indeed not the best choice.
>
> > It holds 5 * 4 TB disks in raid5 and
> > has functioned for a couple of years.
>
> <snip>
> > My main question is whether inserting
> > the 5 disks into another JBOD chassis
> > is likely to work.
>
> Yes, this should work. With a little luck you would not even need to
> force it, as with a component failure all disks should have lost
> connection at the same time.
> The biggest risk is that the failure caused power spikes before final
> death, causing damage to the HDDs. But without knowing the actual
> failure mode, that is pure conjecture.
> You could test each disk separately -- as long as you only read from
> them. Reading only ensures that no changes are made that would make
> re-assembling the raid difficult later.
Thanks, I will try reading the disks and then install them in a new
chassis.
> > In particular, is
> > mdadm "smart enough" to find all
> > the components, and/or do I need to do
> > something to help it along? In fact,
> > our IT person has suggested that the
> > data may well be lost.
>
> Your IT person apparently either thinks the HW failure caused
> extensive damage, or wants you to do something different, or... is
> not very knowledgeable about this type of setup.
>
> <snip>
> > Also, since I first used the JBOD with software
> > raid I have come to realize that the USB connection
> > is not a good choice (though I have never had
> > problems). What sort of connection (at the low
> > price end of the market for HDD enclosures)
> > would be better? eSATA or something else?
>
> eSATA would work, but it depends on the type of enclosure you can
> get. I leave that to people based in the USA, who know what is
> available there.
>
> Cheers
>
> Rudy
* Re: USB chassis fails, fate of the software raid?
From: Phil Turmel @ 2017-10-19 0:44 UTC
To: dfc, Rudy Zijlstra, linux-raid
On 10/18/2017 08:06 PM, dfc wrote:
> On Wed, 2017-10-18 at 22:28 +0200, Rudy Zijlstra wrote:
>> USB is indeed not the best choice.
Rudy, you master of understatement!
>>> My main question is whether inserting the 5 disks into another
>>> JBOD chassis is likely to work.
>>
>> Yes, this should work. With a little luck you would not even need
>> to force it, as with a component failure all disks should have
>> lost connection at the same time.
I would be shocked if these disks *won't* assemble. Assembly might need
--force if any writes were in flight, but I doubt it.
>> You could test each disk separately -- as long as you only read
>> from them. Reading only ensures that no changes are made that
>> would make re-assembling the raid difficult later.
>
> Thanks, I will try reading the disks and then install them in a new
> chassis.
Just put them in a chassis and try to assemble. If it succeeds, you are
done. If assembly fails, report what mdadm had to say.
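For example, something along these lines should do it (device names
are illustrative; mdadm finds the members by the Array UUID stored in
their superblocks):

   mdadm --assemble /dev/md0 /dev/sd[d-h]1
   # or simply let mdadm scan for them:
   mdadm --assemble --scan
   # only if a plain assemble refuses due to mismatched event counts:
   mdadm --assemble --force /dev/md0 /dev/sd[d-h]1

Afterwards, mounting the XFS filesystem read-only first (mount -o ro)
is a cheap way to verify it before returning to normal use.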
>>> In particular, is mdadm "smart enough" to find all the components,
>>> and/or do I need to do something to help it along? In fact, our
>>> IT person has suggested that the data may well be lost.
>>
>> Your IT person apparently either thinks the HW failure caused
>> extensive damage, or wants you to do something different, or... is
>> not very knowledgeable about this type of setup.
Yeah, it does sound like somebody is out of their depth.
>>> Also, since I first used the JBOD with software raid I have come
>>> to realize that the USB connection is not a good choice (though I
>>> have never had problems). What sort of connection (at the low
>>> price end of the market for HDD enclosures) would be better?
>>> eSATA or something else?
>>
>> eSATA would work, but it depends on the type of enclosure you can
>> get. I leave that to people based in the USA, who know what is
>> available there.
eSATA with a port multiplier should be fine. Or spend a bit more for
a JBOD with a SAS port and a suitable PCIe adapter.
Phil
* Re: USB chassis fails, fate of the software raid?
From: dfc @ 2017-10-19 17:04 UTC
To: Phil Turmel, Rudy Zijlstra, linux-raid
Phil, thanks for your comments below.
On Wed, 2017-10-18 at 20:44 -0400, Phil Turmel wrote:
> On 10/18/2017 08:06 PM, dfc wrote:
> > On Wed, 2017-10-18 at 22:28 +0200, Rudy Zijlstra wrote:
> > > USB is indeed not the best choice.
>
> Rudy, you master of understatement!
>
> > > > My main question is whether inserting the 5 disks into another
> > > > JBOD chassis is likely to work.
> > >
> > > Yes, this should work. With a little luck you would not even need
> > > to force it, as with a component failure all disks should have
> > > lost connection at the same time.
>
> I would be shocked if these disks *won't* assemble. Assembly might
> need --force if any writes were in flight, but I doubt it.
>
> > > You could test each disk separately -- as long as you only read
> > > from them. Reading only ensures that no changes are made that
> > > would make re-assembling the raid difficult later.
> >
> > Thanks, I will try reading the disks and then install them in a new
> > chassis.
>
> Just put them in a chassis and try to assemble. If it succeeds, you
> are done. If assembly fails, report what mdadm had to say.
Will do
>
> > > > In particular, is mdadm "smart enough" to find all the
> > > > components, and/or do I need to do something to help it along?
> > > > In fact, our IT person has suggested that the data may well be
> > > > lost.
> > >
> > > Your IT person apparently either thinks the HW failure caused
> > > extensive damage, or wants you to do something different, or...
> > > is not very knowledgeable about this type of setup.
>
> Yeah, it does sound like somebody is out of their depth.
>
> > > > Also, since I first used the JBOD with software raid I have come
> > > > to realize that the USB connection is not a good choice (though I
> > > > have never had problems). What sort of connection (at the low
> > > > price end of the market for HDD enclosures) would be better?
> > > > eSATA or something else?
> > >
> > > eSATA would work, but it depends on the type of enclosure you
> > > can get. I leave that to people based in the USA, who know what
> > > is available there.
>
> eSATA with a port multiplier should be fine. Or spend a bit more
> for a JBOD with a SAS port and a suitable PCIe adapter.
Thank you, I will hunt for a JBOD with a SAS port.
I need 5 or perhaps 6 disks in the array and am
still searching for a suitable unit.
David
>
> Phil