* Advice for recovering array containing LUKS encrypted LVM volumes
From: P Orrifolius @ 2013-08-04 5:49 UTC (permalink / raw)
To: linux-raid
Hello.
I have an 8 device RAID6. There are 4 drives on each of two
controllers and it looks like one of the controllers failed
temporarily. The system has been rebooted and all the individual
drives are available again but the array has not auto-assembled,
presumably because the Events count is different... 92806 on 4 drives,
92820 on the other 4.
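For anyone wanting to check the same thing, the per-device counts can be
compared with something like this (device names here are placeholders,
not necessarily my actual member devices):

    # print the Events counter recorded in each member's md superblock
    for d in /dev/sd[a-h]; do
        printf '%s: ' "$d"
        mdadm --examine "$d" | grep 'Events'
    done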
And of course the sick feeling in my stomach tells me that I haven't
got recent backups of all the data on there.
What is the best/safest way to try and get the array up and working
again? Should I just work through
https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID
?
Is there anything special I can or should do given the raid is holding
encrypted LVM volumes? The array is the only PV in a VG holding LVs
that are LUKS encrypted, within which are (mainly) XFS filesystems.
The LVs/filesystems with the data I'd be most upset about losing
weren't decrypted/mounted at the time. Is that likely to improve the
odds of recovery?
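For clarity, the stack is md -> LVM -> LUKS -> XFS, so bringing one of
these filesystems up looks roughly like this once the array itself is
healthy (the VG/LV/mapper names below are illustrative, not my real ones):

    vgchange -ay vg0                                # activate the VG whose only PV is /dev/md0
    cryptsetup luksOpen /dev/vg0/data data_crypt    # unlock the LUKS container inside the LV
    mount -t xfs /dev/mapper/data_crypt /mnt/data   # mount the XFS filesystem within it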
Thanks for your help.
* Re: Advice for recovering array containing LUKS encrypted LVM volumes
From: Stan Hoeppner @ 2013-08-04 13:09 UTC (permalink / raw)
To: P Orrifolius; +Cc: linux-raid
On 8/4/2013 12:49 AM, P Orrifolius wrote:
> I have an 8 device RAID6. There are 4 drives on each of two
> controllers and it looks like one of the controllers failed
> temporarily.
Are you certain the fault was caused by the HBA? Hardware doesn't tend
to fail temporarily; it does often fail intermittently before complete
failure. If you're certain it's the HBA, you should replace it before
attempting to bring the array back up.
Do you have 2 SFF8087 cables connected to two backplanes, or do you have
8 discrete SATA cables connected directly to the 8 drives? WRT the set
of 4 drives that dropped, do these four share a common power cable to
the PSU that is not shared by the other 4 drives? The point of these
questions is to make sure you know the source of the problem before
proceeding. It could be the HBA, but it could also be a
power/cable/connection problem, a data/cable/connection problem, or a
failed backplane. Cheap backplanes, i.e. cheap hotswap drive cages
often cause such intermittent problems as you've described here.
> The system has been rebooted and all the individual
> drives are available again but the array has not auto-assembled,
> presumably because the Events count is different... 92806 on 4 drives,
> 92820 on the other 4.
>
> And of course the sick feeling in my stomach tells me that I haven't
> got recent backups of all the data on there.
Given the nature of the failure you shouldn't have lost, or had
corrupted, more than a single stripe or maybe a few stripes. Let's hope
this did not include a bunch of XFS directory inodes.
> What is the best/safest way to try and get the array up and working
> again? Should I just work through
> https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID
Again, get the hardware straightened out first or you'll continue to
have problems.
Once that's accomplished, skip to the "Force assembly" section in the
guide you referenced. You can ignore the preceding $OVERLAYS and disk
copying steps because you know the problem wasn't/isn't the disks.
Simply force assembly.
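In practice that amounts to something like the following; substitute
your real array and member device names:

    mdadm --stop /dev/md0                        # stop any partial assembly first
    mdadm --assemble --force /dev/md0 /dev/sd[a-h]
    cat /proc/mdstat                             # confirm the array is up
    mdadm --detail /dev/md0

The --force flag tells md to accept the members whose event counts lag
slightly behind, which is exactly your situation.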
> Is there anything special I can or should do given the raid is holding
> encrypted LVM volumes? The array is the only PV in a VG holding LVs
> that are LUKS encrypted, within which are (mainly) XFS filesystems
Due to the nature of the failure, which was 4 drives simultaneously
going off line and potentially having partial stripes written, the only
thing you can do is force assembly and clean up the damage, if there is
any. Best case scenario is that XFS journal replay works, and you maybe
have a few zero length files if any were being modified in place at the
time of the event. Worst case scenario is directory inodes were being
written and journal replay doesn't recover the damaged inodes.
Any way you slice it, you simply have to cross your fingers and go. If
you didn't have many writes in flight at the time of the failure, you
should come out of this ok. You stated multiple XFS filesystems. Some
may be fine, others damaged. Depends on what, if anything, was being
written at the time.
> The LVs/filesystems with the data I'd be most upset about losing
> weren't decrypted/mounted at the time. Is that likely to improve the
> odds of recovery?
Any filesystem that wasn't mounted should not have been touched by this
failure. The damage should be limited to the filesystem(s) atop the
stripe(s) that were being flushed at the time of the failure. From your
description, I'd think the damage should be pretty limited, again
assuming you had few writes in flight at the time.
--
Stan
* Re: Advice for recovering array containing LUKS encrypted LVM volumes
From: P Orrifolius @ 2013-08-06 1:54 UTC (permalink / raw)
To: stan; +Cc: linux-raid
Thanks for your response...
On 5 August 2013 01:09, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> On 8/4/2013 12:49 AM, P Orrifolius wrote:
>
>> I have an 8 device RAID6. There are 4 drives on each of two
>> controllers and it looks like one of the controllers failed
>> temporarily.
>
> Are you certain the fault was caused by the HBA? Hardware doesn't tend
> to fail temporarily; it does often fail intermittently before complete
> failure. If you're certain it's the HBA, you should replace it before
> attempting to bring the array back up.
>
> Do you have 2 SFF8087 cables connected to two backplanes, or do you have
> 8 discrete SATA cables connected directly to the 8 drives? WRT the set
> of 4 drives that dropped, do these four share a common power cable to
> the PSU that is not shared by the other 4 drives?
The full setup, an el-cheapo rig used for media, backups etc at home, is:
8x2TB SATA drives, split across two Vantec NexStar HX4 enclosures.
These separately powered enclosures have a single USB3 plug and a
single eSATA plug. The documentation states that a "Port Multiplier
Is Required For eSATA".
The original intention was to connect them via eSATA directly to my
motherboard. Subsequently I determined that my motherboard only
supports command-based switching, not FIS-based switching. I had a
look for an FIS-capable port-multiplier card, but USB3 controllers (my
motherboard doesn't support USB3 either) seemed about a quarter of the
price, so I thought I'd try that
out. lsusb tells me that there are JMicron USB3-to-ATA bridges in the
enclosures.
So, each enclosure is actually connected by a single USB3 connection
to one of two ports on a single controller.
Logs show that all 4 drives connected to one of the ports were reset
by the XHCI driver (more or less simultaneously) losing the drives and
failing the array. In the original failure they were back with the
same /dev/sd? in a few minutes, but I guess the Event count had
diverged already.
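For the record, this is the sort of thing I grepped for to find them
(the exact patterns and the log path will vary by distro and kernel
version):

    # pull the XHCI resets and the resulting disk errors out of the kernel log
    dmesg | grep -iE 'xhci|usb.*reset|I/O error'
    grep -iE 'xhci|reset' /var/log/kern.log    # path is distro-dependent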
Perhaps that suggests the enclosure bridge is at fault, unless an
individual port on the controller freaked out. Definitely not a power
failure, could be a USB3 cable issue I guess.
> The point of these
> questions is to make sure you know the source of the problem before
> proceeding. It could be the HBA, but it could also be a
> power/cable/connection problem, a data/cable/connection problem, or a
> failed backplane. Cheap backplanes, i.e. cheap hotswap drive cages
> often cause such intermittent problems as you've described here.
Truth is the USB3 has been a bit of a pain anyway... the enclosure
bridge seems to prevent direct fdisk'ing and SMART at least. My
biggest concern was that it spits out copious 'needs
XHCI_TRUST_TX_LENGTH quirk?' warnings.
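For anyone else with these enclosures, smartctl's USB-bridge device
types are worth a try, though I make no promises with this particular
JMicron bridge (sdX is a placeholder):

    smartctl -d sat -a /dev/sdX          # generic SAT pass-through
    smartctl -d usbjmicron -a /dev/sdX   # JMicron-bridge-specific pass-through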
But I burned it in with a few weeks of read/write/validate work
without any apparent negative consequence and it's been fine for about
a year of uptime under light-moderate workload. My trust was perhaps
misplaced.
>> What is the best/safest way to try and get the array up and working
>> again? Should I just work through
>> https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID
>
> Again, get the hardware straightened out first or you'll continue to
> have problems.
It seems I'd probably be better off going to eSATA... any
recommendations on port-multiplying controllers?
Is the Highpoint RocketRAID 622 ok? More expensive than I'd like but
one of the few options that doesn't involve waiting on international
shipping.
>
> Once that's accomplished, skip to the "Force assembly" section in the
> guide you referenced. You can ignore the preceding $OVERLAYS and disk
> copying steps because you know the problem wasn't/isn't the disks.
> Simply force assembly.
Good news is I worked through the recovery instructions, including
setting up the overlays (due to an excess of paranoia), and I was able
to mount each XFS filesystem and get a seemingly good result from
xfs_repair -n.
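For anyone following along, the overlay step from that wiki page boils
down to roughly this per member device, so all writes land in a
throwaway copy-on-write file rather than on the real disk (names and
sizes here are illustrative):

    dd if=/dev/zero of=overlay-sdX.img bs=1M count=0 seek=4096   # 4GiB sparse COW file
    losetup /dev/loop0 overlay-sdX.img
    # device-mapper snapshot: reads hit /dev/sdX, writes go to the overlay
    dmsetup create sdX-overlay --table \
      "0 $(blockdev --getsz /dev/sdX) snapshot /dev/sdX /dev/loop0 P 8"
    # then assemble the array from the /dev/mapper/*-overlay devices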
Haven't managed to get my additional backups up to date yet due to USB
reset happening again whilst trying but I presume the data will be
ok... once I can get to it.
Thanks.
* Re: Advice for recovering array containing LUKS encrypted LVM volumes
From: Stan Hoeppner @ 2013-08-06 19:37 UTC (permalink / raw)
To: P Orrifolius; +Cc: linux-raid
On 8/5/2013 8:54 PM, P Orrifolius wrote:
> Thanks for your response...
I'm glad I asked the crucial questions.
> 8x2TB SATA drives, split across two Vantec NexStar HX4 enclosures.
> These separately powered enclosures have a single USB3 plug and a
> single eSATA plug. The documentation states that a "Port Multiplier
> Is Required For eSATA".
These enclosures are unsuitable for RAID duty for multiple reasons
including insufficient power delivery, very poor airflow thus poor drive
cooling, and USB connectivity. USB is unsuitable for RAID duty due to
the frequent resets; indeed it's unsuitable for carrying block device
traffic reliably at all, even for single drives. Forget about using USB.
...
> Subsequently I determined that my motherboard only
> supports command-based switching, not FIS-based switching. I had a
> look for an FIS-capable port-multiplier card, but USB3 controllers (my
> motherboard doesn't support USB3 either) seemed about a quarter of the
> price, so I thought I'd try that
> out. lsusb tells me that there are JMicron USB3-to-ATA bridges in the
> enclosures.
Forget about enclosures with SATA PMP ASICs as well. Most products
integrating them are also cheap consumer oriented stuff and not suitable
for reliable RAID operation. You'd likely continue to have problems of
a similar nature using the eSATA connections, though probably less
frequently.
Newegg sells these Vantecs for $100 USD each. For not substantially
more than what you spent on two of these and the USB3 controller, given
the increase in reliability, performance, and stability, you could have
gone with a combination of quality consumer and enterprise parts to
achieve a fast and stable RAID. You could have simply swapped your PC
chassis and PSU, acquired a top notch 8 port SAS/SATA HBA, two SFF8087
to discrete SATA breakout cables, and 3 Molex to SATA power splitters.
For example:
http://www.newegg.com/Product/Product.aspx?Item=N82E16811124156
http://www.newegg.com/Product/Product.aspx?Item=N82E16817170018
http://www.newegg.com/Product/Product.aspx?Item=N82E16816118112
http://www.newegg.com/Product/Product.aspx?Item=N82E16816132040 x2
http://www.newegg.com/Product/Product.aspx?Item=N82E16812422777 x3
Total: $355 USD
This would give you plenty of +12VDC power for all drives concurrently
(unlike the Vantec wall wart transformers), rock solid stability, full
bandwidth to/from all drives, and at least 20x better drive cooling than
the Vantecs.
...
> Logs show that all 4 drives connected to one of the ports were reset
> by the XHCI driver (more or less simultaneously) losing the drives and
> failing the array.
...
> Perhaps that suggests the enclosure bridge is at fault, unless an
> individual port on the controller freaked out. Definitely not a power
> failure, could be a USB3 cable issue I guess.
No, this is just the nature of running storage protocols over USB.
There's a nice thread in the XFS list archives of a user experiencing
filesystem corruption multiple times due to USB resets, with a single
external USB drive.
> Truth is the USB3 has been a bit of a pain anyway... the enclosure
> bridge seems to prevent direct fdisk'ing and SMART at least. My
> biggest concern was that it spits out copious 'needs
> XHCI_TRUST_TX_LENGTH quirk?' warnings.
> But I burned it in with a few weeks of read/write/validate work
> without any apparent negative consequence and it's been fine for about
> a year of uptime under light-moderate workload. My trust was perhaps
> misplaced.
Definitely misplaced. I wish you'd have asked here before going the USB
route and buying those cheap Vantec JBOD enclosures. I could have saved
you some heartache and wasted money. Build a reliable solution similar
to what I described above and Ebay those Vantecs.
> It seems I'd probably be better off going to eSATA... any
> recommendations on port-multiplying controllers?
First, I recommend you not use eSATA products for RAID as most are not
designed for it, including these Vantecs. Second, you seem to be
confused about how/where SATA Port Multipliers are implemented. PMPs
are not PCIe HBAs. They are ASICs, i.e. individual chips, nearly always
implemented in the circuit board logic inside a JBOD storage enclosure,
i.e. at the far end of the eSATA cable. The PMP in essence turns that
one eSATA cable between the boxes into 4 or 5 'cables' inside the JBOD
chassis connecting to all the drives, usually implemented as traces on a
backplane PCB.
There are standalone PCB based PMPs that can be mounted into a PCI/e
bracket opening, but they do not plug into the PCI/e slot, and they do
not provide the same function as in the typical scenario above. Power
comes from a PSU Molex plug, and data comes from a single SATA cable
connected to one port of a standard SATA HBA or mobo Southbridge SATA
port. As the name implies, the device multiplies one port into many, in
this scenario usually 1:5. Here you're creating 5 drive ports from one
HBA port but strictly inside the host chassis. There can be no external
cabling, no eSATA, as the signalling voltage needs to be slightly higher
for eSATA. All such PMP devices I'm aware of use the Silicon Image 3726
ASIC. For example this Addonics product:
http://www.addonics.com/products/ad5sapm.php
Two of these will give you 10 drive ports from a 2 port HBA or two mobo
ports, but again only inside the PC chassis. This solution costs ~$150
and limits you to 600MB/s aggregate drive bandwidth. Since most people
using this type of PMP device pair it with a really cheap 2 port Sil
based HBA, the result tends not to be reliable, and is definitely not speedy.
It makes more sense to spend 50% more, $75, for the native 8 port LSI
enterprise SAS/SATA HBA for guaranteed stability/reliability and the
available 4GB/s aggregate drive bandwidth. With 8 of the fastest rusty
drives on the market you can achieve 1+ GB/s throughput with this LSI.
With the same drives on two Sil PMPs and a cheap HBA you'd be hard
pressed to achieve 400MB/s with PCIe x1 2.0.
> Is the Highpoint RocketRAID 622 ok?
I strongly suggest you dump the Vantecs and avoid all eSATA solutions.
The PMP+firmware in the Vantecs is neither designed nor tested for RAID
use, just as the rest of the product is not designed for RAID use. In
addition their (lack of) documentation in what they consider a user
manual makes this abundantly clear.
> Good news is I worked through the recovery instructions, including
> setting up the overlays (due to an excess of paranoia), and I was able
> to mount each XFS filesystem and get a seemingly good result from
> xfs_repair -n.
You can't run xfs_repair on a mounted filesystem, so I assume you simply
have the order of events reversed here. "Seemingly" makes me
wonder/worry. If errors are found they are noted in the output. There
is no ambiguity. "-n" means no modify, so any errors that might have
been found, and displayed, were not corrected.
Note that xfs_repair only checks the filesystem structure for errors.
It does not check files for damage. You should manually check any files
that were being modified within a few minutes before or during the USB
reset and subsequent array failure, to make sure they were not zeroed,
truncated, or mangled in some other way.
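A quick way to build that candidate list once the filesystems are
mounted; adjust the window to your actual failure time, this is only a
sketch with placeholder paths and dates:

    # files whose contents changed around the failure window
    find /mnt/data -type f -newermt '2013-08-04 05:00' ! -newermt '2013-08-04 07:00' -ls
    # plus anything suspiciously zero-length
    find /mnt/data -type f -size 0 -ls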
> Haven't managed to get my additional backups up to date yet due to USB
> reset happening again whilst trying but I presume the data will be
> ok... once I can get to it.
The solution I described above is likely more than you wish to spend to
fix this problem, but it is a tiny sum to part with considering the
stability and performance you'll gain. Add the fact that once you have
this up and working it should be rock solid and you won't have to screw
with it again.
You could reduce that ~$355 USD to ~$155 by substituting something like
two of these for the LSI HBA
http://www.newegg.com/Product/Product.aspx?Item=N82E16816124064
though I can't attest to any qualities of this card. If they're decent
these will work better directly connected to the drives than any eSATA
card connected to those Vantecs.
With cost cutting in mind, consider that list member Ramon Hofer spent
~$500 USD on merely an LSI HBA and Intel SAS expander not that long ago,
based on my architectural recommendation, to square away his 20 drive
server. If you ask him I'm sure he'd say he's very glad to have spent a
little extra on the HBA+expander hardware given the stability and
performance. And I'm sure he'd express similar appreciation for the XFS
over concatenated constituent RAID5 arrays architecture I recommended.
Here's the most recent list thread concerning his system:
http://www.spinics.net/lists/raid/msg43410.html
I hope the advice I've given here is beneficial, and not too preachy. I
didn't intend for this to be so long. I think this is primarily due to
where you are and where you need to be. The journey through storage
education requires more than a few steps, and I've covered but a tiny
fraction of it here.
--
Stan
* Re: Advice for recovering array containing LUKS encrypted LVM volumes
From: P Orrifolius @ 2013-08-07 2:22 UTC (permalink / raw)
To: stan; +Cc: linux-raid
On 7 August 2013 07:37, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> On 8/5/2013 8:54 PM, P Orrifolius wrote:
> Newegg sells these Vantecs for $100 USD each. For not substantially
> more than what you spent on two of these and the USB3 controller, given
> the increase in reliability, performance, and stability, you could have
> gone with a combination of quality consumer and enterprise parts to
> achieve a fast and stable RAID. You could have simply swapped your PC
> chassis and PSU, acquired a top notch 8 port SAS/SATA HBA, two SFF8087
> to discrete SATA breakout cables, and 3 Molex to SATA power splitters.
> For example:
>
> http://www.newegg.com/Product/Product.aspx?Item=N82E16811124156
> http://www.newegg.com/Product/Product.aspx?Item=N82E16817170018
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816118112
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816132040 x2
> http://www.newegg.com/Product/Product.aspx?Item=N82E16812422777 x3
>
> Total: $355 USD
>
> This would give you plenty of +12VDC power for all drives concurrently
> (unlike the Vantec wall wart transformers), rock solid stability, full
> bandwidth to/from all drives, and at least 20x better drive cooling than
> the Vantecs.
Thanks for taking the time to make these recommendations, I will
certainly reassess that style of solution.
Just to give a little economic context, so as not to appear heedless
of your advice, the relative prices of things are a bit different
here.
A Vantec costs about USD125, the LSI about USD360. Anything vaguely
enterprise-y or unusual tends to attract a larger markup. And if you
take purchasing power into account that LSI probably equates to about
USD650 of beer/groceries... not directly relevant but it does mean the
incremental cost is harder to bear.
All that said data loss is grim and, by the sounds of your email, just
switching to eSATA won't help me much in terms of reliability.
Putting cost aside, and the convenience of having occasionally just
taken the two enclosures elsewhere for a few days, my biggest problem
is an adequately sized and ventilated case.
Ideally it would have 10 3.5" bays, 8 for the RAID6 and 2 for the
RAID1 holding the system. Though once the RAID6 isn't 'mobile' I
guess I could do away with the existing RAID1 pair and shuffle the
system onto the 8 drives... quite a mission though.
The Enermax looks good, especially regarding ventilation, but is not
available here. I know from having 6 drives in 6 bays with a single
120mm HDD cage fan in my existing case that some drives get very hot.
Unfortunately I can't find anything like the Enermax locally, I'll
keep looking though.
Anyway, thanks for prompting the reassessment.
>> Good news is I worked through the recovery instructions, including
>> setting up the overlays (due to an excess of paranoia), and I was able
>> to mount each XFS filesystem and get a seemingly good result from
>> xfs_repair -n.
>
> You can't run xfs_repair on a mounted filesystem, so I assume you simply
> have the order of events reversed here.
I mounted, to make sure the log was replayed, then unmounted, then
xfs_repair'd... I forgot to mention the unmounting.
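Per filesystem it was essentially this (paths illustrative):

    mount -t xfs /dev/mapper/data_crypt /mnt/check   # mounting replays the XFS log
    umount /mnt/check
    xfs_repair -n /dev/mapper/data_crypt             # read-only check
    echo $?                                          # non-zero exit indicates problems found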
> "Seemingly" makes me
> wonder/worry. If errors are found they are noted in the output. There
> is no ambiguity. "-n" means no modify, so any errors that might have
> been found, and displayed, were not corrected.
I was imprecise... xfs_repair -n exited with 0 for each filesystem, so
that was a good result. 'Seemingly' because I didn't believe that it
had actually checked 8TB of data in the time it took... as you say the
data may still be mangled.
> The solution I described above is likely more than you wish to spend to
> fix this problem, but it is a tiny sum to part with considering the
> stability and performance you'll gain. Add the fact that once you have
> this up and working it should be rock solid and you won't have to screw
> with it again.
>
> You could reduce that ~$355 USD to ~$155 by substituting something like
> two of these for the LSI HBA
>
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816124064
>
> though I can't attest to any qualities of this card. If they're decent
> these will work better directly connected to the drives than any eSATA
> card connected to those Vantecs.
OK. They're not available locally but I'll consider something similar
if I can find a suitable case to put them in.
> I hope the advice I've given here is beneficial, and not too preachy.
It has been, and not at all. I appreciate you taking the time to
help. I'll keep looking to see if I can find the necessaries for an
in-case solution, within budget.
Thanks.
* Re: Advice for recovering array containing LUKS encrypted LVM volumes
From: P Orrifolius @ 2013-08-07 7:34 UTC (permalink / raw)
To: stan; +Cc: linux-raid
On 7 August 2013 07:37, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> It makes more sense to spend 50% more, $75, for the native 8 port LSI
> enterprise SAS/SATA HBA for guaranteed stability/reliability and the
> available 4GB/s aggregate drive bandwidth.
Do you have any opinion on the suitability of a
FUJITSU MEGARAID LSI 2008 DUAL MINI SAS D2607-a21
similar to those listed at
http://www.ebay.com/itm/181152217999?_trksid=p2048036#shId ?
Looks pretty similar to me... maybe identical if I'm not using any
hardware raid capabilities? Possibly can be got for half the cost of
the LSI 9211-8i.
Case-wise a http://www.bitfenix.com/global/en/products/chassis/shinobi
looks the best/cheapest option availble to me. Hopefully the 2 front
fans will cool the main array enough, and I can jury-rig another fan
for the extra drives mounted in the 5.25" bays.
Thanks
* Re: Advice for recovering array containing LUKS encrypted LVM volumes
From: Stan Hoeppner @ 2013-08-08 20:17 UTC (permalink / raw)
To: P Orrifolius; +Cc: linux-raid
On 8/6/2013 9:22 PM, P Orrifolius wrote:
> Thanks for taking the time to make these recommendations, I will
> certainly reassess that style of solution.
You bet.
> Just to give a little economic context, so as not to appear heedless
> of your advice, the relative prices of things are a bit different
> here.
> A Vantec costs about USD125, the LSI about USD360.
You're probably looking at the retail "KIT". Note I previously
specified the OEM model and separate cables because it's significantly
cheaper, by about $60, roughly 20%, here anyway.
> Anything vaguely
> enterprise-y or unusual tends to attract a larger markup. And if you
> take purchasing power into account that LSI probably equates to about
> USD650 of beer/groceries... not directly relevant but it does mean the
> incremental cost is harder to bear.
Widen your search to every online seller in the EU and I'd think you can
find what you want at a price you can afford. Worth noting, nearly
every piece of personal electronics I buy comes from Newegg which ships
from 3 locations to here.
Los Angeles 2550 km
Newark 2027 km
Memphis 1020 km
In essence from Bremen, Brussels, and Bordeaux, to Lisbon. I'm buying
from a company 5 states away, equivalent to 3 or 4 *countries* away in
Europe.
> All that said data loss is grim and, by the sounds of your email, just
> switching to eSATA won't help me much in terms of reliability.
I simply would not take a chance on the Vantecs long term. Even if the
eSATA and PMP work reliably, the transformer and airflow are very weak
links. If a transformer fails you lose 4 drives as you just did and you
have to wait on a replacement or find a suitable unit at the EU
equivalent of Radio Shack. The poor airflow will cause 1 or even 2
drives to fail possibly simultaneously.
The simple question is: are your data, the cost of replacement drives,
and the rebuild hassle, worth the risk?
> Putting cost aside, and the convenience of having occasionally just
> taken the two enclosures elsewhere for a few days, my biggest problem
> is an adequately sized and ventilated case.
Such chassis abound on this side of the pond. Surely there are almost
as many on that side. Try to locate one of these Chenbro SR112 server
chassis w/fixed drive cage option, to reduce cost vs hot swap. It'll
hold your 10 drives and cool them very well. This chassis has far
better cooling than any stock consumer case at any price--3x92mm quality
PWM hot swap fans mid chassis. Their Euro website only shows a model
with hot swap drive bays but you should be able to find a distributor
carrying the fixed bay model.
http://usa.chenbro.com/corporatesite/products_detail.php?sku=201
http://www.chenbro.eu/corporatesite/products_detail.php?sku=162
Chenbro is in the top 3 of channel chassis makers. I've had one of
their SR103 dual chamber "cube" chassis, in the rare two tone blue
finish, for 13 years now. I don't have any pics of mine but I found
this one with tons of ugly stickers:
http://coolenjoy.net/bbs/data/gallery/%EC%82%AC%EC%A7%84120408_007.jpg
All angle shots of a black model showing the layout & features. Looks
like he did some cutting on the back panel. There were a couple of
panel knock outs there for VHDCI SCSI cables.
http://www.2cpu.co.kr/data/file/sell/1893365098_c17d5ebf_EB84B7EC849CEBB284.jpg
This is the best pedestal chassis I've ever owned, by far, maybe the
best period. Then again it should be given the $300 I paid for it. Top
shelf quality. If you can find one of these old SR103s for sale it
would be nearly perfect. Or the SR102 or 101 for that matter, both of
which are progressively larger with more bays.
With 10 drives you'd want to acquire something like 3 of these
http://www.newegg.com/Product/Product.aspx?Item=N82E16817995073
for your 8 data array drives w/room for one more. This leaves 2x 5.25"
bays for optical/tape/etc drives, and two 3.5" front access bays for
floppy, flash card reader, dual hot swap 2.5" cages for SSDs, etc.
There are four internal 3.5" bays in the back directly in front of the
120mm exhaust fan port for your 2 boot drives. As I said, if you can
find one of these used all your problems would be solved, and cost
effectively.
> I know from having 6 drives in 6 bays with a single
> 120mm HDD cage fan in my existing case that some drives get very hot.
This is what happens when designers and consumers trend toward quiet
over cooling power. Your 120 probably has a free air rated output of
~40 CFM, 800-1200 RPM, static pressure of ~.03" H2O. Due to turbulence
through the drive cage and the ultra low pressure it's probably only
moving ~20 CFM or less over the drives. The overall chassis airflow is
probably insufficient, and turbulent, so heat isn't evacuated at a
sufficient rate, causing Tcase to rise, making the cage fan even less
useful. Each of the 6 drives is likely receiving
20/6 = ~3.3 CFM or less from the cage fan.
For comparison, a single NMB 120x38, model FBA12G 12H, spins 2500 RPM,
moves 103 CFM, .26" H2O static pressure, 41.5 dB SPL.
http://www.nmbtc.com/pdf/dcfans/fba12g.pdf
It's ~4 times louder than your 120. However, it moves ~5x more air at
~8.7x higher static pressure. The high static pressure allows this fan
to move air at rated flow even with the high resistance of a dense pack
of disk drives. This one fan mounted on a chassis back panel is more
than capable of cooling any quality loaded dual socket server and/or
JBOD chassis containing 16 current drives of any brand, RPM, or power
dissipation.
This fan can provide ~6.4 CFM to each of 16 drives, ~2x that of your 120
across 6 drives. And again this is one fan cooling the entire chassis.
No other fans are required. Many cases using the low noise fans employ
5-6 of them to get adequate aggregate flow. Each additional unit adds 3
dB of SPL. Using 6 of 25 dB yields a total SPL of 40 dB. The human ear
can't discern differences of less than 2-3 dB SPL, so the total "noise"
output of the 6 cheap/quiet fan based system and the single high
quality, high pressure, high flow fan system is basically identical.
The cheap quiet LED fans are ~$5 each. This NMB runs ~$22. To employ
5-6 of the cheap 120mm fans requires mounting them on multiple exterior
panels in a manner that generates flow in at least 3 directions. The
intersection of these flows creates turbulence, further decreasing their
performance.
This is why every quality server chassis you'll find has four sides
buttoned up, an open front, and fans only at the rear and/or in the
middle. No side intake vent for CPUs, PCIe slots, etc. Front to back
airflow only. You don't find many well designed PC chassis because most
buyers are uneducated, and simply think "more fans is better, dude!".
This is also why cases with those ugly 8" side intake fans sell like hot
cakes. The flow is horrible, but, "Dude, look how huge my fan is!".
> Anyway, thanks for prompting the reassessment.
In "life 1.25" or so I was a systems designer at a shop specializing in
custom servers and storage arrays. One of my areas of expertise is
chassis thermal management. Hardware is in my blood. I like to pass
some of that knowledge and experience when the opportunity arises. Note
my email address. That's my 'vanity' domain of 10+ years now. ;)
> I was imprecise... xfs_repair -n exited with 0 for each filesystem, so
> that was a good result. 'Seemingly' because I didn't believe that it
> had actually checked 8TB of data in the time it took... as you say the
> data may still be mangled.
xfs_repair checks only metadata, and does so in parallel using all CPUs/cores,
which is why it's so fast on seemingly large filesystems. It uses tons
of memory with large filesystems as well--large in this context meaning
lots of inodes, not lots of very large files. It doesn't look at files
at all. So it didn't check your 8TB of data, only the few hundred MB or
so of directory metadata, superblocks, etc.
> OK. They're not available locally but I'll consider something similar
> if I can find a suitable case to put them in.
I'd guess you're not going to find much of this stuff 'locally', even
without knowing where 'local' is.
> It has been, and not at all. I appreciate you taking the time to
> help. I'll keep looking to see if I can find the necessaries for an
> in-case solution, within budget.
As you can see above I'm trying to help you get there, and maybe
providing a little knowledge that you might use only 2-3 times in your
life. But it's nice to have it for those few occasions. :)
> Thanks.
You're welcome.
--
Stan
* Re: Advice for recovering array containing LUKS encrypted LVM volumes
From: Stan Hoeppner @ 2013-08-08 23:09 UTC (permalink / raw)
To: P Orrifolius; +Cc: linux-raid
On 8/7/2013 2:34 AM, P Orrifolius wrote:
> On 7 August 2013 07:37, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>> It makes more sense to spend 50% more, $75, for the native 8 port LSI
>> enterprise SAS/SATA HBA for guaranteed stability/reliability and the
>> available 4GB/s aggregate drive bandwidth.
>
> Do you have any opinion on the suitability of a
>
> FUJITSU MEGARAID LSI 2008 DUAL MINI SAS D2607-a21
>
> similar to those listed at
> http://www.ebay.com/itm/181152217999?_trksid=p2048036#shId ?
>
> Looks pretty similar to me... maybe identical if I'm not using any
> hardware raid capabilities? Possibly can be got for half the cost of
> the LSI 9211-8i.
I have no experience with the Fujitsu models. I can tell you that this
one is not an OEM LSI 9211-8i, but looks close at first glance. It's
Fujitsu's own design. It may use custom Fujitsu firmware instead of
LSI's which means you may not be able to flash standard LSI firmware
onto it if it needs it. It mentions nothing of HBA mode, only RAID
mode. Too many unknowns, and not enough documentation available. I'd
steer clear.
This is probably your best current option, at 1/2 the Fujitsu price:
http://www.ebay.com/itm/Intel-SASUC8I-SAS-SATA-3Gb-s-PCIe-x8-Low-Profile-RAID-Controller-Refurbished/181108158926?_trksid=p2045573.m2042&_trkparms=aid%3D111000%26algo%3DREC.CURRENT%26ao%3D1%26asc%3D27%26meid%3D431256662585357660%26pid%3D100033%26prg%3D1011%26rk%3D2%26rkt%3D4%26sd%3D171093850996%26
This 3Gb/s Intel is/was a very popular LSISAS1068e based board, takes
standard LSI firmware. Decent performer with md. Discontinued, as the
SAS1068 series chips were discontinued. The 9211 series (SAS2008 chip)
supersedes it. Newegg was selling this exact Intel board new up to a
few months ago for $149. The 1068 chip is limited to max individual
drive capacity of 2TB. So you'll peak at 12TB net w/md RAID6 and 8x2TB
drives. Hard to beat at $75 USD. Uses same cables I listed previously.
In NH, USA, ships worldwide. Seller has 99.8% on 17k+ sales. If in
your shoes this is the one I'd buy.
Here's an OEM Dell that I'm pretty sure is the 9240-8i and it already
has the IT firmware, which is what you'd want. Better card than the
Intel above, but this sale ships to USA only apparently.
http://www.ebay.com/itm/NEW-Dell-SAS-SATA-2-0-6Gb-s-8-Port-2x4-PCI-e-HBA-LSISAS2008-IT-/171093850996?pt=US_Server_Disk_Controllers_RAID_Cards&hash=item27d5fcfb74
> Case-wise a http://www.bitfenix.com/global/en/products/chassis/shinobi
> looks the best/cheapest option available to me. Hopefully the 2 front
> fans will cool the main array enough, and I can jury-rig another fan
> for the extra drives mounted in the 5.25" bays.
I didn't recommend the Shinobi because it has restricted and unbalanced
airflow characteristics. The 2 front air intake slits are too small to
optimally cool 8 drives in the cage without significant pressure,
requiring many, or stronger, fans to do the job. It has large intakes
on the chassis floor for mounting fans, which when left open will
disrupt front intake and rear exhaust flows. It can mount two front
fans and top fans but they are not included. After buying these fans
its cost is higher than the Enermax. But since the Enermax and others
are not available to you, if you go with this Shinobi chassis, I
recommend the following:
1. Affix strips of 2" wide or wider clear box packing tape, or full
sheet letter size contact paper, to the inside of both the bottom and
top intake grills. This will seal them and prevent airflow, and will be
invisible from the outside. I've done this many times, works great.
This forces all intake air to come through the drive cages and out the
rear fans.
2. Use a PSU with a 120mm fan.
3. Toss out the included el cheapo rear 120mm fan and replace it with
one of these, or a similar high quality model with 80-100 CFM flow rate
and ~40 dB SPL:
http://www.frozencpu.com/products/5642/fan-285/Panaflo_H1A_120mm_Hi-Speed_Fan_BX_w_RPM_Sensor_FBA12G12H1BX.html?tl=g36c15s562
http://www.frozencpu.com/products/17229/fan-1059/Sanyo_Denki_120mm_x_38mm_High-Speed_Fan_-_1025_CFM_109R1212H1011.html?tl=g36c15s562
http://www.frozencpu.com/products/15715/fan-986/Delta_120mm_x_25mm_Fan_-_827_CFM_AFB1212H-R00_Bare_Wire.html?tl=g36c15s60
With one of these and a PSU fan you won't need any fans in front cage
locations. If you find it's too loud, get a generic self-adhesive foam
damping kit. They're quite effective. If it's still too loud, use a fan
controller, but don't turn it down too far.
--
Stan
* Re: Advice for recovering array containing LUKS encrypted LVM volumes
From: P Orrifolius @ 2013-08-10 8:44 UTC (permalink / raw)
To: stan; +Cc: linux-raid
On 9 August 2013 08:17, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> On 8/6/2013 9:22 PM, P Orrifolius wrote:
>
>> Just to give a little economic context, so as not to appear
>> heedless of your advice, the relative prices of things are a bit
>> different here. A Vantec costs about USD125, the LSI about USD360.
>
> You're probably looking at the retail "KIT". Note I previously
> specified the OEM model and separate cables because it's
> significantly cheaper, by about $60, roughly 20%, here anyway.
No, definitely the bare card (LSI00194). The kit (LSI00195) is about
30% more.
>
>> Anything vaguely enterprise-y or unusual tends to attract a larger
>> markup. And if you take purchasing power into account that LSI
>> probably equates to about USD650 of beer/groceries... not directly
>> relevant but it does mean the incremental cost is harder to bear.
>
> Widen your search to every online seller in the EU and I'd think you
> can find what you want at a price you can afford. Worth noting,
> nearly every piece of personal electronics I buy comes from Newegg
> which ships from 3 locations to here.
Actually the other side: I'm in NZ. It's not just the tyranny of
physical distance (I'm probably closer than almost everyone in the USA
to the producers of this stuff), it's also market scale, and effective
Customs enforcement.
>> Putting cost aside, and the convenience of having occasionally just
>> taken the two enclosures elsewhere for a few days, my biggest
>> problem is an adequately sized and ventilated case.
>
> Such chassis abound on this side of the pond. Surely there are
> almost as many on that side.
It's hard to get a good case here, at all or at a price close to eg
Newegg. It's mostly just 'pimped out' mid towers. And the weight of
the item makes one-off imports prohibitive.
> Try to locate one of these Chenbro SR112 server chassis w/fixed
> drive cage option,
Pretty hard to source Chenbro cases here, could get a SR107 shipped in
for about 3x the cost of the Shinobi.
> With 10 drives you'd want to acquire something like 3 of these
> http://www.newegg.com/Product/Product.aspx?Item=N82E16817995073 for
> your 8 data array drives w/room for one more.
Even plain drive cages are few and far between. About USD38 for a
http://www.lian-li.com/en/dt_portfolio/ex-36a2/ or USD30 for a
http://www.thermaltake.com/products-model.aspx?id=C_00000235
Sticks in my craw to pay a third of the cost of the Shinobi case for one
of those.
There are lots of backplaned cages, for more money, but I'm excluding
those anyway on your warnings about cheap backplanes.
>> I know from having 6 drives in 6 bays with a single 120mm HDD cage
>> fan in my existing case that some drives get very hot.
>
> This is what happens when designers and consumers trend toward quiet
> over cooling power. Your 120 probably has a free air rated output
> of ~40 CFM, 800-1200 RPM, static pressure of ~.03" H2O. Due to
> turbulence through the drive cage and the ultra low pressure it's
> probably only moving ~20 CFM or less over the drives. The overall
> chassis airflow is probably insufficient, and turbulent, so heat
> isn't evacuated at a sufficient rate, causing Tcase to rise, making
> the cage fan even less useful. Each of the 6 drives is likely
> receiving
>
> 20/6 = ~3.3 CFM or less from the cage fan.
>
> For comparison, a single NMB 120x38, model FBA12G 12H, spins 2500
> RPM, moves 103 CFM, .26" H2O static pressure, 41.5 dB SPL.
>
> http://www.nmbtc.com/pdf/dcfans/fba12g.pdf
>
> It's ~4 times louder than your 120. However, it moves ~5x more air
> at ~8.7x higher static pressure.
[snip]
> This is why every quality server chassis you'll find has four sides
> buttoned up, an open front, and fans only at the rear and/or in the
> middle. No side intake vent for CPUs, PCIe slots, etc. Front to
> back airflow only. You don't find many well designed PC chassis
> because most buyers are uneducated, and simply think "more fans is
> better, dude!". This is also why cases with those ugly 8" side
> intake fans sell like hot cakes. The flow is horrible, but, "Dude,
> look how huge my fan is!".
And, in a small market, results in good cases being unobtainable/expensive.
I'm with you on airflow... my current case has removed side fans,
cardboard covering fan holes, duct tape over perforations and the front
plastic hacksawed out to open up where they'd only put 4 or 5 tiny
slits. I'm not too picky about the aesthetics.
>> Do you have any opinion on the suitability of a FUJITSU MEGARAID
>> LSI 2008 DUAL MINI SAS D2607-a21
> I have no experience with the Fujitsu models.
[snip]
> Too many unknowns, and not enough documentation available. I'd
> steer clear.
Ok.
>
> This is probably your best current option, at 1/2 the Fujitsu price:
>
> This 3Gb/s Intel is/was a very popular LSISAS1068e based board, takes
> standard LSI firmware.
> The 1068 chip is limited to max individual drive capacity of 2TB.
>
> Here's an OEM Dell that I'm pretty sure is the 9240-8i and it already
> has the IT firmware, which is what you'd want. Better card than the
> Intel above, but this sale ships to USA only apparently.
How does the 9240 differ from the 9211? The only obvious difference
that I can see is that the 9240 supports _fewer_ connected drives
(unraided). And yet it's 20-25% more expensive here retail.
The number of devices connected won't be a problem but the 2TB limit may
force me into an upgrade earlier.
For these lightweight items it will be feasible for me to use an
on-shipping service, as long as they're under a threshold for attracting
additional Customs charges... which these cards will be.
So I'll keep them in mind.
> I didn't recommend the Shinobi because it has restricted and
> unbalanced airflow characteristics. The 2 front air intake slits
> are too small to optimally cool 8 drives in the cage without
> significant pressure
Hmmmm... I was going to just leave the front panel off on the assumption
that there are dust filters on the fan intakes, and that the panel can
actually be removed. Neither of which may be true in which case those
little slits would be a serious problem.
And, as you suggest, I was going to block up all the other inlets.
And I'll take a look at better fans.
Thanks.
* Re: Advice for recovering array containing LUKS encrypted LVM volumes
From: Stan Hoeppner @ 2013-08-10 14:02 UTC (permalink / raw)
To: P Orrifolius; +Cc: linux-raid
On 8/10/2013 3:44 AM, P Orrifolius wrote:
...
> No, definitely the bare card (LSI00194). The kit (LSI00195) is about
> 30% more.
...
> Actually the other side, I'm in NZ. It's not just the tyranny of
> physical distance, I'm probably closer than almost everyone in the USA
> to the producers of this stuff, it's also market scale. And an
> effective Customs enforcement.
My bad, assumed you were in Europe for some reason. And dang, PC
components -are- expensive down there.
...
> It's hard to get a good case here, at all or at a price close to eg
The Shinobi isn't horrible. And you'll see below I found something
that'll make it work well, from a thermal standpoint anyway. ;)
...
> How does the 9240 differ from the 9211? The only obvious difference
> that I can see is that the 9240 supports _fewer_ connected drives
> (unraided). And yet it's 20-25% more expensive here retail.
The 9240 has hardware RAID5/50 in addition to all modes of the 9211.
That's the difference in price. The RAID5 performance is abysmal though.
> The number of devices connected won't be a problem but the 2TB limit may
> force me into an upgrade earlier.
The 9211/9240 have the 2008 chip and both support >2TB drives. It's the
older cards with the 1068 chip that are limited to 2TB drives.
...
> Hmmmm... I was going to just leave the front panel off on the assumption
> that there are dust filters on the fan intakes, and that the panel can
No need to leave the front panel off the Shinobi.
> actually be removed. Neither of which may be true in which case those
> little slits would be a serious problem.
With cheap low speed/pressure fans the intakes don't look sufficient.
With the NMB I've mentioned or similar high pressure fans you'd be fine.
It simply takes more pressure, thus RPM, thus noise, to flow sufficient
air to the drives through smaller intake openings. Again, see below.
> And, as you suggest, I was going to block up all the other inlets.
>
> And I'll take a look at better fans.
Surplus stores are great places to find good quality fans at low prices,
both DC and AC. For instance, in the US:
http://www.surpluscenter.com/item.asp?item=16-1445&catname=electric
Wow! I hadn't checked their inventory in a while. $5 for San Ace
120x38 102CFM .26" H2O. 2500+ units in stock. Specs sound familiar?
They're identical to the NMB I've mentioned in detail several times.
Every bit as good as the NMB, some would say better. Intel has shipped
only Sanyo Denki fans in their products (CPU coolers, chassis, etc) for
20+ years. Surplus Center is an hour drive from here. Next time I'm in
town I gotta pick up a half dozen of these. Price for the same unit at
Frozen CPU is $22.99:
http://www.frozencpu.com/products/17230/fan-1060/Sanyo_Denki_120mm_x_38mm_High-Speed_Fan_-_2600_RPM_109R1212H1071.html?tl=c15s562b113
If you recall from my previous post, a single one of these 102 CFM high
pressure fans mounted in the back of that Shinobi should easily cool
everything if run at full RPM. Just in case buy 3, and if the drives
are toasty with just the one, add the other two up front. International
shipping should be little more for 3 than 1. Or maybe you can find a
place in New Zealand or Australia for better overall price. I have no
idea what shipping cost on a ~3 lb small box is from Nebraska USA is to
your location. I can tell you that 3 of these 120x38 San Ace fans for
$15 USD is a steal.
--
Stan
* Re: Advice for recovering array containing LUKS encrypted LVM volumes
From: P Orrifolius @ 2013-09-15 23:34 UTC (permalink / raw)
To: stan; +Cc: linux-raid
On 10 August 2013 20:44, P Orrifolius <porrifolius@gmail.com> wrote:
> On 9 August 2013 08:17, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> No, definitely the bare card (LSI00194).
So finally after a deal of trouble with local suppliers of the HBA and
eventually getting it shipped in from Amazon USA I've got all the
pieces. Which is good.
I've got it all put together and the system booting up and running as
it was before (md array inconsistent/unassembled) without using the
HBA. Unfortunately I'm not yet sure the HBA is actually going to work
with my motherboard... but I'll save that for another post.
Anyway, thanks for your advice and recommendations.
* Re: Advice for recovering array containing LUKS encrypted LVM volumes
From: Stan Hoeppner @ 2013-09-16 19:42 UTC (permalink / raw)
To: P Orrifolius; +Cc: linux-raid
On 9/15/2013 6:34 PM, P Orrifolius wrote:
> On 10 August 2013 20:44, P Orrifolius <porrifolius@gmail.com> wrote:
>> On 9 August 2013 08:17, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>
>> No, definitely the bare card (LSI00194).
>
> So finally after a deal of trouble with local suppliers of the HBA and
> eventually getting it shipped in from Amazon USA I've got all the
> pieces. Which is good.
>
> I've got it all put together and the system booting up and running as
> it was before (md array inconsistent/unassembled) without using the
> HBA. Unfortunately I'm not yet sure the HBA is actually going to work
> with my motherboard... but I'll save that for another post.
As long as you have a free x8 or x16 slot (which you obviously should
have determined before ordering) and it's not one of the few mobos that
automatically disables the onboard video when you populate the x16 slot,
then it should work fine. If the latter, used PCI vid cards are cheap,
assuming you have a free PCI slot.
> Anyway, thanks for your advice and recommendations.
You're welcome. I hope it works for you, after all the trouble
acquiring it.
--
Stan
* Re: Advice for recovering array containing LUKS encrypted LVM volumes
From: P Orrifolius @ 2013-09-16 20:46 UTC (permalink / raw)
To: stan; +Cc: linux-raid
On 17 September 2013 07:42, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> On 9/15/2013 6:34 PM, P Orrifolius wrote:
>> On 10 August 2013 20:44, P Orrifolius <porrifolius@gmail.com> wrote:
>> Unfortunately I'm not yet sure the HBA is actually going to work
>> with my motherboard... but I'll save that for another post.
>
> As long as you have a free x8 or x16 slot (which you obviously should
> have determined before ordering) and it's not one of the few mobos that
> automatically disables the onboard video when you populate the x16 slot,
> then it should work fine. If the latter, used PCI vid cards are cheap,
> assuming you have a free PCI slot.
Yeah, I've two x16 slots (and no video cards) and the onboard video
works when either is populated. Unfortunately the HBA is reporting
that it has an invalid PCI slot.
I've sent an email about it to this list titled "LSI 9211 (SAS2008)
BIOS problems, invalid PCI slot"
Hopefully I'll get it sorted out in the end.