From: Emmanuel Florac <eflorac@intellique.com>
To: Eli Morris <ermorris@ucsc.edu>
Cc: xfs@oss.sgi.com
Subject: Re: filesystem shrinks after using xfs_repair
Date: Sun, 11 Jul 2010 18:29:41 +0200
Message-ID: <20100711182941.1a3d4200@galadriel.home>
In-Reply-To: <84CF7106-D080-46F0-945B-5BC0DC7DBBE1@ucsc.edu>
On Sat, 10 Jul 2010 23:32:57 -0700, you wrote:
> I got some automated emails this Sunday about I/O errors coming from
> the computer
That smells like a hardware problem. What type of RAID is this:
RAID-5, RAID-10, RAID-6? Are there any alarms from the RAID controller?
Can you check the SMART status of the drives? What are the JBODs, are
they Dell MD1000s?
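For instance, a quick pass over every drive with smartctl (a sketch
assuming smartmontools is installed; the device glob is illustrative,
adjust it to your setup):

    # check overall SMART health and the error log of each drive
    for dev in /dev/sd[a-z]; do
        echo "=== $dev ==="
        smartctl -H "$dev"        # overall health self-assessment
        smartctl -l error "$dev"  # ATA error log, if any
    done

If the drives sit behind a hardware RAID controller, you may need the
-d option (e.g. -d megaraid,N) to reach the physical disks.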
> On one of the
> physical volumes (PVs) - on /dev/sdc1, I noticed when I ran
> pvdisplay that of the 12.75 TB comprising the volume, 12.00 TB was
> being shown as 'not usable'.
Smells even more like a hardware problem. Check all your system logs
for I/O errors and errors coming from the SAS driver. Are you using the
mptsas or the megaraid driver? Grep the logs for the driver name to
check for any messages (timeouts, I/O errors, etc.).
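Something along these lines (a sketch; log paths and the driver name
vary by distribution, substitute megaraid if that is what you run):

    # look for driver-level trouble in the kernel ring buffer and syslog
    dmesg | grep -iE 'mptsas|timeout|i/o error'
    grep -i 'mptsas' /var/log/messages* | grep -iE 'error|timeout|reset'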
> thinking it might find the missing
> data. Instead the filesystem decreased back to 51 TB. I rebooted and
> tried again a couple of times and the same thing happened. I'd
> really, really like to get that data back somehow and also to get the
> filesystem to where we can start using it again.
Check the dmesg output right after the xfs_repair. My bet: there is an
I/O error (bad cable? hosed drive?) reported by the controller, the PV
has failed (reported by LVM), and then xfs_repair does what it must do:
it truncates the filesystem to the size of the underlying device.
Unfortunately, while the data may still be on the drives, a tool like
photorec is probably your only chance to get it back from the raw
devices. Metadata, filenames, and directory hierarchies are almost
certainly gone once and for all.
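Roughly this sequence, to confirm the diagnosis and start carving files
out (a sketch; /dev/sdc and /dev/sdc1 are taken from your report and
may differ, and photorec prompts interactively once started):

    # errors logged around the xfs_repair run, and what LVM now reports
    dmesg | grep -iE 'i/o error|sdc'
    pvdisplay /dev/sdc1

    # carve recoverable files from the raw device into ./recup_dir
    # (/log writes a session log, /d sets the destination directory)
    photorec /log /d recup_dir /dev/sdc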
--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique
| Intellique
| <eflorac@intellique.com>
| +33 1 78 94 84 02
------------------------------------------------------------------------
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs