public inbox for linux-xfs@vger.kernel.org
* filesystem shrinks after using xfs_repair
@ 2010-07-09 23:07 Eli Morris
  2010-07-10  8:16 ` Stan Hoeppner
  2010-07-24 21:09 ` Eric Sandeen
  0 siblings, 2 replies; 45+ messages in thread
From: Eli Morris @ 2010-07-09 23:07 UTC (permalink / raw)
  To: xfs

Hi All,

I've got this problem where if I run xfs_repair, my filesystem shrinks by 11 TB, from a volume size of 62 TB to 51 TB. I can grow the filesystem again with xfs_growfs, but then rerunning xfs_repair shrinks it back down again. The first time this happened was a few days ago, and running xfs_repair took about 7 TB of data with it. That is, out of the 11 TB of disk space that vanished, 7 TB had data on it and 4 TB was empty space. XFS is running on top of an LVM volume. It's on an Intel/Linux system running CentOS 5 (2.6.18-128.1.14.el5). Does anyone have an idea of what would cause such a thing, and what I might try to keep it from continuing to happen? I could just never run xfs_repair again, but that doesn't seem like a good thing to count on. Major bonus points if anyone also has ideas on how to get my 7 TB of data back. It must be there somewhere and it would be very bad to lose.
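
One way to see where the missing space went is to compare the size recorded in the XFS superblock with the size of the underlying device. A rough sketch only (the device path is the one given later in this thread; xfs_db is opened read-only, and lvdisplay is just a cross-check of the LV size):

  xfs_db -r -c 'sb 0' -c 'print dblocks blocksize' /dev/vg1/vol5
  blockdev --getsize64 /dev/vg1/vol5
  lvdisplay /dev/vg1/vol5

If dblocks * blocksize is well short of the device size, the superblock is carrying the smaller (51 TB) geometry.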

Thanks for any help and ideas. I'm just stumped right now.

Eli

* Re: filesystem shrinks after using xfs_repair
@ 2010-07-11  6:32 Eli Morris
  2010-07-11 10:56 ` Stan Hoeppner
  2010-07-11 16:29 ` Emmanuel Florac
  0 siblings, 2 replies; 45+ messages in thread
From: Eli Morris @ 2010-07-11  6:32 UTC (permalink / raw)
  To: xfs

> Eli Morris put forth on 7/9/2010 6:07 PM:
>> Hi All,
>> 
>> I've got this problem where if I run xfs_repair, my filesystem shrinks by 11 TB, from a volume size of 62 TB to 51 TB. I can grow the filesystem again with xfs_growfs, but then rerunning xfs_repair shrinks it back down again. The first time this happened was a few days ago, and running xfs_repair took about 7 TB of data with it. That is, out of the 11 TB of disk space that vanished, 7 TB had data on it and 4 TB was empty space. XFS is running on top of an LVM volume. It's on an Intel/Linux system running CentOS 5 (2.6.18-128.1.14.el5). Does anyone have an idea of what would cause such a thing, and what I might try to keep it from continuing to happen? I could just never run xfs_repair again, but that doesn't seem like a good thing to count on. Major bonus points if anyone also has ideas on how to get my 7 TB of data back. It must be there somewhere and it would be very bad to lose.
>> 
>> Thanks for any help and ideas. I'm just stumped right now.
> 
> It may be helpful if you can provide more history (how long has this been
> happening, recent upgrade?), the exact xfs_repair command line used, why you
> were running xfs_repair in the first place, hardware or software RAID, what
> xfsprogs version, relevant log snippets, etc.

Hi Stan,

Thanks for responding. Sure, I'll try and give more information.

I got some automated emails this Sunday about I/O errors coming from the computer (a Dell PowerEdge 2950, "Nimbus", with a connected 16-bay hardware RAID, which is itself connected to four 16-bay JBODs; the RAID controller is connected to the PowerEdge via a SAS / LSI Fusion card). It was Sunday, so I just logged in, rebooted, ran xfs_repair, then mounted the filesystem back. I tried a quick little write test, just to make sure I could write a file to it and read it back, and called it a day until work the next day. When I came into work, I looked at the volume more closely and noticed that the filesystem had shrunk as I stated. Each of the RAID/JBODs is configured as a separate device and represents one physical volume in my LVM2 scheme; those physical volumes are then combined into one logical volume, and the filesystem sits on top of this.

On one of the physical volumes (PVs), /dev/sdc1, I noticed when I ran pvdisplay that of the 12.75 TB comprising the volume, 12.00 TB was being shown as 'not usable'. Usually this number is a couple of megabytes. So, after staring at this for a while, I ran pvresize on that PV. The volume then listed 12.75 TB as usable, with a couple of megabytes not usable, as one would expect. I then ran xfs_growfs on my filesystem and once again the filesystem was back to 62 TB. But it was showing the increased space as free space, instead of only 4.x TB of it being free as before all this happened. I then ran xfs_repair on this again, thinking it might find the missing data. Instead the filesystem decreased back to 51 TB. I rebooted and tried again a couple of times and the same thing happened. I'd really, really like to get that data back somehow, and also to get the filesystem to where we can start using it again.
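
In command form, the sequence described above amounts to roughly the following (the mount point is an assumption; pvdisplay is read-only, only the pvresize and xfs_growfs steps change anything, and xfs_growfs is run against the mounted filesystem):

  pvdisplay /dev/sdc1        # showed 12.00 of 12.75 TB as 'not usable'
  pvresize /dev/sdc1         # re-reads the device size into the PV metadata
  xfs_growfs /mnt/vol5       # grows the filesystem to fill the logical volume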

xfsprogs version 2.9.4. The xfs_repair command line used was 'xfs_repair /dev/vg1/vol5', vol5 being the LVM2 logical volume. I spoke with tech support from my RAID vendor and he said he did not see any sign of errors with the RAID itself, for what that is worth.
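
Worth noting for anyone retrying this: xfs_repair has a no-modify mode that only reports what it would change, which is a safer first pass on a volume in this state. A sketch (the filesystem must be unmounted first):

  umount /dev/vg1/vol5
  xfs_repair -n /dev/vg1/vol5     # check only: report problems, change nothing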

Nimbus is the hostname of the computer that is connected to the RAID/JBODs unit. The other computers (compute-0-XX) are only connected via NFS to the RAID/JBODs.

I've tried to provide a lot here, but if I can give any more information, please let me know. Thanks very much,

Eli

I'm trying to post logs, but my emails keep getting bounced. I'll see if this one makes it.

* Re: filesystem shrinks after using xfs_repair
@ 2010-07-12  1:10 Eli Morris
  2010-07-12  2:24 ` Stan Hoeppner
  2010-07-12 11:47 ` Emmanuel Florac
  0 siblings, 2 replies; 45+ messages in thread
From: Eli Morris @ 2010-07-12  1:10 UTC (permalink / raw)
  To: xfs

Hi guys,

Here are some of the log files from my XFS problem. Yes, I think this all started with a hardware failure of some sort. My storage is RAID 6, an Astra SecureStor ES.


[root@nimbus log]# more messages.1 | grep I/O
Jul  2 17:02:30 nimbus kernel: sd 6:0:0:0: rejecting I/O to offline device
Jul  2 17:02:30 nimbus kernel: sd 6:0:0:0: rejecting I/O to offline device
Jul  2 17:02:30 nimbus kernel: sr 5:0:0:0: rejecting I/O to offline device
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687805082
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687826610
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687827634
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687828658
Jul  3 12:41:41 nimbus kernel: Buffer I/O error on device dm-0, logical block 2835978787
Jul  3 12:41:41 nimbus kernel: lost page write due to I/O error on dm-0
Jul  3 12:41:41 nimbus kernel: Buffer I/O error on device dm-0, logical block 2835978788
Jul  3 12:41:41 nimbus kernel: lost page write due to I/O error on dm-0
Jul  3 12:41:41 nimbus kernel: Buffer I/O error on device dm-0, logical block 2835978789
Jul  3 12:41:41 nimbus kernel: lost page write due to I/O error on dm-0
Jul  3 12:41:41 nimbus kernel: Buffer I/O error on device dm-0, logical block 2835978790
Jul  3 12:41:41 nimbus kernel: lost page write due to I/O error on dm-0
Jul  3 12:41:41 nimbus kernel: Buffer I/O error on device dm-0, logical block 2835978791
Jul  3 12:41:41 nimbus kernel: lost page write due to I/O error on dm-0
Jul  3 12:41:41 nimbus kernel: Buffer I/O error on device dm-0, logical block 2835978792
Jul  3 12:41:41 nimbus kernel: lost page write due to I/O error on dm-0
Jul  3 12:41:41 nimbus kernel: Buffer I/O error on device dm-0, logical block 2835978793
Jul  3 12:41:41 nimbus kernel: lost page write due to I/O error on dm-0
Jul  3 12:41:41 nimbus kernel: Buffer I/O error on device dm-0, logical block 2835978794
Jul  3 12:41:41 nimbus kernel: lost page write due to I/O error on dm-0
Jul  3 12:41:41 nimbus kernel: Buffer I/O error on device dm-0, logical block 2835978795
Jul  3 12:41:41 nimbus kernel: lost page write due to I/O error on dm-0
Jul  3 12:41:41 nimbus kernel: Buffer I/O error on device dm-0, logical block 2835978796
Jul  3 12:41:41 nimbus kernel: lost page write due to I/O error on dm-0
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687814314
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687815338
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687816362
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687817386

Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372371106
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372372130
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372373154
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372374178
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471976114
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471977138
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471978162
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471979186

Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471987386
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471988410
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471989434
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471990458
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471991482
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980261922
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262050
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372375202
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262050
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372376226
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687847114
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687848138
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262114
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980261986
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687849162
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687850186
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687851210
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262434
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687852234
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471991490
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262434
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687853258
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262178
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687854282
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687855306
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980261922
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262498
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262514
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262498
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262242
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262562
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262306
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262594
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262658
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262370
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262722
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262786
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262850
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687798938
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262914
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262978
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687797914
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980263042
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687797906
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980263106
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980263170
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687796882
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980263234
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980263298
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687795858
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980263362
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980263426
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687794834
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980263490
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687793810
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687792786
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687791762
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471975082
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471974058
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687790738
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687789714
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471973034
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471992514
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471993538
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471994562
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471995586
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471996610
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471997634
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471998658
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471999682
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471972010
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471970986
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471969962
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471968938
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471967914
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471966890
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471966882
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372343426
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471999690
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471965858
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12472000714
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12472003786
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12472001738
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12472002762
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12472004810
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471964834
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12472005834
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12472006858
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12472007882
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471963810
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471962786
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372342402
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471961762
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471960738
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471959714
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdd, sector 12471958690
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372330106
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687855314
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687856338
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687857362
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687858386
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687859410
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687860434
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372277834
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687861458
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687862482
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 22687863506
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372327026
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372324978
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372328058
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372376234
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372377258
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372378282
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372379306
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372380330
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372381354
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372382378
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372383402
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372384426
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372320882
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372327034
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372326002
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372329082
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372323954
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372341378
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372340354
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372339330
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372338306
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372337282
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372336258
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372335234
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372335226
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372334202
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372333178
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372332154
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdc, sector 24372331130
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980261922
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262114
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262178
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262242
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262306
Jul  3 12:41:41 nimbus kernel: end_request: I/O error, dev sdg, sector 1980262370
Jul  3 12:41:42 nimbus kernel: end_request: I/O error, dev sdc, sector 22687804058
Jul  3 12:41:42 nimbus kernel: end_request: I/O error, dev sdc, sector 159634
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: end_request: I/O error, dev sdc, sector 159634
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: end_request: I/O error, dev sdc, sector 159634
Jul  3 12:41:42 nimbus kernel: end_request: I/O error, dev sdc, sector 22687803034
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: end_request: I/O error, dev sdc, sector 159634
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: end_request: I/O error, dev sdc, sector 159634
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: end_request: I/O error, dev sdc, sector 159634
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: end_request: I/O error, dev sdc, sector 159634
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: end_request: I/O error, dev sdc, sector 22687802010
Jul  3 12:41:42 nimbus kernel: end_request: I/O error, dev sdc, sector 22687800986
Jul  3 12:41:42 nimbus kernel: end_request: I/O error, dev sdc, sector 22687799962
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x26df0       ("xfs_trans_read_buf") error 5 buf count 8192
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 12:41:42 nimbus kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x32ee86931       ("xlog_iodone") error 5 buf count 24576
Jul  3 12:41:42 nimbus kernel: Filesystem "dm-0": Log I/O Error Detected.  Shutting down filesystem: dm-0
Jul  3 12:41:42 nimbus kernel: scsi 1:0:0:0: rejecting I/O to dead device
Jul  3 13:42:57 nimbus kernel: serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
Jul  3 13:42:57 nimbus kernel: serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
Jul  3 13:42:57 nimbus kernel: 00:06: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
Jul  3 13:42:57 nimbus kernel: 00:07: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
[root@nimbus log]# 


* Re: filesystem shrinks after using xfs_repair
@ 2010-07-12  6:39 Eli Morris
  0 siblings, 0 replies; 45+ messages in thread
From: Eli Morris @ 2010-07-12  6:39 UTC (permalink / raw)
  To: xfs

> Eli Morris put forth on 7/11/2010 8:10 PM:
>> Hi guys,
>>
>> Here are some of the log files from my XFS problem. Yes, I think this all
>> started with a hardware failure of some sort. My storage is RAID 6, an
>> Astra SecureStor ES.
>>
>>
>> [root@nimbus log]# more messages.1 | grep I/O
>> Jul  2 17:02:30 nimbus kernel: sd 6:0:0:0: rejecting I/O to offline device
>> Jul  2 17:02:30 nimbus kernel: sd 6:0:0:0: rejecting I/O to offline device
>> Jul  2 17:02:30 nimbus kernel: sr 5:0:0:0: rejecting I/O to offline device
>
> <snip>
>
> What does the web gui log on the Astra ES tell you?
>
> If the Astra supports syslogging (I assume it does as it is billed as
> "enterprise class") you should configure that to facilitate consistent error
> information gathering--i.e. grep everything from one terminal session.
>
> --
> Stan

Hi,
After I got into work on Tuesday, I looked at the log files from the web interface for the Astra RAID controller. I also contacted support and sent them a system report, which contains the logs as well as information about the system. Neither he nor I saw any problems in the log files. The support person said he could not find any problems with the units. The only thing he mentioned was that I should turn off the SMART daemon, because it does not work with RAID units. I'll note also that since the day the I/O errors occurred, I have not seen any additional errors from the units, although that may be because we are not reading or writing to them, for obvious reasons.

If it turns out we cannot recover the lost data, I still need to get the remaining filesystem stable, so we can restore what we can from backup and get going again. And I'm really wondering why, if I grow the filesystem back to the size of the LVM logical volume and then run xfs_repair, it shrinks back down again. For all I know, that always happens when one runs xfs_repair on a just-expanded filesystem. I'll check into the syslogging capability of the Astra, but as of now I have to look at its separate log files through the web GUI.
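
For the syslog route: on CentOS 5 the stock syslog daemon (sysklogd) can accept remote messages if started with -r. A minimal sketch of the receiving side, assuming the default syslog package rather than rsyslog (the Astra-side configuration would be done from its web GUI and is not shown here):

  # /etc/sysconfig/syslog on the receiving host
  SYSLOGD_OPTIONS="-r -m 0"      # -r: accept remote UDP syslog on port 514
  # then restart the daemon:
  service syslog restart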

thanks,
Eli






Thread overview: 45+ messages
2010-07-09 23:07 filesystem shrinks after using xfs_repair Eli Morris
2010-07-10  8:16 ` Stan Hoeppner
2010-07-24 21:09 ` Eric Sandeen
  -- strict thread matches above, loose matches on Subject: below --
2010-07-11  6:32 Eli Morris
2010-07-11 10:56 ` Stan Hoeppner
2010-07-11 16:29 ` Emmanuel Florac
2010-07-12  1:10 Eli Morris
2010-07-12  2:24 ` Stan Hoeppner
2010-07-12 11:47 ` Emmanuel Florac
2010-07-23  8:30   ` Eli Morris
2010-07-23 10:23     ` Emmanuel Florac
2010-07-23 16:36       ` Eli Morris
2010-07-24  0:54     ` Dave Chinner
2010-07-24  1:08       ` Eli Morris
2010-07-24  2:39         ` Dave Chinner
2010-07-26  3:20           ` Eli Morris
2010-07-26  3:45             ` Dave Chinner
2010-07-26  4:04               ` Eli Morris
2010-07-26  5:57                 ` Michael Monnerie
2010-07-26  6:06                 ` Dave Chinner
2010-07-26  6:46                   ` Eli Morris
2010-07-26  8:40                     ` Michael Monnerie
2010-07-26  9:49                     ` Emmanuel Florac
2010-07-26 17:22                       ` Eli Morris
2010-07-26 18:33                         ` Stuart Rowan
2010-07-26 21:06                         ` Emmanuel Florac
2010-07-27  5:02                           ` Eli Morris
2010-07-27  6:48                             ` Stan Hoeppner
2010-07-27  8:21                             ` Michael Monnerie
2010-07-26 10:20                     ` Dave Chinner
2010-07-28  5:12                       ` Eli Morris
2010-07-29 19:22                         ` Eli Morris
2010-07-29 22:09                           ` Emmanuel Florac
2010-07-29 22:48                             ` Eli Morris
2010-07-29 23:01                           ` Dave Chinner
2010-07-29 23:15                             ` Eli Morris
2010-07-30  0:39                               ` Michael Monnerie
2010-07-30  1:49                                 ` Eli Morris
2010-07-30  7:15                               ` Emmanuel Florac
2010-07-30  7:57                                 ` Christoph Hellwig
2010-07-30 10:23                                   ` Michael Monnerie
2010-07-30 10:29                                     ` Christoph Hellwig
2010-07-30 12:40                                       ` Michael Monnerie
2010-07-30 13:17                                       ` Emmanuel Florac
2010-07-12  6:39 Eli Morris
