public inbox for linux-xfs@vger.kernel.org
 help / color / mirror / Atom feed
* XFS: Assertion failed: fs_is_ok, file: fs/xfs/xfs_alloc.c, line: 1590
@ 2014-01-27  7:12 Dmitriy Yu Leonov
  2014-01-27 23:13 ` Dave Chinner
  0 siblings, 1 reply; 2+ messages in thread
From: Dmitriy Yu Leonov @ 2014-01-27  7:12 UTC (permalink / raw)
  To: xfs


[-- Attachment #1.1: Type: text/plain, Size: 885 bytes --]


Hello, dear developers,

I have run into a problem with XFS. I have been using an XFS
filesystem on this server without problems for three years. Recently
I discovered that the disk (a RAID array) holding the XFS filesystem
is no longer available.

The RAID controller log shows: 2014-01-24 07:12:34 H/W Monitor Raid
Powered On

When I restarted the server, I found that the RAID array device
/dev/sdb1 (XFS filesystem) would not mount.

When I run xfs_repair -P /dev/sdb1, it hangs. When I run
mount /dev/sdb1, no errors are reported, but the command also hangs.
The tasks cannot be terminated, even with kill -9 <pid>.

Please help me understand the cause of the failure and repair the
XFS filesystem. I have attached debugging information to this
message.

I hope for your help. Thanks in advance.

--
  Sincerely, Dmitry Leonov.

(See attached file: Report_XFS_Volume_20140127.txt)

[-- Attachment #1.2: Type: text/html, Size: 1135 bytes --]

[-- Attachment #2: Report_XFS_Volume_20140127.txt --]
[-- Type: application/octet-stream, Size: 12674 bytes --]

uname -a
Linux host 3.10.25-gentoo #2 SMP Fri Jan 24 14:13:10 MSK 2014 x86_64 Intel(R) Xeon(TM) CPU 3.00GHz GenuineIntel GNU/Linux

emerge --info
Portage 2.2.7 (default/linux/amd64/13.0, gcc-4.7.3, glibc-2.17, 3.10.25-gentoo x86_64)
=================================================================
System uname: Linux-3.10.25-gentoo-x86_64-Intel-R-_Xeon-TM-_CPU_3.00GHz-with-gentoo-2.2
KiB Mem:     4053236 total,    888280 free
KiB Swap:    4008212 total,   4008212 free
Timestamp of tree: Sun, 26 Jan 2014 07:00:01 +0000
ld GNU ld (GNU Binutils) 2.23.2
app-shells/bash:          4.2_p45
dev-java/java-config:     2.1.12-r1
dev-lang/python:          2.4.4-r13, 2.5.4-r5, 2.6.8-r3, 2.7.5-r3, 3.1.5-r1, 3.2.5-r3, 3.3.3
dev-util/cmake:           2.8.11.2
dev-util/pkgconfig:       0.28
sys-apps/baselayout:      2.2
sys-apps/openrc:          0.12.4
sys-apps/sandbox:         2.6-r1
sys-devel/autoconf:       2.13, 2.69
sys-devel/automake:       1.6.3, 1.7.9-r1, 1.9.6-r3, 1.10.3, 1.11.6, 1.12.6, 1.13.4
sys-devel/binutils:       2.23.2
sys-devel/gcc:            4.1.2, 4.3.6-r1, 4.4.7, 4.5.4, 4.6.3, 4.7.3-r1
sys-devel/gcc-config:     1.7.3
sys-devel/libtool:        2.4.2
sys-devel/make:           3.82-r4
sys-kernel/linux-headers: 3.9 (virtual/os-headers)
sys-libs/glibc:           2.17
Repositories: gentoo
ACCEPT_KEYWORDS="amd64"
ACCEPT_LICENSE="* -@EULA dlj-1.1"
CBUILD="x86_64-pc-linux-gnu"
CFLAGS="-O2 -pipe"
CHOST="x86_64-pc-linux-gnu"
CONFIG_PROTECT="/etc /usr/share/config /usr/share/gnupg/qualified.txt"
CONFIG_PROTECT_MASK="/etc/ca-certificates.conf /etc/env.d /etc/fonts/fonts.conf /etc/gconf /etc/gentoo-release /etc/php/apache2-php5.5/ext-active/ /etc/php/cgi-php5.5/ext-active/ /etc/php/cli-php5.5/ext-active/ /etc/revdep-rebuild /etc/sandbox.d /etc/terminfo"
CXXFLAGS="-O2 -pipe"
DISTDIR="/usr/portage/distfiles"
EMERGE_DEFAULT_OPTS="--autounmask=y --autounmask-write=y"
FCFLAGS="-O2 -pipe"
FEATURES="assume-digests binpkg-logs config-protect-if-modified distlocks ebuild-locks fixlafiles merge-sync news parallel-fetch preserve-libs protect-owned sandbox sfperms strict unknown-features-warn unmerge-logs unmerge-orphans userfetch userpriv usersandbox usersync"
FFLAGS="-O2 -pipe"
GENTOO_MIRRORS="http://mirror.yandex.ru/gentoo-distfiles/ ftp://mirror.yandex.ru/gentoo-distfiles/ "
LANG="ru_RU.UTF-8"
LC_ALL=""
LDFLAGS="-Wl,-O1 -Wl,--as-needed"
MAKEOPTS="-j4"
PKGDIR="/usr/portage/packages"
PORTAGE_CONFIGROOT="/"
PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --omit-dir-times --compress --force --whole-file --delete --stats --human-readable --timeout=180 --exclude=/distfiles --exclude=/local --exclude=/packages"
PORTAGE_TMPDIR="/var/tmp"
PORTDIR="/usr/portage"
PORTDIR_OVERLAY=""
USE="X acl amd64 berkdb bzip2 cli cracklib crypt cxx dbus dri fortran gdbm iconv mmx modules multilib ncurses nls nptl opengl openmp pam pcre pic qt3 qt3support qt4 readline session slang sse sse2 ssl svg tcpd unicode utf8 zlib" ABI_X86="64" ALSA_CARDS="ali5451 als4000 atiixp atiixp-modem bt87x ca0106 cmipci emu10k1x ens1370 ens1371 es1938 es1968 fm801 hda-intel intel8x0 intel8x0m maestro3 trident usb-audio via82xx via82xx-modem ymfpci" APACHE2_MODULES="authn_core authz_core socache_shmcb unixd actions alias auth_basic authn_alias authn_anon authn_dbm authn_default authn_file authz_dbm authz_default authz_groupfile authz_host authz_owner authz_user autoindex cache cgi cgid dav dav_fs dav_lock deflate dir disk_cache env expires ext_filter file_cache filter headers include info log_config logio mem_cache mime mime_magic negotiation rewrite setenvif speling status unique_id userdir usertrack vhost_alias" CALLIGRA_FEATURES="kexi words flow plan sheets stage tables krita karbon braindump author" CAMERAS="ptp2" COLLECTD_PLUGINS="df interface irq load memory rrdtool swap syslog" ELIBC="glibc" GPSD_PROTOCOLS="ashtech aivdm earthmate evermore fv18 garmin garmintxt gpsclock itrax mtk3301 nmea ntrip navcom oceanserver oldstyle oncore rtcm104v2 rtcm104v3 sirf superstar2 timing tsip tripmate tnt ublox ubx" INPUT_DEVICES="keyboard mouse" KERNEL="linux" LCD_DEVICES="bayrad cfontz cfontz633 glk hd44780 lb216 lcdm001 mtxorb ncurses text" LIBREOFFICE_EXTENSIONS="presenter-console presenter-minimizer" LINGUAS="ru en" OFFICE_IMPLEMENTATION="libreoffice" PHP_TARGETS="php5-5" PYTHON_SINGLE_TARGET="python2_7" PYTHON_TARGETS="python2_7 python3_3" RUBY_TARGETS="ruby19 ruby18" USERLAND="GNU" VIDEO_CARDS="fbdev glint intel mach64 mga nouveau nv r128 radeon savage sis tdfx trident vesa via vmware dummy v4l" XTABLES_ADDONS="quota2 psd pknock lscan length2 ipv4options ipset ipp2p iface geoip fuzzy condition tee tarpit sysrq steal rawnat logmark ipmark dhcpmac delude chaos account"
Unset:  CPPFLAGS, CTARGET, INSTALL_MASK, PORTAGE_BUNZIP2_COMMAND, PORTAGE_COMPRESS, PORTAGE_COMPRESS_FLAGS, PORTAGE_RSYNC_EXTRA_OPTS, SYNC, USE_PYTHON

xfs_repair -V
xfs_repair version 3.1.10

CPU
cat /proc/cpuinfo | grep ^processor |wc -l
8

cat /proc/meminfo 
MemTotal:        4053236 kB
MemFree:          988648 kB
Buffers:          584656 kB
Cached:          1541776 kB
SwapCached:            0 kB
Active:          1508616 kB
Inactive:        1263696 kB
Active(anon):     665372 kB
Inactive(anon):     9376 kB
Active(file):     843244 kB
Inactive(file):  1254320 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       4008212 kB
SwapFree:        4008212 kB
Dirty:              4900 kB
Writeback:             0 kB
AnonPages:        645728 kB
Mapped:            46112 kB
Shmem:             28880 kB
Slab:             241332 kB
SReclaimable:     210156 kB
SUnreclaim:        31176 kB
KernelStack:        2448 kB
PageTables:        11068 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     6034828 kB
Committed_AS:    2004888 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       36816 kB
VmallocChunk:   34359668736 kB
DirectMap4k:        7552 kB
DirectMap2M:     4186112 kB

cat /proc/mounts
rootfs / rootfs rw 0 0
proc /proc proc rw,relatime 0 0
udev /dev devtmpfs rw,nosuid,relatime,size=10240k,nr_inodes=506141,mode=755 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
/dev/sda3 / ext3 rw,noatime,errors=continue,barrier=1,data=ordered 0 0
tmpfs /run tmpfs rw,nosuid,nodev,relatime,size=405324k,mode=755 0 0
mqueue /dev/mqueue mqueue rw,nosuid,nodev,noexec,relatime 0 0
shm /dev/shm tmpfs rw,nosuid,nodev,noexec,relatime 0 0
configfs /sys/kernel/config configfs rw,nosuid,nodev,noexec,relatime 0 0
cgroup_root /sys/fs/cgroup tmpfs rw,nosuid,nodev,noexec,relatime,size=10240k,mode=755 0 0
fusectl /sys/fs/fuse/connections fusectl rw,nosuid,nodev,noexec,relatime 0 0
openrc /sys/fs/cgroup/openrc cgroup rw,nosuid,nodev,noexec,relatime,release_agent=/lib64/rc/sh/cgroup-release-agent.sh,name=openrc 0 0
cpuset /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cpuacct /sys/fs/cgroup/cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpuacct 0 0

cat /proc/partitions 
major minor  #blocks  name

  11        0    1048575 sr0
   8        0   72613056 sda
   8        1     104391 sda1
   8        2    4008217 sda2
   8        3   68493127 sda3
   8       16 7324216320 sdb
   8       17 7324216286 sdb1

RAID: lspci
0a:0e.0 RAID bus controller: Areca Technology Corp. ARC-1260 16-Port PCI-Express to SATA RAID Controller
12 x 750.2 GB SATA disks + 3 hot spares

Write cache enabled

dmesg
[   23.667987] XFS (sdb1): Mounting Filesystem
[   23.803554] XFS (sdb1): Starting recovery (logdev: internal)
[   25.300056] XFS: Assertion failed: fs_is_ok, file: fs/xfs/xfs_alloc.c, line: 1590
[   25.300231] ------------[ cut here ]------------
[   25.300292] kernel BUG at fs/xfs/xfs_message.c:108!
[   25.300361] invalid opcode: 0000 [#1] SMP
[   25.300427] Modules linked in: radeon mperf freq_table i2c_algo_bit drm_kms_helper kvm ttm ppdev drm i2c_i801 e1000e ioatdma floppy dca i2c_core backlight parport_pc i5k_amb microcode pata_acpi rtc_cmos button processor pcspkr thermal_sys xts gf128mul aes_x86_64 cbc sha256_generic libiscsi scsi_transport_iscsi tg3 libphy ptp pps_core e1000 fuse nfs lockd sunrpc jfs multipath linear raid10 raid456 async_pq async_xor xor async_memcpy async_raid6_recov raid6_pq async_tx raid1 raid0 dm_snapshot dm_crypt dm_mirror dm_region_hash dm_log dm_mod hid_sunplus hid_sony hid_samsung hid_pl hid_petalynx hid_gyration sl811_hcd usbhid xhci_hcd ohci_hcd uhci_hcd usb_storage ehci_pci ehci_hcd usbcore usb_common aic94xx libsas lpfc crc_t10dif qla2xxx megaraid_sas megaraid_mbox megaraid_mm megaraid aacraid sx8 DAC960
[   25.302913]  cciss 3w_9xxx 3w_xxxx mptsas scsi_transport_sas mptfc scsi_transport_fc scsi_tgt mptspi mptscsih mptbase atp870u dc395x qla1280 imm parport dmx3191d sym53c8xx gdth advansys initio BusLogic arcmsr aic7xxx aic79xx scsi_transport_spi sg pdc_adma sata_inic162x sata_mv ata_piix ahci libahci sata_qstorsata_vsc sata_uli sata_sis sata_sx4 sata_nv sata_via sata_svw sata_sil24 sata_sil sata_promise pata_sl82c105 pata_cs5530 pata_cs5520 pata_via pata_jmicron pata_marvell pata_sis pata_netcell pata_sc1200 pata_pdc202xx_old pata_triflex pata_atiixp pata_opti pata_amd pata_ali pata_it8213 pata_pcmcia pcmcia pcmcia_core pata_ns87415 pata_ns87410 pata_serverworks pata_artop pata_it821x pata_optidma pata_hpt3x2n pata_hpt3x3 pata_hpt37x pata_hpt366 pata_cmd64x pata_efar pata_rz1000 pata_sil680 pata_radisys
[   25.305220]  pata_pdc2027x pata_mpiix libata
[   25.305328] CPU: 2 PID: 17239 Comm: mount Not tainted 3.10.25-gentoo #2
[   25.305416] Hardware name: Supermicro X7DB8/X7DB8, BIOS 6.00 01/26/2007
[   25.310021] task: ffff8801275ea680 ti: ffff8801284d2000 task.ti: ffff8801284d2000
[   25.310021] RIP: 0010:[<ffffffff811c9e93>]  [<ffffffff811c9e93>] assfail+0x1d/0x1f
[   25.310021] RSP: 0018:ffff8801284d3ad8  EFLAGS: 00010296
[   25.310021] RAX: 0000000000000045 RBX: 0000000000000000 RCX: 00000000000000a9
[   25.310021] RDX: 000000000000002e RSI: 0000000000000046 RDI: ffffffff8177f224
[   25.310021] RBP: ffff8801284d3ad8 R08: 0000000000000000 R09: ffff880126af9010
[   25.310021] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8801277f5cb0
[   25.310021] R13: 0000000000000004 R14: 00000000008e1e8e R15: ffff88012801eea8
[   25.310021] FS:  00007f5812773780(0000) GS:ffff88012fc80000(0000) knlGS:0000000000000000
[   25.310021] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[   25.310021] CR2: 00007f581236d2c0 CR3: 000000012ac16000 CR4: 00000000000007e0
[   25.310021] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   25.310021] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[   25.310021] Stack:
[   25.310021]  ffff8801284d3b78 ffffffff811d238a 0000000000000000 0000000000000000
[   25.310021]  0000000000000001 ffff88012629b500 ffff8801278ca800 0000000000000000
[   25.310021]  0000000800000000 0001000000000000 0000000100000000 000000012629b500
[   25.310021] Call Trace:
[   25.310021]  [<ffffffff811d238a>] xfs_free_ag_extent+0xe4/0x769
[   25.310021]  [<ffffffff811d3c47>] xfs_free_extent+0xf5/0x13e
[   25.310021]  [<ffffffff811cfbd4>] ? kmem_zone_alloc+0x5e/0xaa
[   25.310021]  [<ffffffff8120af5b>] xlog_recover_process_efi+0x145/0x19a
[   25.310021]  [<ffffffff8120b03c>] xlog_recover_process_efis+0x8c/0xd1
[   25.310021]  [<ffffffff8120bd56>] xlog_recover_finish+0x18/0x9a
[   25.310021]  [<ffffffff812124f7>] xfs_log_mount_finish+0x22/0x5a
[   25.310021]  [<ffffffff8120ea83>] xfs_mountfs+0x4fa/0x609
[   25.310021]  [<ffffffff811cc934>] xfs_fs_fill_super+0x25b/0x312
[   25.310021]  [<ffffffff810c3dfc>] mount_bdev+0x13e/0x199
[   25.310021]  [<ffffffff811cc6d9>] ? xfs_finish_flags+0x10b/0x10b
[   25.310021]  [<ffffffff811cac78>] xfs_fs_mount+0x10/0x12
[   25.310021]  [<ffffffff810c46e3>] mount_fs+0x12/0xab
[   25.310021]  [<ffffffff810d88b1>] vfs_kern_mount+0x64/0xde
[   25.310021]  [<ffffffff810da775>] do_mount+0x681/0x7eb
[   25.310021]  [<ffffffff810a3470>] ? strndup_user+0x36/0x4c
[   25.310021]  [<ffffffff810da962>] SyS_mount+0x83/0xbd
[   25.310021]  [<ffffffff814a5d52>] system_call_fastpath+0x16/0x1b
[   25.310021] Code: 48 c7 c7 df bf 59 81 e8 1b 39 e6 ff 5d c3 55 48 89 f1 41 89 d0 48 c7 c6 f4 bf 59 81 48 89 fa 31 c0 48 89 e5 31 ff e8 aa fc ff ff <0f> 0b 55 48 63 f6 49 89 f9 41 b8 01 00 00 00 b9 10 00 00 00 ba
[   25.310021] RIP  [<ffffffff811c9e93>] assfail+0x1d/0x1f
[   25.310021]  RSP <ffff8801284d3ad8>
[   25.627768] ---[ end trace 53351e28beb29186 ]---




[-- Attachment #3: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 2+ messages in thread

* Re: XFS: Assertion failed: fs_is_ok, file: fs/xfs/xfs_alloc.c, line: 1590
  2014-01-27  7:12 XFS: Assertion failed: fs_is_ok, file: fs/xfs/xfs_alloc.c, line: 1590 Dmitriy Yu Leonov
@ 2014-01-27 23:13 ` Dave Chinner
  0 siblings, 0 replies; 2+ messages in thread
From: Dave Chinner @ 2014-01-27 23:13 UTC (permalink / raw)
  To: Dmitriy Yu Leonov; +Cc: xfs

On Mon, Jan 27, 2014 at 11:12:15AM +0400, Dmitriy Yu Leonov wrote:
> 
> Hello, dear developers
> 
> I have run into a problem with XFS. I have been using an XFS
> filesystem on this server without problems for three years. Recently
> I discovered that the disk (a RAID array) holding the XFS filesystem
> is no longer available.
> 
> The RAID controller log shows: 2014-01-24 07:12:34 H/W Monitor Raid
> Powered On

So something went wrong with the HW RAID, and then you found errors
in the filesystem?

> When I restarted the server, I found that the RAID array device
> /dev/sdb1 (XFS filesystem) would not mount.

It failed to mount with the stack trace that you attached? If so,
there's a corrupt freespace tree in the filesystem.

> When I run xfs_repair -P /dev/sdb1, it hangs. When I run
> mount /dev/sdb1, no errors are reported, but the command also hangs.
> The tasks cannot be terminated, even with kill -9 <pid>.

First of all, I'd suggest updating to at least version 3.1.11 of
xfsprogs. If it still hangs, then it's quite likely there's something
still wrong with your HW RAID.
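(As an aside: a task that survives kill -9 is almost always stuck in
uninterruptible sleep inside the kernel, which is consistent with the
hang described above. A minimal sketch for confirming that and seeing
where the task is blocked — the process name is just an example, and
reading /proc/<pid>/stack requires root:)

```shell
# State "D" in /proc/<pid>/stat means uninterruptible sleep; such tasks
# ignore all signals, including SIGKILL, until the kernel unblocks them.
task_state() {
    # Field 3 of /proc/<pid>/stat is the one-letter task state.
    awk '{ print $3 }' "/proc/$1/stat"
}

pid=$(pgrep -x xfs_repair || true)   # example process name
if [ -n "$pid" ]; then
    task_state "$pid"                # "D" for a hung repair
    cat "/proc/$pid/stack"           # kernel-side stack (root only)
fi
```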

Your first step is to make sure your HW RAID is healthy before
trying to repair or mount the filesystem....
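That ordering might look something like the sketch below. The RAID
health check is commented out because the exact command depends on the
controller's CLI (the Areca tool name here is an assumption), and the
exit-status meanings are taken from xfs_repair(8); the dirty-log status
of 2 exists in recent xfsprogs releases.

```shell
#!/bin/sh
# Sketch of the suggested order of operations; not a guaranteed recipe.

# 1. Confirm the hardware RAID is healthy before touching the filesystem:
#      cli64 vsf info        # Areca CLI name is an assumption

# 2. Dry-run repair: reports corruption without modifying the filesystem:
#      xfs_repair -n /dev/sdb1

# Interpret the dry-run exit status (per xfs_repair(8)):
repair_advice() {
    case "$1" in
        0) echo "no corruption detected" ;;
        1) echo "corruption detected: run xfs_repair without -n" ;;
        2) echo "dirty log: mount to replay it, or xfs_repair -L as a last resort" ;;
        *) echo "unexpected exit status: $1" ;;
    esac
}

repair_advice 1
```

Note that xfs_repair -L throws away the unreplayed log transactions, so
it is strictly a last resort once the RAID is known to be healthy.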

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 2+ messages in thread

end of thread, other threads:[~2014-01-27 23:14 UTC | newest]

Thread overview: 2+ messages (download: mbox.gz / follow: Atom feed)
2014-01-27  7:12 XFS: Assertion failed: fs_is_ok, file: fs/xfs/xfs_alloc.c, line: 1590 Dmitriy Yu Leonov
2014-01-27 23:13 ` Dave Chinner
