* [BUG] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 272
@ 2013-05-21 12:23 Arkadiusz Bubała
2013-05-21 12:31 ` Arkadiusz Bubała
2013-05-21 23:39 ` Dave Chinner
0 siblings, 2 replies; 8+ messages in thread
From: Arkadiusz Bubała @ 2013-05-21 12:23 UTC (permalink / raw)
To: xfs
Hello,
I've got a call trace that should have been fixed by the "drop buffer io
reference when a bad bio is built" patch (http://patchwork.xfs.org/patch/3956/).
The error occurred on an already patched Linux kernel 3.2.42.
The test environment consists of two machines, a target and an initiator.
The first machine works as the target with a QLogic Corp. ISP2432-based 4Gb
Fibre Channel device. Storage is placed on two KINGSTON SNV425S SSDs working
as a RAID0 array. The RAID is managed by an LSI MegaRAID SAS 1068 controller.
The second machine works as the initiator with the same QLogic card.
After a few days of running the test script I got the following call trace
and XFS stopped working.
[90012.196963] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0,
file: fs/xfs/xfs_mount.c, line: 272
[90012.196982] ------------[ cut here ]------------
[90012.196984] kernel BUG at fs/xfs/xfs_message.c:101!
[90012.196987] invalid opcode: 0000 [#1] SMP
[90012.196990] CPU 2
[90012.196992] Modules linked in: iscsi_scst(O) scst_vdisk(O) libcrc32c
qla2x00tgt(O) scst(O) ext2 drbd(O) iscsi_tcp libiscsi_tcp libiscsi
scsi_transport_iscsi bonding qla2xxx(O) sg scsi_transport_fc
megaraid_sas bnx2 acpi_power_meter usbserial uhci_hcd ohci_hcd ehci_hcd
aufs [last unloaded: megaraid_sas]
[90012.197013]
[90012.197016] Pid: 10262, comm: mount Tainted: G O
3.2.42-oe64-00000-g12db8b5 #14 Dell Inc. PowerEdge R510/0DPRKF
[90012.197022] RIP: 0010:[<ffffffff812ff8ed>] [<ffffffff812ff8ed>]
assfail+0x1d/0x30
[90012.197031] RSP: 0000:ffff8800be43fc68 EFLAGS: 00010296
[90012.197034] RAX: 0000000000000071 RBX: ffff8800512e8cc0 RCX:
0000000000000046
[90012.197037] RDX: 0000000000000000 RSI: 0000000000000046 RDI:
ffffffff81c1c380
[90012.197039] RBP: ffff8800be43fc68 R08: 0000000000000006 R09:
000000000000ffff
[90012.197042] R10: 0000000000000006 R11: 000000000000000a R12:
ffff8800a41f4800
[90012.197045] R13: 0000000000000000 R14: ffff8800a41f49e8 R15:
ffff8800a41f49f8
[90012.197049] FS: 0000000000000000(0000) GS:ffff88012b240000(0063)
knlGS:00000000f75456c0
[90012.197052] CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
[90012.197055] CR2: 000000000818e508 CR3: 000000001d9d7000 CR4:
00000000000006e0
[90012.197058] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[90012.197061] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
0000000000000400
[90012.197064] Process mount (pid: 10262, threadinfo ffff8800be43e000,
task ffff88007fe64a60)
[90012.197066] Stack:
[90012.197068] ffff8800be43fca8 ffffffff8134eb7a 0000000000000005
ffff8800a41f4800
[90012.197072] ffff8800a41f4818 0000000000000005 ffff8800a41f4800
0000000000000000
[90012.197076] ffff8800be43fcf8 ffffffff8135054d 0000000000000000
0000000000000000
[90012.197080] Call Trace:
[90012.197088] [<ffffffff8134eb7a>] xfs_free_perag+0x8a/0xc0
[90012.197092] [<ffffffff8135054d>] xfs_mountfs+0x31d/0x700
[90012.197097] [<ffffffff81301fab>] xfs_fs_fill_super+0x1cb/0x270
[90012.197103] [<ffffffff811476da>] mount_bdev+0x19a/0x1d0
[90012.197107] [<ffffffff81301de0>] ? xfs_fs_write_inode+0x180/0x180
[90012.197114] [<ffffffff8138b016>] ? selinux_sb_copy_data+0x156/0x1d0
[90012.197118] [<ffffffff81300200>] xfs_fs_mount+0x10/0x20
[90012.197123] [<ffffffff81146df1>] mount_fs+0x41/0x180
[90012.197129] [<ffffffff8115f7ae>] vfs_kern_mount+0x5e/0xc0
[90012.197133] [<ffffffff8116075e>] do_kern_mount+0x4e/0x100
[90012.197138] [<ffffffff81161f26>] do_mount+0x516/0x740
[90012.197144] [<ffffffff811064e9>] ? __get_free_pages+0x9/0x40
[90012.197150] [<ffffffff81187cc2>] compat_sys_mount+0xa2/0x220
[90012.197156] [<ffffffff8178be83>] ia32_do_call+0x13/0x13
[90012.197158] Code: 66 66 90 66 66 66 90 66 66 66 90 66 66 90 55 41 89
d0 48 89 f1 48 89 fa 48 c7 c6 b8 bf 9b 81 31 ff 48 89 e5 31 c0 e8 53 ff
ff ff <0f> 0b eb fe 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 55 31
[90012.197189] RIP [<ffffffff812ff8ed>] assfail+0x1d/0x30
[90012.197193] RSP <ffff8800be43fc68>
[90012.197196] ---[ end trace ed2e349225f77763 ]---
[90015.960377] XFS (dm-44): xfs_log_force: error 5 returned.
The test script takes a device name as a parameter. It creates a volume group
and two logical volumes with XFS filesystems. The filesystems are created
with force overwrite and the lazy-count=0 option. Twenty additional logical
volumes are created for snapshots. Once the test environment is prepared, the
script runs two dd processes in the background. These processes copy 50 data
blocks of size 100MB from /dev/zero to each logical volume. When all 50
blocks have been written they are removed and the whole operation starts again.
In an infinite loop the LVs are converted to snapshots (10 snapshots for
each logical volume). After starting all snapshots they are removed one
by one.
test_script.sh source:
#!/bin/bash
DEV=$1
if [ -z "$DEV" ]; then
echo "This program requires device name as parameter"
exit 1
fi
function overload()
{
COUNT=$1
temp_COUNT=$COUNT;
while [ -f ./run ]; do
while [ $COUNT -ge 1 ]; do
if [ -f ./run ]; then
dd bs=1024 count=102400 if=/dev/zero of=/$2/"_"$COUNT &> /dev/null
fi;
let COUNT=$COUNT-1
done;
rm $2/*;
COUNT=$temp_COUNT;
done;
}
function create_vg()
{
#create physical volume
pvcreate /dev/sda
if [[ $? -gt 0 ]]; then
echo "[ FAIL ] Unable to create physical volume"
exit 1
fi
#create volume group
vgcreate -v -s 32M vg0 /dev/sda
if [[ $? -gt 0 ]]; then
echo "[ FAIL ] Unable to create volume group"
exit 1
fi
VG="vg0"
}
function create_lv()
{
local LV="$1"
#create logical volume
lvcreate -l 500 -n "$VG+$LV" "$VG"
if [[ $? -gt 0 ]]; then
echo "[ FAIL ] Unable to create LV"
exit 1
fi
mkfs -t xfs -f -l lazy-count=0 /dev/$VG/"$VG+$LV" &>/dev/null
if [[ $? -gt 0 ]]; then
echo "[ FAIL ] Can't create filesystem"
exit 1
fi
}
function create_snapshots()
{
for ((i=0; i < 20; i++)); do
if [[ $i -lt 10 ]]; then
lvcreate -l "64" -n "snap0$i" "$VG"
if [[ $? -gt 0 ]]; then
echo "[ FAIL ] Unable to create snapshot LV"
exit 1
fi
else
lvcreate -l "64" -n "snap$i" "$VG"
if [[ $? -gt 0 ]]; then
echo "[ FAIL ] Unable to create snapshot LV"
exit 1
fi
fi
done
}
function assign_snapshots()
{
for ((i=0; i < 20; i++)); do
if [[ $i -lt 10 ]]; then
lvrename "$VG" "snap0$i" "lv0+snap0$i"
if [[ $? -gt 0 ]]; then
echo "[ FAIL ] Unable to rename snapshot LV"
exit 1
fi
else
lvrename "$VG" "snap$i" "lv1+snap$i"
if [[ $? -gt 0 ]]; then
echo "[ FAIL ] Unable to rename snapshot LV"
exit 1
fi
fi
done
}
function mount_volume()
{
local MVG=$1
local MLV=$2
mkdir -p "/test/mount/$MVG+$MLV"
if [[ $? -gt 0 ]]; then
echo "[ FAIL ] Unable to create mounting point"
exit 1
fi
mount -t xfs -o defaults,usrquota,grpquota,nouuid,noatime,nodiratime \
    "/dev/$MVG/$MVG+$MLV" "/test/mount/$MVG+$MLV"
if [[ $? -gt 0 ]]; then
echo "[ FAIL ] Unable to mount LV"
exit 1
fi
}
function start_overload()
{
touch ./run
mkdir -p /test/mount/$1+$2/test
overload 50 "/test/mount/$1+$2/test" $3 &
echo "overload $1 /test/mount/$1+$2/test $3 &"
sleep 4;
echo "[ OK ] copying files to $2 started"
}
function get_snapshot_status()
{
lvdisplay /dev/$1/$2 | awk ' $0~"LV snapshot status" { print $4 } '
}
function remove_snapshot()
{
local LVG=$1
local LLV=$2
local LSNAP=$3
umount "/test/mount/$LSNAP"
if [[ $? -gt 0 ]]; then
echo "[ FAIL ] Can't umount snapshot"
fi
lvremove -sf "/dev/$LVG/$LSNAP"
if [[ $? -gt 0 ]]; then
echo "[ FAIL ] Can't remove snapshot"
fi
}
function create_snapshot()
{
local LVG=$1
local LLV=$2
local LSNAP=$3
for((it=0; it<7; it++)); do
local ERROR=0
local STATUS=`get_snapshot_status $LVG $LSNAP`
if [[ "$STATUS" == "active" ]]; then
remove_snapshot $LVG $LLV $LSNAP
fi
STATUS=`get_snapshot_status $LVG $LSNAP`
if [[ "$STATUS" == "active" ]]; then
remove_snapshot $LVG $LLV $LSNAP
fi
CHUNKSIZE=512
for ((ile=0;ile<it/2;ile++)); do
CHUNKSIZE=$((CHUNKSIZE/2))
done
lvconvert -s -c $CHUNKSIZE "/dev/$LVG/$LVG+$LLV" "/dev/$LVG/$LSNAP"
if [[ $? -gt 0 ]]; then
ERROR=1
fi
#mount snapshot
mkdir -p "/test/mount/$LSNAP"
mount -t xfs -o nouuid,noatime "/dev/$LVG/$LSNAP" "/test/mount/$LSNAP"
if [[ $? -gt 0 ]]; then
ERROR=2
fi
create_time=`date "+%Y-%m-%d %H:%M:%S"`
if [ $ERROR -ne 0 ]; then
remove_snapshot $LVG $LLV $LSNAP
sleep 5
else
break;
fi
done
}
function start_snap()
{
local i;
for((i=0; i<20; i++)); do
echo "Starting snap$i : `date`"
local START=`date +%s`
if [[ $i -lt 10 ]]; then
snapname="lv0+snap0"$i
create_snapshot $VG "lv0" $snapname
else
snapname="lv1+snap"$i
create_snapshot $VG "lv1" $snapname
fi
if [ -z "`lvs | grep $snapname | grep $VG+lv`" ]; then
echo "[ FAIL ] $snapname not activated !!!"
else
echo "[ OK ] $snapname activated."
fi
if [ -z "`mount | grep $snapname`" ]; then
echo "[ FAIL ] $snapname not mounted !!!" >> $LOGFILE
else
echo "[ OK ] $snapname mounted."
fi
local STOP=$[`date +%s`-$START]
echo "Starting time : $STOP s."
echo "---------------------------"
sleep 2
done;
}
function stop_snap()
{
local i
for((i=0; i<20; i++)); do
echo "Stopping snap$i : `date`"
local START=`date +%s`
if [[ $i -lt 10 ]]; then
snapname="lv0+snap0"$i
remove_snapshot $VG "lv0" $snapname
else
snapname="lv1+snap"$i
remove_snapshot $VG "lv1" $snapname
fi
if [ "`lvs | grep $snapname | grep $VG+lv`" ]; then
echo "[ FAIL ] $snapname still active !!!"
else
echo "[ OK ] $snapname deactivated."
fi;
if [ "`mount | grep $snapname`" ]; then
echo "[ FAIL ] $snapname still mounted !!!" >> $LOGFILE
else
echo "[ OK ] $snapname umounted."
fi;
local STOP=$[`date +%s`-$START]
echo "Stopping time : $STOP s."
echo "---------------------------"
sleep 2
done;
}
echo "-------- Creating vg0 on $DEV..."
create_vg
echo "[ OK ] Volume group created successfully"
echo "-------- Creating logical volumes on $VG..."
create_lv "lv0"
create_lv "lv1"
echo "[ OK ] Logical volumes created successfully"
echo "-------- Mounting logical volumes..."
mount_volume "$VG" "lv0"
mount_volume "$VG" "lv1"
echo "[ OK ] Logical volumes mounted successfully"
echo "-------- Creating snapshots..."
create_snapshots
echo "[ OK ] Snapshots created successfully"
echo "-------- Assigning snapshots..."
assign_snapshots
echo "[ OK ] Snapshots assigned successfully"
echo "-------- Start overload..."
start_overload "vg0" "lv0"
start_overload "vg0" "lv1"
while true; do
start_snap 2> /dev/null
stop_snap 2> /dev/null
done
rm ./run
--
Best regards
Arkadiusz Bubała
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: [BUG] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 272
2013-05-21 12:23 [BUG] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 272 Arkadiusz Bubała
@ 2013-05-21 12:31 ` Arkadiusz Bubała
2013-05-21 18:26 ` Ben Myers
2013-05-21 23:39 ` Dave Chinner
1 sibling, 1 reply; 8+ messages in thread
From: Arkadiusz Bubała @ 2013-05-21 12:31 UTC (permalink / raw)
To: xfs
On 21.05.2013 14:23, Arkadiusz Bubała wrote:
> Hello,
> I've got a call trace which should be fixed by "drop buffer io
> reference when a bad bio is built" patch
> (http://patchwork.xfs.org/patch/3956/). The error occurred on an already
> patched Linux kernel 3.2.42.
>
> The test environment consists of two machines, a target and an initiator.
> First machine works as target with QLogic Corp. ISP2432-based 4Gb
> Fibre Channel device. Storage is placed on two KINGSTON SNV425S SSD
> working as RAID0 array. RAID is managed by LSI MegaRAID SAS 1068
> controller.
> Second machine works as initiator with the same QLogic card.
>
> After a few days of running the test script I got the following call trace
> and XFS stopped working.
>
Sorry, I provided incomplete dmesg logs. These should provide more
information:
[90011.884812] XFS (dm-46): metadata I/O error: block 0x1 ("xfs_trans_read_buf") error 5 buf count 512
[90011.941376] XFS (dm-46): xlog_recover_check_summary agf read failed agno 0 error 5
[90011.987890] XFS (dm-46): metadata I/O error: block 0x2 ("xfs_trans_read_buf") error 5 buf count 512
[90012.044179] XFS (dm-46): xlog_recover_check_summary agi read failed agno 0 error 5
[90012.092176] XFS (dm-46): metadata I/O error: block 0x7d0001 ("xfs_trans_read_buf") error 5 buf count 512
[90012.150379] XFS (dm-46): xlog_recover_check_summary agf read failed agno 1 error 5
[90012.196776] XFS (dm-46): metadata I/O error: block 0x7d0002 ("xfs_trans_read_buf") error 5 buf count 512
[90012.196780] XFS (dm-46): xlog_recover_check_summary agi read failed agno 1 error 5
[90012.196791] XFS (dm-46): metadata I/O error: block 0xfa0001 ("xfs_trans_read_buf") error 5 buf count 512
[90012.196795] XFS (dm-46): xlog_recover_check_summary agf read failed agno 2 error 5
[90012.196802] XFS (dm-46): metadata I/O error: block 0xfa0002 ("xfs_trans_read_buf") error 5 buf count 512
[90012.196806] XFS (dm-46): xlog_recover_check_summary agi read failed agno 2 error 5
[90012.196813] XFS (dm-46): metadata I/O error: block 0x1770001 ("xfs_trans_read_buf") error 5 buf count 512
[90012.196817] XFS (dm-46): xlog_recover_check_summary agf read failed agno 3 error 5
[90012.196823] XFS (dm-46): metadata I/O error: block 0x1770002 ("xfs_trans_read_buf") error 5 buf count 512
[90012.196827] XFS (dm-46): xlog_recover_check_summary agi read failed agno 3 error 5
[90012.196843] XFS (dm-46): metadata I/O error: block 0x40 ("xfs_trans_read_buf") error 5 buf count 8192
[90012.196847] XFS (dm-46): xfs_imap_to_bp: xfs_trans_read_buf() returned error 5.
[90012.196852] XFS (dm-46): failed to read root inode
[90012.196963] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 272
[90012.196982] ------------[ cut here ]------------
[90012.196984] kernel BUG at fs/xfs/xfs_message.c:101!
[90012.196987] invalid opcode: 0000 [#1] SMP
[90012.196990] CPU 2
[90012.196992] Modules linked in: iscsi_scst(O) scst_vdisk(O) libcrc32c qla2x00tgt(O) scst(O) ext2 drbd(O) iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi bonding qla2xxx(O) sg scsi_transport_fc megaraid_sas bnx2 acpi_power_meter usbserial uhci_hcd ohci_hcd ehci_hcd aufs [last unloaded: megaraid_sas]
[90012.197013]
[90012.197016] Pid: 10262, comm: mount Tainted: G O 3.2.42-oe64-00000-g12db8b5 #14 Dell Inc. PowerEdge R510/0DPRKF
[90012.197022] RIP: 0010:[<ffffffff812ff8ed>] [<ffffffff812ff8ed>] assfail+0x1d/0x30
[90012.197031] RSP: 0000:ffff8800be43fc68 EFLAGS: 00010296
[90012.197034] RAX: 0000000000000071 RBX: ffff8800512e8cc0 RCX: 0000000000000046
[90012.197037] RDX: 0000000000000000 RSI: 0000000000000046 RDI: ffffffff81c1c380
[90012.197039] RBP: ffff8800be43fc68 R08: 0000000000000006 R09: 000000000000ffff
[90012.197042] R10: 0000000000000006 R11: 000000000000000a R12: ffff8800a41f4800
[90012.197045] R13: 0000000000000000 R14: ffff8800a41f49e8 R15: ffff8800a41f49f8
[90012.197049] FS: 0000000000000000(0000) GS:ffff88012b240000(0063) knlGS:00000000f75456c0
[90012.197052] CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
[90012.197055] CR2: 000000000818e508 CR3: 000000001d9d7000 CR4: 00000000000006e0
[90012.197058] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[90012.197061] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[90012.197064] Process mount (pid: 10262, threadinfo ffff8800be43e000, task ffff88007fe64a60)
[90012.197066] Stack:
[90012.197068] ffff8800be43fca8 ffffffff8134eb7a 0000000000000005 ffff8800a41f4800
[90012.197072] ffff8800a41f4818 0000000000000005 ffff8800a41f4800 0000000000000000
[90012.197076] ffff8800be43fcf8 ffffffff8135054d 0000000000000000 0000000000000000
[90012.197080] Call Trace:
[90012.197088] [<ffffffff8134eb7a>] xfs_free_perag+0x8a/0xc0
[90012.197092] [<ffffffff8135054d>] xfs_mountfs+0x31d/0x700
[90012.197097] [<ffffffff81301fab>] xfs_fs_fill_super+0x1cb/0x270
[90012.197103] [<ffffffff811476da>] mount_bdev+0x19a/0x1d0
[90012.197107] [<ffffffff81301de0>] ? xfs_fs_write_inode+0x180/0x180
[90012.197114] [<ffffffff8138b016>] ? selinux_sb_copy_data+0x156/0x1d0
[90012.197118] [<ffffffff81300200>] xfs_fs_mount+0x10/0x20
[90012.197123] [<ffffffff81146df1>] mount_fs+0x41/0x180
[90012.197129] [<ffffffff8115f7ae>] vfs_kern_mount+0x5e/0xc0
[90012.197133] [<ffffffff8116075e>] do_kern_mount+0x4e/0x100
[90012.197138] [<ffffffff81161f26>] do_mount+0x516/0x740
[90012.197144] [<ffffffff811064e9>] ? __get_free_pages+0x9/0x40
[90012.197150] [<ffffffff81187cc2>] compat_sys_mount+0xa2/0x220
[90012.197156] [<ffffffff8178be83>] ia32_do_call+0x13/0x13
[90012.197158] Code: 66 66 90 66 66 66 90 66 66 66 90 66 66 90 55 41 89 d0 48 89 f1 48 89 fa 48 c7 c6 b8 bf 9b 81 31 ff 48 89 e5 31 c0 e8 53 ff ff ff <0f> 0b eb fe 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 55 31
[90012.197189] RIP [<ffffffff812ff8ed>] assfail+0x1d/0x30
[90012.197193] RSP <ffff8800be43fc68>
[90012.197196] ---[ end trace ed2e349225f77763 ]---
[90015.960377] XFS (dm-44): xfs_log_force: error 5 returned.
--
Best regards
Arkadiusz Bubała
* Re: [BUG] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 272
2013-05-21 12:31 ` Arkadiusz Bubała
@ 2013-05-21 18:26 ` Ben Myers
2013-05-22 8:06 ` Arkadiusz Bubała
0 siblings, 1 reply; 8+ messages in thread
From: Ben Myers @ 2013-05-21 18:26 UTC (permalink / raw)
To: Arkadiusz Bubała; +Cc: xfs
Hey Arkadiusz,
On Tue, May 21, 2013 at 02:31:17PM +0200, Arkadiusz Bubała wrote:
> On 21.05.2013 14:23, Arkadiusz Bubała wrote:
> >Hello,
> >I've got a call trace which should be fixed by "drop buffer io
> >reference when a bad bio is built" patch
> >(http://patchwork.xfs.org/patch/3956/). The error occurred on an already
> >patched Linux kernel 3.2.42.
> >
> >The test environment consists of two machines, a target and an initiator.
> >First machine works as target with QLogic Corp. ISP2432-based 4Gb
> >Fibre Channel device. Storage is placed on two KINGSTON SNV425S
> >SSD working as RAID0 array. RAID is managed by LSI MegaRAID SAS
> >1068 controller.
> >Second machine works as initiator with the same QLogic card.
> >
> >After a few days of running the test script I got the following call
> >trace and XFS stopped working.
> >
> Sorry I provided incomplete dmesg logs. These should provide more
> information:
>
> [90011.884812] XFS (dm-46): metadata I/O error: block 0x1 ("xfs_trans_read_buf") error 5 buf count 512
> [90011.941376] XFS (dm-46): xlog_recover_check_summary agf read failed agno 0 error 5
> [90011.987890] XFS (dm-46): metadata I/O error: block 0x2 ("xfs_trans_read_buf") error 5 buf count 512
> [90012.044179] XFS (dm-46): xlog_recover_check_summary agi read failed agno 0 error 5
> [90012.092176] XFS (dm-46): metadata I/O error: block 0x7d0001 ("xfs_trans_read_buf") error 5 buf count 512
> [90012.150379] XFS (dm-46): xlog_recover_check_summary agf read failed agno 1 error 5
> [90012.196776] XFS (dm-46): metadata I/O error: block 0x7d0002 ("xfs_trans_read_buf") error 5 buf count 512
> [90012.196780] XFS (dm-46): xlog_recover_check_summary agi read failed agno 1 error 5
> [90012.196791] XFS (dm-46): metadata I/O error: block 0xfa0001 ("xfs_trans_read_buf") error 5 buf count 512
> [90012.196795] XFS (dm-46): xlog_recover_check_summary agf read failed agno 2 error 5
> [90012.196802] XFS (dm-46): metadata I/O error: block 0xfa0002 ("xfs_trans_read_buf") error 5 buf count 512
> [90012.196806] XFS (dm-46): xlog_recover_check_summary agi read failed agno 2 error 5
> [90012.196813] XFS (dm-46): metadata I/O error: block 0x1770001 ("xfs_trans_read_buf") error 5 buf count 512
> [90012.196817] XFS (dm-46): xlog_recover_check_summary agf read failed agno 3 error 5
> [90012.196823] XFS (dm-46): metadata I/O error: block 0x1770002 ("xfs_trans_read_buf") error 5 buf count 512
> [90012.196827] XFS (dm-46): xlog_recover_check_summary agi read failed agno 3 error 5
> [90012.196843] XFS (dm-46): metadata I/O error: block 0x40 ("xfs_trans_read_buf") error 5 buf count 8192
> [90012.196847] XFS (dm-46): xfs_imap_to_bp: xfs_trans_read_buf() returned error 5.
> [90012.196852] XFS (dm-46): failed to read root inode
> [90012.196963] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 272
Any 'xlog_space_left' messages as were reported with the commit you mentioned?
Have you attempted to backport the patch yet?
Regards,
Ben
* Re: [BUG] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 272
2013-05-21 12:23 [BUG] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 272 Arkadiusz Bubała
2013-05-21 12:31 ` Arkadiusz Bubała
@ 2013-05-21 23:39 ` Dave Chinner
2013-05-22 8:11 ` Arkadiusz Bubała
1 sibling, 1 reply; 8+ messages in thread
From: Dave Chinner @ 2013-05-21 23:39 UTC (permalink / raw)
To: Arkadiusz Bubała; +Cc: xfs
On Tue, May 21, 2013 at 02:23:20PM +0200, Arkadiusz Bubała wrote:
> Hello,
> I've got a call trace which should be fixed by "drop buffer io
> reference when a bad bio is built" patch
> (http://patchwork.xfs.org/patch/3956/). The error occurred on an already
> patched Linux kernel 3.2.42.
That's an old kernel. Can you reproduce on a current TOT kernel?
It's entirely possible that this problem has been fixed, as we
definitely made some changes to the mount error handling path since
3.2....
>
> The test environment consists of two machines, a target and an initiator.
> First machine works as target with QLogic Corp. ISP2432-based 4Gb
> Fibre Channel device. Storage is placed on two KINGSTON SNV425S SSD
> working as RAID0 array. RAID is managed by LSI MegaRAID SAS 1068
> controller.
> Second machine works as initiator with the same QLogic card.
>
> After a few days of running the test script I got the following call trace
> and XFS stopped working.
Can you narrow this down from "takes several days" to the simplest
possible reproducer? It happened due to IO errors during mount, so
maybe you can dig that part out of your script and give us a test
case that reproduces on the first mount?
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: [BUG] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 272
2013-05-21 23:39 ` Dave Chinner
@ 2013-05-22 8:11 ` Arkadiusz Bubała
2013-05-22 14:04 ` Eric Sandeen
0 siblings, 1 reply; 8+ messages in thread
From: Arkadiusz Bubała @ 2013-05-22 8:11 UTC (permalink / raw)
To: xfs
Hello,
> On Tue, May 21, 2013 at 02:23:20PM +0200, Arkadiusz Bubała wrote:
>
>> Hello,
>> I've got a call trace which should be fixed by "drop buffer io
>> reference when a bad bio is built" patch
>> (http://patchwork.xfs.org/patch/3956/). The error occurred on an already
>> patched Linux kernel 3.2.42.
>>
> That's an old kernel. Can you reproduce on a current TOT kernel?
> It's entirely possible that this problem has been fixed as we
> definitely made some changes to the mount error handling path since
> 3.2....
>
>
Ok. I'll try.
>> The test environment consists of two machines, a target and an initiator.
>> First machine works as target with QLogic Corp. ISP2432-based 4Gb
>> Fibre Channel device. Storage is placed on two KINGSTON SNV425S SSD
>> working as RAID0 array. RAID is managed by LSI MegaRAID SAS 1068
>> controller.
>> Second machine works as initiator with the same QLogic card.
>>
>> After a few days of running the test script I got the following call trace
>> and XFS stopped working.
>>
> Can you narrow this down from "takes several days" to the simplest
> possible reproducer? It happened due to IO errors during mount, so
> maybe you can dig that part out of your script and give us a test
> case that reproduces on the first mount?
>
>
I'll try. These errors occur only under heavy load.
Is there any way to simulate I/O errors on an XFS filesystem?
--
Best regards
Arkadiusz Bubała
* Re: [BUG] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 272
2013-05-22 8:11 ` Arkadiusz Bubała
@ 2013-05-22 14:04 ` Eric Sandeen
2013-05-27 12:40 ` Arkadiusz Bubała
0 siblings, 1 reply; 8+ messages in thread
From: Eric Sandeen @ 2013-05-22 14:04 UTC (permalink / raw)
To: Arkadiusz Bubała; +Cc: xfs
On 5/22/13 3:11 AM, Arkadiusz Bubała wrote:
> Hello,
>> On Tue, May 21, 2013 at 02:23:20PM +0200, Arkadiusz Bubała wrote:
>>
>>> Hello,
>>> I've got a call trace which should be fixed by "drop buffer io
>>> reference when a bad bio is built" patch
>>> (http://patchwork.xfs.org/patch/3956/). The error occurred on an already
>>> patched Linux kernel 3.2.42.
>>>
>> That's an old kernel. Can you reproduce on a current TOT kernel?
>> It's entirely possible that this problem has been fixed as we
>> definitely made some changes to the mount error handling path since
>> 3.2....
>>
>>
> Ok. I'll try.
>>> The test environment consists of two machines, a target and an initiator.
>>> First machine works as target with QLogic Corp. ISP2432-based 4Gb
>>> Fibre Channel device. Storage is placed on two KINGSTON SNV425S SSD
>>> working as RAID0 array. RAID is managed by LSI MegaRAID SAS 1068
>>> controller.
>>> Second machine works as initiator with the same QLogic card.
>>>
>>> After a few days of running the test script I got the following call trace
>>> and XFS stopped working.
>>>
>> Can you narrow this down from "takes several days" to the simplest
>> possible reproducer? It happened due to IO errors during mount, so
>> maybe you can dig that part out of your script and give us a test
>> case that reproduces on the first mount?
>>
>>
> I'll try. These errors occur only under heavy load.
> Is there any way to simulate I/O errors on an XFS filesystem?
You can use something like dm-flakey or md-faulty block devices, perhaps.
-Eric
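A dm-flakey mapping for this kind of testing might look like the sketch below. The device path, sector count, and intervals are example values, not taken from this thread; the table format (<start> <num_sectors> flakey <dev> <offset> <up_interval> <down_interval>, intervals in seconds) is from the kernel's dm-flakey documentation:

```shell
# Sketch: build a dm-flakey table line for a device that passes I/O for
# 4 seconds, then fails all I/O for 1 second, repeating.
# /dev/sdb1 and the sector count are example values only.
make_flakey_table() {
  local dev=$1 sectors=$2 up_secs=$3 down_secs=$4
  # <start> <num_sectors> flakey <dev> <offset> <up_interval> <down_interval>
  echo "0 $sectors flakey $dev 0 $up_secs $down_secs"
}

table=$(make_flakey_table /dev/sdb1 2097152 4 1)
echo "$table"   # prints: 0 2097152 flakey /dev/sdb1 0 4 1
# As root, the mapping would then be created with:
#   dmsetup create flaky --table "$table"
# and the filesystem made and mounted on /dev/mapper/flaky.
```

Unlike md-faulty, which fails every Nth I/O, dm-flakey fails I/O in time windows, but either way the errors eventually land during a mount.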
* Re: [BUG] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 272
2013-05-22 14:04 ` Eric Sandeen
@ 2013-05-27 12:40 ` Arkadiusz Bubała
0 siblings, 0 replies; 8+ messages in thread
From: Arkadiusz Bubała @ 2013-05-27 12:40 UTC (permalink / raw)
To: xfs
Hello,
>>>> The test environment consists of two machines, a target and an initiator.
>>>> First machine works as target with QLogic Corp. ISP2432-based 4Gb
>>>> Fibre Channel device. Storage is placed on two KINGSTON SNV425S SSD
>>>> working as RAID0 array. RAID is managed by LSI MegaRAID SAS 1068
>>>> controller.
>>>> Second machine works as initiator with the same QLogic card.
>>>>
>>>> After a few days of running the test script I got the following call trace
>>>> and XFS stopped working.
>>>>
>>>>
>>> Can you narrow this down from "takes several days" to the simplest
>>> possible reproducer? It happened due to IO errors during mount, so
>>> maybe you can dig that part out of your script and give us a test
>>> case that reproduces on the first mount?
>>>
>>>
>>>
>> I'll try. These errors occur only under heavy load.
>> Is there any way to simulate I/O errors on an XFS filesystem?
>>
> You can use something like dm-flakey or md-faulty block devices, perhaps.
>
Thank you very much. md-faulty helped me reproduce this problem; I can
now repeat it after a few mounts.
A simple test script for reproducing the problem:
#!/bin/bash
mdadm --create -l faulty /dev/md0 -n1 /dev/sda
parted /dev/md0 --script mklabel msdos
parted /dev/md0 --script mkpart primary 1 100
mkfs -t xfs -f -l lazy-count=0 /dev/md0p1
mdadm -G /dev/md0 -l faulty --layout=rt40
mkdir /mnt/test
while true; do
mount -t xfs -o defaults,usrquota,grpquota,nouuid,noatime,nodiratime \
    /dev/md0p1 /mnt/test
sleep 1
umount /dev/md0p1
done
And a call trace:
[ 994.403980] XFS (md0p1): Mounting Filesystem
[ 994.439392] XFS (md0p1): Ending clean mount
[ 994.439805] XFS (md0p1): Quotacheck needed: Please wait.
[ 994.453523] XFS (md0p1): Quotacheck: Done.
[ 995.494715] XFS (md0p1): Mounting Filesystem
[ 995.495310] XFS (md0p1): metadata I/O error: block 0x1905f
("xlog_bread_noalign") error 5 buf count 512
[ 995.557848] XFS (md0p1): empty log check failed
[ 995.557851] XFS (md0p1): log mount/recovery failed: error 5
[ 995.558284] XFS (md0p1): log mount failed
[ 996.585465] XFS (md0p1): Mounting Filesystem
[ 996.604007] XFS (md0p1): Ending clean mount
[ 997.633585] XFS (md0p1): last sector read failed
[ 998.642557] XFS (md0p1): Mounting Filesystem
[ 998.660927] XFS (md0p1): metadata I/O error: block 0x60
("xfs_trans_read_buf") error 5 buf count 4096
[ 998.723018] XFS (md0p1): Ending clean mount
[ 999.751782] XFS (md0p1): Mounting Filesystem
[ 999.779665] XFS (md0p1): metadata I/O error: block 0x60
("xfs_trans_read_buf") error 5 buf count 4096
[ 999.842094] XFS (md0p1): Ending clean mount
[ 1000.867508] XFS (md0p1): Mounting Filesystem
[ 1000.895354] XFS (md0p1): metadata I/O error: block 0x40
("xfs_trans_read_buf") error 5 buf count 8192
[ 1000.958168] XFS (md0p1): xfs_imap_to_bp: xfs_trans_read_buf()
returned error 5.
[ 1000.958172] XFS (md0p1): failed to read root inode
[ 1000.958240] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0,
file: fs/xfs/xfs_mount.c, line: 272
[ 1001.022557] ------------[ cut here ]------------
[ 1001.054458] kernel BUG at fs/xfs/xfs_message.c:101!
[ 1001.054461] invalid opcode: 0000 [#1] SMP
[ 1001.054464] CPU 1
[ 1001.054465] Modules linked in: iscsi_scst(O) scst_vdisk(O) scst(O)
libcrc32c ext2 drbd(O) iscsi_tcp libiscsi_tcp libiscsi
scsi_transport_iscsi bonding sg e1000e(O) usbserial uhci_hcd ohci_hcd
ehci_hcd aufs [last unloaded: ohci_hcd]
[ 1001.054477]
[ 1001.054479] Pid: 18813, comm: mount Tainted: G O
3.2.42-oe64-00000-gd572330 #1 Supermicro C2SBC-Q/C2SBC-Q
[ 1001.054482] RIP: 0010:[<ffffffff812ff8ed>] [<ffffffff812ff8ed>]
assfail+0x1d/0x30
[ 1001.054488] RSP: 0000:ffff880112849c68 EFLAGS: 00010296
[ 1001.054490] RAX: 0000000000000071 RBX: ffff880108d77840 RCX:
ffff88013b006c00
[ 1001.054491] RDX: 00000000000000c2 RSI: 0000000000000000 RDI:
ffffffff81ded518
[ 1001.054493] RBP: ffff880112849c68 R08: ffff88013b006c00 R09:
ffff88013fd120c0
[ 1001.054494] R10: 0000000000000000 R11: 0000000000000002 R12:
ffff88013988d000
[ 1001.054496] R13: 0000000000000000 R14: ffff88013988d1e8 R15:
ffff88013988d1f8
[ 1001.054498] FS: 0000000000000000(0000) GS:ffff88013fd00000(0063)
knlGS:00000000f75386c0
[ 1001.054499] CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
[ 1001.054501] CR2: 00000000f75f5540 CR3: 0000000135d33000 CR4:
00000000000406e0
[ 1001.054502] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[ 1001.054504] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
0000000000000400
[ 1001.054506] Process mount (pid: 18813, threadinfo ffff880112848000,
task ffff8801355cb410)
[ 1001.054507] Stack:
[ 1001.054508] ffff880112849ca8 ffffffff8134eb7a 0000000000000005
ffff88013988d000
[ 1001.054510] ffff88013988d018 0000000000000005 ffff88013988d000
0000000000000000
[ 1001.054513] ffff880112849cf8 ffffffff8135054d 0000000000000000
0000000000000000
[ 1001.054515] Call Trace:
[ 1001.054519] [<ffffffff8134eb7a>] xfs_free_perag+0x8a/0xc0
[ 1001.054521] [<ffffffff8135054d>] xfs_mountfs+0x31d/0x700
[ 1001.054524] [<ffffffff81301fab>] xfs_fs_fill_super+0x1cb/0x270
[ 1001.054527] [<ffffffff811476da>] mount_bdev+0x19a/0x1d0
[ 1001.054529] [<ffffffff81301de0>] ? xfs_fs_write_inode+0x180/0x180
[ 1001.054533] [<ffffffff8138b006>] ? selinux_sb_copy_data+0x156/0x1d0
[ 1001.054536] [<ffffffff81300200>] xfs_fs_mount+0x10/0x20
[ 1001.054538] [<ffffffff81146df1>] mount_fs+0x41/0x180
[ 1001.054541] [<ffffffff8115f7ae>] vfs_kern_mount+0x5e/0xc0
[ 1001.054543] [<ffffffff8116075e>] do_kern_mount+0x4e/0x100
[ 1001.054545] [<ffffffff81161f26>] do_mount+0x516/0x740
[ 1001.054548] [<ffffffff811064e9>] ? __get_free_pages+0x9/0x40
[ 1001.054551] [<ffffffff81187cc2>] compat_sys_mount+0xa2/0x220
[ 1001.054554] [<ffffffff8178be43>] ia32_do_call+0x13/0x13
[ 1001.054555] Code: 66 66 90 66 66 66 90 66 66 66 90 66 66 90 55 41 89
d0 48 89 f1 48 89 fa 48 c7 c6 b8 bf 9b 81 31 ff 48 89 e5 31 c0 e8 53 ff
ff ff <0f> 0b eb fe 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 55 31
[ 1001.054571] RIP [<ffffffff812ff8ed>] assfail+0x1d/0x30
[ 1001.054573] RSP <ffff880112849c68>
[ 1001.054575] ---[ end trace 9fa869a5d6931100 ]---
I also reproduced this call trace on kernel 3.4.39:
[ 486.338190] md: bind<sda>
[ 486.342375] bio: create slab <bio-1> at 1
[ 486.342387] md0: detected capacity change from 0 to 500107771904
[ 486.354250] md0: p1
[ 486.709863] XFS (md0p1): Mounting Filesystem
[ 486.738741] XFS (md0p1): Ending clean mount
[ 486.739136] XFS (md0p1): Quotacheck needed: Please wait.
[ 486.767727] XFS (md0p1): Quotacheck: Done.
[ 487.808918] XFS (md0p1): Mounting Filesystem
[ 487.809500] XFS (md0p1): metadata I/O error: block 0x1905f
("xlog_bread_noalign") error 5 buf count 512
[ 487.871589] XFS (md0p1): empty log check failed
[ 487.871591] XFS (md0p1): log mount/recovery failed: error 5
[ 487.871667] XFS (md0p1): log mount failed
[ 488.899680] XFS (md0p1): Mounting Filesystem
[ 488.917484] XFS (md0p1): Ending clean mount
[ 489.943430] XFS (md0p1): last sector read failed
[ 490.957577] XFS (md0p1): Mounting Filesystem
[ 490.975022] XFS (md0p1): metadata I/O error: block 0x60
("xfs_trans_read_buf") error 5 buf count 4096
[ 491.037209] XFS (md0p1): Ending clean mount
[ 494.478260] Buffer I/O error on device md0p1, logical block 23
[ 494.479204] Buffer I/O error on device md0p1, logical block 63
[ 494.479540] Buffer I/O error on device md0p1, logical block 15
[ 498.577283] XFS (md0p1): Mounting Filesystem
[ 498.604225] XFS (md0p1): metadata I/O error: block 0x60
("xfs_trans_read_buf") error 5 buf count 4096
[ 498.666825] XFS (md0p1): Ending clean mount
[ 499.693310] XFS (md0p1): Mounting Filesystem
[ 499.719749] XFS (md0p1): metadata I/O error: block 0x40
("xfs_trans_read_buf") error 5 buf count 8192
[ 499.782405] XFS (md0p1): xfs_imap_to_bp: xfs_trans_read_buf()
returned error 5.
[ 499.782409] XFS (md0p1): failed to read root inode
[ 499.782482] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0,
file: fs/xfs/xfs_mount.c, line: 272
[ 499.846615] ------------[ cut here ]------------
[ 499.878410] kernel BUG at fs/xfs/xfs_message.c:101!
[ 499.878412] invalid opcode: 0000 [#1] SMP
[ 499.878415] CPU 0
[ 499.878416] Modules linked in: iscsi_scst(O) scst_vdisk(O) scst(O)
libcrc32c ext2 drbd(O) iscsi_tcp libiscsi_tcp libiscsi
scsi_transport_iscsi bonding sg e1000e(O) usbserial uhci_hcd ohci_hcd
ehci_hcd aufs [last unloaded: ohci_hcd]
[ 499.878426]
[ 499.878428] Pid: 17632, comm: mount Tainted: G O
3.4.39-oe64-00000-g8b0d7e5 #9 Supermicro C2SBC-Q/C2SBC-Q
[ 499.878431] RIP: 0010:[<ffffffff812fe68d>] [<ffffffff812fe68d>]
assfail+0x1d/0x30
[ 499.878437] RSP: 0000:ffff880135381c78 EFLAGS: 00010296
[ 499.878439] RAX: 0000000000000071 RBX: ffff88013a5c0540 RCX:
ffff88013b006c00
[ 499.878440] RDX: 00000000000000eb RSI: 0000000000000046 RDI:
0000000000000000
[ 499.878442] RBP: ffff880135381c78 R08: ffff88013b006c00 R09:
ffff88013fc12440
[ 499.878443] R10: ffff88013fd124e8 R11: 0000000000000000 R12:
ffff880139736000
[ 499.878445] R13: 0000000000000000 R14: ffff8801397361e8 R15:
ffff8801397361f8
[ 499.878446] FS: 0000000000000000(0000) GS:ffff88013fc00000(0063)
knlGS:00000000f75146c0
[ 499.878448] CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
[ 499.878449] CR2: 00000000f7751181 CR3: 00000001359b4000 CR4:
00000000000407f0
[ 499.878451] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[ 499.878452] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
0000000000000400
[ 499.878454] Process mount (pid: 17632, threadinfo ffff880135380000,
task ffff880135005120)
[ 499.878455] Stack:
[ 499.878456] ffff880135381cb8 ffffffff8134e68b 0000000000000005
ffff880139736000
[ 499.878458] ffff880139736018 0000000000000005 ffff880139736000
ffff8801352a9c00
[ 499.878461] ffff880135381d08 ffffffff8135008d 0000000000000000
0000000000000000
[ 499.878463] Call Trace:
[ 499.878466] [<ffffffff8134e68b>] xfs_free_perag+0x8b/0xc0
[ 499.878469] [<ffffffff8135008d>] xfs_mountfs+0x31d/0x700
[ 499.878471] [<ffffffff81300dd0>] xfs_fs_fill_super+0x1e0/0x280
[ 499.878474] [<ffffffff8114dcab>] mount_bdev+0x19b/0x1d0
[ 499.878476] [<ffffffff81300bf0>] ? xfs_fs_evict_inode+0x130/0x130
[ 499.878479] [<ffffffff81388f46>] ? selinux_sb_copy_data+0x156/0x1d0
[ 499.878481] [<ffffffff812ff050>] xfs_fs_mount+0x10/0x20
[ 499.878483] [<ffffffff8114d341>] mount_fs+0x41/0x180
[ 499.878486] [<ffffffff81166f99>] vfs_kern_mount+0x69/0xf0
[ 499.878488] [<ffffffff811670ae>] do_kern_mount+0x4e/0x100
[ 499.878490] [<ffffffff8116805a>] do_mount+0x51a/0x760
[ 499.878492] [<ffffffff8110bee9>] ? __get_free_pages+0x9/0x40
[ 499.878496] [<ffffffff8118ee32>] compat_sys_mount+0xa2/0x220
[ 499.878498] [<ffffffff8108694f>] ? sys_rt_sigprocmask+0xbf/0xd0
[ 499.878501] [<ffffffff81794169>] ia32_do_call+0x13/0x13
[ 499.878502] Code: 66 66 90 66 66 66 90 66 66 66 90 66 66 90 55 41 89
d0 48 89 f1 48 89 fa 48 c7 c6 d0 cb 9b 81 31 ff 48 89 e5 31 c0 e8 53 ff
ff ff <0f> 0b eb fe 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 55 31
[ 499.878518] RIP [<ffffffff812fe68d>] assfail+0x1d/0x30
[ 499.878520] RSP <ffff880135381c78>
[ 499.878522] ---[ end trace 37e88031b68311f3 ]---
--
Best regards
Arkadiusz Bubała
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 8+ messages
2013-05-21 12:23 [BUG] XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 272 Arkadiusz Bubała
2013-05-21 12:31 ` Arkadiusz Bubała
2013-05-21 18:26 ` Ben Myers
2013-05-22 8:06 ` Arkadiusz Bubała
2013-05-21 23:39 ` Dave Chinner
2013-05-22 8:11 ` Arkadiusz Bubała
2013-05-22 14:04 ` Eric Sandeen
2013-05-27 12:40 ` Arkadiusz Bubała