* Marking inode dirty latency > 1000 msec on XFS!
From: Török Edwin @ 2008-02-15 18:44 UTC (permalink / raw)
To: Arjan van de Ven; +Cc: xfs
Hi,
Using LatencyTOP 0.3 on the latest 2.6.25-git, I see latencies of over a
second in __mark_inode_dirty.
[see a stacktrace at end of this email]
I tried to locate xfs's implementation of super_operations.dirty_inode,
but it isn't specified in xfs_super.c.
I don't know how mark_inode_dirty ends up calling xfs_trans_commit, but
is it required to commit the dirty status of an inode to the transaction
log?
FWIW, this is a slow laptop hdd (5400 rpm, ST96812AS), but a latency of
1 second is still large.
Are there any settings I can tweak to reduce latency?
LatencyTOP output during a 'svn up' on llvm-gcc source tree:
Cause Maximum Percentage
Marking inode dirty 1105.8 msec 7.8 %
_xfs_buf_ioapply default_wake_function xlog_state_ 1065.2 msec 7.0 %
Deleting an inode 964.8 msec 20.0 %
_xfs_buf_ioapply default_wake_function xlog_state_ 780.1 msec 8.3 %
_xfs_buf_ioapply default_wake_function xlog_state_ 679.4 msec 3.3 %
_xfs_buf_ioapply default_wake_function xlog_state_ 610.1 msec 5.6 %
XFS I/O wait 585.9 msec 12.6 %
_xfs_buf_ioapply default_wake_function xlog_state_ 528.8 msec 6.8 %
Creating block layer request 499.6 msec 5.7 %
Earlier I saw this LatencyTOP output too:
Cause Maximum Percentage
XFS I/O wait 407.6 msec 53.4 %
Marking inode dirty 173.0 msec 0.9 %
Writing a page to disk 141.6 msec 42.6 %
__generic_unplug_device default_wake_function xfs_ 86.0 msec 0.3 %
Page fault 44.1 msec 0.2 %
kobject_put put_device blk_start_queueing __generi 15.9 msec 0.1 %
Scheduler: waiting for cpubuf_find kmem_zone_alloc 12.4 msec 2.2 %
put_device scsi_request_fn blk_start_queueing defa 4.9 msec 0.0 %
Waiting for event (poll) 4.7 msec 0.4 %
Process svn (10685)
Writing a page to disk 23.9 msec 55.9 %
XFS I/O wait 15.9 msec 35.2 %
Scheduler: waiting for cpu 0.8 msec 8.9 %
Raw output from /proc/latency_stats shows the stack traces:
7 93862 26567 _xfs_buf_ioapply default_wake_function
xlog_state_get_iclog_space xlog_state_release_iclog xlog_write
xfs_log_write _xfs_trans_commit __mark_inode_dirty igrab xfs_create
xfs_vn_mknod security_inode_permission
1 96331 96331 default_wake_function xlog_state_get_iclog_space
xlog_state_release_iclog xlog_write xfs_log_write _xfs_trans_commit
__mark_inode_dirty igrab xfs_create xfs_vn_mknod
security_inode_permission xfs_vn_permission
Best regards,
--Edwin
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: Marking inode dirty latency > 1000 msec on XFS!
From: Lachlan McIlroy @ 2008-02-22 6:31 UTC (permalink / raw)
To: Török Edwin; +Cc: Arjan van de Ven, xfs
Török Edwin wrote:
> Hi,
>
> Using LatencyTOP 0.3 on the latest 2.6.25-git, I see latencies of over a
> second in __mark_inode_dirty.
These are the maximum latencies, right? What would be useful here is the
average latency. The average might actually be quite low; if the maximum
was unusually large just once, looking at that figure alone can be
misleading.
> [see a stacktrace at end of this email]
>
> I tried to locate xfs's implementation of super_operations.dirty_inode,
> but it isn't specified in xfs_super.c.
Yeah, we don't currently have one.
> I don't know how mark_inode_dirty ends up calling xfs_trans_commit, but
It doesn't, but xfs_trans_commit() does eventually call mark_inode_dirty_sync()
through the IOP_FORMAT() log item operation. If we are committing a transaction
that involves an inode, then we must have just modified it, so this is a good
time to mark it dirty so that it gets written out to disk later.
> is it required to commit the dirty status of an inode to the transaction
> log?
No, not the dirty status - just the changes that made it dirty (actually the
dirty status is in the Linux inode and we commit the xfs inode to the log).
>
> FWIW, this is a slow laptop hdd (5400 rpm, ST96812AS), but latency of 1
> second is still big.
>
> Are there any settings I can tweak to reduce latency?
Um, not that I am aware of.
>
> LatencyTOP output during a 'svn up' on llvm-gcc source tree:
>
> Cause Maximum Percentage
> Marking inode dirty 1105.8 msec 7.8 %
> _xfs_buf_ioapply default_wake_function xlog_state_ 1065.2 msec 7.0 %
> Deleting an inode 964.8 msec 20.0 %
> _xfs_buf_ioapply default_wake_function xlog_state_ 780.1 msec 8.3 %
> _xfs_buf_ioapply default_wake_function xlog_state_ 679.4 msec 3.3 %
> _xfs_buf_ioapply default_wake_function xlog_state_ 610.1 msec 5.6 %
> XFS I/O wait 585.9 msec 12.6 %
> _xfs_buf_ioapply default_wake_function xlog_state_ 528.8 msec 6.8 %
> Creating block layer request 499.6 msec 5.7 %
>
> Earlier I've seen this latencyTOP output too:
>
> Cause Maximum Percentage
> XFS I/O wait 407.6 msec 53.4 %
> Marking inode dirty 173.0 msec 0.9 %
> Writing a page to disk 141.6 msec 42.6 %
> __generic_unplug_device default_wake_function xfs_ 86.0 msec 0.3 %
> Page fault 44.1 msec 0.2 %
> kobject_put put_device blk_start_queueing __generi 15.9 msec 0.1 %
> Scheduler: waiting for cpubuf_find kmem_zone_alloc 12.4 msec 2.2 %
> put_device scsi_request_fn blk_start_queueing defa 4.9 msec 0.0 %
> Waiting for event (poll) 4.7 msec 0.4 %
>
> Process svn (10685)
> Writing a page to disk 23.9 msec 55.9 %
> XFS I/O wait 15.9 msec 35.2 %
> Scheduler: waiting for cpu 0.8 msec 8.9 %
>
> Raw output from /proc/latency_stats shows the stack traces:
>
> 7 93862 26567 _xfs_buf_ioapply default_wake_function
> xlog_state_get_iclog_space xlog_state_release_iclog xlog_write
> xfs_log_write _xfs_trans_commit __mark_inode_dirty igrab xfs_create
> xfs_vn_mknod security_inode_permission
> 1 96331 96331 default_wake_function xlog_state_get_iclog_space
> xlog_state_release_iclog xlog_write xfs_log_write _xfs_trans_commit
> __mark_inode_dirty igrab xfs_create xfs_vn_mknod
> security_inode_permission xfs_vn_permission
>
These don't look like valid stacktraces - are you sure they aren't just
the major offenders for latency delays?
Lachlan
* Re: Marking inode dirty latency > 1000 msec on XFS!
From: David Chinner @ 2008-02-22 7:16 UTC (permalink / raw)
To: Lachlan McIlroy; +Cc: Török Edwin, Arjan van de Ven, xfs
On Fri, Feb 22, 2008 at 05:31:56PM +1100, Lachlan McIlroy wrote:
> Török Edwin wrote:
> >FWIW, this is a slow laptop hdd (5400 rpm, ST96812AS), but latency of 1
> >second is still big.
> >
> >Are there any settings I can tweak to reduce latency?
> Um, not that I am aware of.
Try using a filesystem that has lazy-count enabled.
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
* Re: Marking inode dirty latency > 1000 msec on XFS!
From: Török Edwin @ 2008-02-22 8:40 UTC (permalink / raw)
To: David Chinner; +Cc: Lachlan McIlroy, Arjan van de Ven, xfs
David Chinner wrote:
> On Fri, Feb 22, 2008 at 05:31:56PM +1100, Lachlan McIlroy wrote:
>
>> Török Edwin wrote:
>>
>>> FWIW, this is a slow laptop hdd (5400 rpm, ST96812AS), but latency of 1
>>> second is still big.
>>>
>>> Are there any settings I can tweak to reduce latency?
>>>
>> Um, not that I am aware of.
>>
>
> Try using a filesystem that has lazy-count enabled.
How do I activate lazy-count on an already existing filesystem?
I tried using xfs_db, but AFAICT it cannot set features2 flags.
I am using 2.9.6-1; should I check out xfsprogs from CVS?
Thanks,
--Edwin
* Re: Marking inode dirty latency > 1000 msec on XFS!
From: Török Edwin @ 2008-02-22 8:59 UTC (permalink / raw)
To: lachlan; +Cc: Arjan van de Ven, xfs
Lachlan McIlroy wrote:
> Török Edwin wrote:
>> Hi,
>>
>> Using LatencyTOP 0.3 on the latest 2.6.25-git, I see latencies of over a
>> second in __mark_inode_dirty.
> These are the maximum latencies, right?
Yes.
> What would be useful here is the average latency. The average might
> actually be quite low; if the maximum was unusually large just once,
> looking at that figure alone can be misleading.
I'll try to collect the raw numbers from /proc/latency_stats, which
contain a count, a total time, and a maximum time.
>> I don't know how mark_inode_dirty ends up calling xfs_trans_commit, but
> It doesn't, but xfs_trans_commit() does eventually call
> mark_inode_dirty_sync() through the IOP_FORMAT() log item operation.
> If we are committing a transaction that involves an inode, then we
> must have just modified it, so this is a good time to mark it dirty
> so that it gets written out to disk later.
Ok, then I misinterpreted the stacktrace, see below.
[@Arjan: are the stacktraces below valid?]
I thought mark_inode_dirty calls xfs_trans_commit().
>>
>> 7 93862 26567 _xfs_buf_ioapply default_wake_function
>> xlog_state_get_iclog_space xlog_state_release_iclog xlog_write
>> xfs_log_write _xfs_trans_commit __mark_inode_dirty igrab xfs_create
>> xfs_vn_mknod security_inode_permission
>> 1 96331 96331 default_wake_function xlog_state_get_iclog_space
>> xlog_state_release_iclog xlog_write xfs_log_write _xfs_trans_commit
>> __mark_inode_dirty igrab xfs_create xfs_vn_mknod
>> security_inode_permission xfs_vn_permission
>>
> These don't look like valid stacktraces - are you sure they aren't just
> the major offenders for latency delays?
Looking at this entry, I assumed stacktraces are meant to be interpreted
right to left (sys_read calls do_sync_read, etc.):
4 8 2 hrtick_set pipe_wait autoremove_wake_function pipe_read
do_sync_read autoremove_wake_function __resched_task
security_file_permission rw_verify_area vfs_read do_sync_read sys_read
However here, if _xfs_trans_commit is calling __mark_inode_dirty, I am
confused.
Also, we have default_wake_function here; does that somehow invalidate
the stacktrace, or change the direction I should read it in? Arjan?
7 93862 26567 _xfs_buf_ioapply default_wake_function
xlog_state_get_iclog_space xlog_state_release_iclog xlog_write
xfs_log_write _xfs_trans_commit __mark_inode_dirty igrab xfs_create
xfs_vn_mknod security_inode_permission
Best regards,
--Edwin
* Re: Marking inode dirty latency > 1000 msec on XFS!
From: Török Edwin @ 2008-02-22 10:20 UTC (permalink / raw)
To: lachlan; +Cc: Arjan van de Ven, xfs, David Chinner
Török Edwin wrote:
>> What would be useful here is the average latency. The average might
>> actually be quite low; if the maximum was unusually large just once,
>> looking at that figure alone can be misleading.
>>
>
> I'll try to collect the raw numbers from /proc/latency_stats, that
> contain a count, total time, and max time.
I was not able to reproduce the 1 second latency with David Chinner's
reduce-xaild-wakeups patch; the maximum latency I got was 685 msec.
See below for the numbers from /proc/latency_stats; the counters are
reset every 30 seconds, so I took several captures and noted the top 3
(for the 2nd case).
Notice the irq_exit in the first case (w/o patch), which is present in
the top latencies. There is no irq_exit in the second case.
@Arjan: I am not sure if stacktraces containing irq_exit() are entirely
valid, or the stacktrace should be stopped when it is encountered.
Here are the numbers w/o the wakeups patch (1 capture of grep mark_inode
/proc/latency_stats)
<count> <sum> <maximum> <stacktrace>
----------------
2 47699 36897 xfs_buf_free default_wake_function xfs_buf_lock xfs_getsb
xfs_trans_getsb xfs_trans_apply_sb_deltas _xfs_trans_commit
xfs_dir_createname kmem_zone_alloc __mark_inode_dirty igrab xfs_create
1 1045646 1045646 wake_up_state default_wake_function xfs_buf_lock
xfs_getsb xfs_trans_getsb xfs_trans_apply_sb_deltas _xfs_trans_commit
irq_exit xfs_dir_createname kmem_zone_alloc __mark_inode_dirty igrab
1 818946 818946 default_wake_function xfs_buf_lock xfs_getsb
xfs_trans_getsb xfs_trans_apply_sb_deltas _xfs_trans_commit irq_exit
xfs_dir_createname kmem_zone_alloc __mark_inode_dirty igrab xfs_create
Average = 478072 usecs
-----------------
Here are the numbers w/ the wakeups patch:(3 captures of grep mark_inode
/proc/latency_stats)
<count> <sum> <maximum> <stacktrace>
-----------
1 685021 685021 xfs_buf_free default_wake_function xfs_buf_lock
xfs_getsb xfs_trans_getsb xfs_trans_apply_sb_deltas _xfs_trans_commit
xfs_dir_createname xfs_dir_ialloc __mark_inode_dirty igrab xfs_create
Only one sample, average=max
----------
20 153016 8989 _xfs_buf_ioapply default_wake_function
xlog_state_get_iclog_space xlog_state_release_iclog xlog_write
xfs_log_write _xfs_trans_commit __mark_inode_dirty igrab xfs_create
xfs_vn_mknod security_inode_permission
Average: 153016/20 = 7650 usecs
----------
28 250553 28743 _xfs_buf_ioapply default_wake_function
xlog_state_get_iclog_space xlog_state_release_iclog xlog_write
xfs_log_write _xfs_trans_commit __mark_inode_dirty igrab xfs_create
xfs_vn_mknod security_inode_permission
4 32829 8843 _xfs_buf_ioapply default_wake_function
xlog_state_get_iclog_space xlog_state_release_iclog xlog_state_want_sync
xlog_write xfs_log_write _xfs_trans_commit __mark_inode_dirty igrab
xfs_create xfs_vn_mknod
1 7554 7554 _xfs_buf_ioapply default_wake_function
xlog_state_get_iclog_space xlog_state_release_iclog xlog_write
xfs_log_write _xfs_trans_commit xfs_bmap_last_offset xfs_dir_createname
xfs_dir_ialloc __mark_inode_dirty igrab
1 7011 7011 _xfs_buf_ioapply xfs_buf_iorequest xlog_bdstrat_cb
default_wake_function xlog_state_get_iclog_space xlog_write
__mark_inode_dirty kmem_free xfs_buf_item_format xfs_log_write
_xfs_trans_commit igrab
Average = 8762 usecs
Best regards,
--Edwin
* Re: Marking inode dirty latency > 1000 msec on XFS!
From: David Chinner @ 2008-02-23 0:06 UTC (permalink / raw)
To: Török Edwin; +Cc: lachlan, Arjan van de Ven, xfs, David Chinner
On Fri, Feb 22, 2008 at 12:20:23PM +0200, Török Edwin wrote:
> Török Edwin wrote:
> >> What would be useful here is the average latency. The average might
> >> actually be quite low; if the maximum was unusually large just once,
> >> looking at that figure alone can be misleading.
> >>
> >
> > I'll try to collect the raw numbers from /proc/latency_stats, that
> > contain a count, total time, and max time.
>
> I was not able to reproduce the 1 second latency with David Chinner's
> reduce xaild wakeups patch, the maximum latency I got was 685 msec.
.....
>
> <count> <sum> <maximum> <stacktrace>
> ----------------
> 2 47699 36897 xfs_buf_free default_wake_function xfs_buf_lock xfs_getsb
....
>
> Average = 478072 usecs
> -----------------
.....
> -----------
> 1 685021 685021 xfs_buf_free default_wake_function xfs_buf_lock
> xfs_getsb xfs_trans_getsb xfs_trans_apply_sb_deltas _xfs_trans_commit
......
Note the xfs_getsb() call in there - that's what the lazy-count option
avoids. That's waiting in the transaction subsystem to apply deltas
to the superblock, which is currently locked.
Converting to a lazy-count filesystem is experimental right now;
it may eat your data. If you still want to try, apply the patch
in this email and convert it:
http://oss.sgi.com/archives/xfs/2008-02/msg00295.html
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
* Re: Marking inode dirty latency > 1000 msec on XFS!
From: Török Edwin @ 2008-02-23 9:41 UTC (permalink / raw)
To: David Chinner; +Cc: lachlan, Arjan van de Ven, xfs
[-- Attachment #1: Type: text/plain, Size: 9392 bytes --]
David Chinner wrote:
> Note the xfs_getsb() call in there - that's what the lazy-count option
> avoids. That's waiting in the transaction subsystem to apply deltas
> to the superblock, which is currently locked.
>
After enabling lazy-count, the max latency is 223 msec and the average is 19 msec.
1 0 0 add_wait_queue __pollwait datagram_poll do_select __pollwait
default_wake_function __make_request __make_request __mark_inode_dirty
find_busiest_group __set_page_dirty mark_buffer_dirty
2 0 0 add_wait_queue __pollwait datagram_poll do_select __pollwait
default_wake_function read_hpet getnstimeofday ktime_get_ts ktime_get
__mark_inode_dirty __set_page_dirty
28 563051 223732 _xfs_buf_ioapply default_wake_function
xlog_state_get_iclog_space xlog_state_release_iclog xlog_write
xfs_log_write _xfs_trans_commit __mark_inode_dirty igrab xfs_create
xfs_vn_mknod security_inode_permission
1 7716 7716 xfs_buf_get_flags default_wake_function
xlog_state_get_iclog_space xlog_write __mark_inode_dirty kmem_free
xfs_log_write _xfs_trans_commit igrab xfs_create xfs_vn_mknod
security_inode_permission
My "testcase" to reproduce this is:
I have an SVN checkout from here:
http://llvm.org/svn/llvm-project/llvm-gcc-4.2/trunk
I do:
# echo 3 >/proc/sys/vm/drop_caches
$ svn up
$ svn up; svn up; svn up; svn up;
^^^^^the latency shows up here
svn is version 1.4.6
$ mount|grep sda6
/dev/sda6 on / type xfs (rw, noatime, nodiratime, logbufs=8, logbsize=256k)
> Converting to a lazy-count filesystem is experimental right now;
>
I got a build failure, and after I solved that, a segmentation fault.
Luckily it hadn't started modifying the partition when that happened; it
was during command-line argument parsing.
Once I fixed that, it went fine; some files got linked to lost+found,
that's all.
[See log at end of email.]
> http://oss.sgi.com/archives/xfs/2008-02/msg00295.html
>
Am I the first to test that patch?
Attached is the fixed patch I used (renamed 'sbp' to 'sb', and used &val
instead of *val for getsubopt).
Here's the output of xfs_repair and xfs_info (/dev/sda6 is my / for
Debian, so I ran xfs_repair from Fedora installed on the same laptop).
[root@localhost repair]# ./xfs_repair -c lazycount=1 /dev/sda6
Phase 1 - find and verify superblock...
- Enabling lazy-counters
writing modified primary superblock
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
error following ag 4 unlinked list
error following ag 9 unlinked list
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
b5eb2b90: Badness in key lookup (length)
bp=(bno 3666392, len 16384 bytes) key=(bno 3666392, len 8192 bytes)
- agno = 3
b54b1b90: Badness in key lookup (length)
bp=(bno 4884000, len 16384 bytes) key=(bno 4884000, len 8192 bytes)
b54b1b90: Badness in key lookup (length)
bp=(bno 4885232, len 16384 bytes) key=(bno 4885232, len 8192 bytes)
b54b1b90: Badness in key lookup (length)
bp=(bno 4892544, len 16384 bytes) key=(bno 4892544, len 8192 bytes)
b54b1b90: Badness in key lookup (length)
bp=(bno 4894976, len 16384 bytes) key=(bno 4894976, len 8192 bytes)
b54b1b90: Badness in key lookup (length)
bp=(bno 4897280, len 16384 bytes) key=(bno 4897280, len 8192 bytes)
b54b1b90: Badness in key lookup (length)
bp=(bno 4897328, len 16384 bytes) key=(bno 4897328, len 8192 bytes)
b54b1b90: Badness in key lookup (length)
bp=(bno 4897440, len 16384 bytes) key=(bno 4897440, len 8192 bytes)
b54b1b90: Badness in key lookup (length)
bp=(bno 4901504, len 16384 bytes) key=(bno 4901504, len 8192 bytes)
b54b1b90: Badness in key lookup (length)
bp=(bno 4913008, len 16384 bytes) key=(bno 4913008, len 8192 bytes)
- agno = 4
b28ffb90: Badness in key lookup (length)
bp=(bno 6155128, len 16384 bytes) key=(bno 6155128, len 8192 bytes)
- agno = 5
- agno = 6
- agno = 7
b28ffb90: Badness in key lookup (length)
bp=(bno 9788032, len 16384 bytes) key=(bno 9788032, len 8192 bytes)
- agno = 8
b54b1b90: Badness in key lookup (length)
bp=(bno 10988488, len 16384 bytes) key=(bno 10988488, len 8192 bytes)
- agno = 9
b54b1b90: Badness in key lookup (length)
bp=(bno 10988632, len 16384 bytes) key=(bno 10988632, len 8192 bytes)
b54b1b90: Badness in key lookup (length)
bp=(bno 10991688, len 16384 bytes) key=(bno 10991688, len 8192 bytes)
b54b1b90: Badness in key lookup (length)
bp=(bno 11004840, len 16384 bytes) key=(bno 11004840, len 8192 bytes)
b54b1b90: Badness in key lookup (length)
bp=(bno 11024648, len 16384 bytes) key=(bno 11024648, len 8192 bytes)
b54b1b90: Badness in key lookup (length)
bp=(bno 11163416, len 16384 bytes) key=(bno 11163416, len 8192 bytes)
- agno = 10
b28ffb90: Badness in key lookup (length)
bp=(bno 13436152, len 16384 bytes) key=(bno 13436152, len 8192 bytes)
- agno = 11
- agno = 12
- agno = 13
b3fffb90: Badness in key lookup (length)
bp=(bno 15895144, len 16384 bytes) key=(bno 15895144, len 8192 bytes)
b28ffb90: Badness in key lookup (length)
bp=(bno 17122320, len 16384 bytes) key=(bno 17122320, len 8192 bytes)
- agno = 14
- agno = 15
b54b1b90: Badness in key lookup (length)
bp=(bno 19270440, len 16384 bytes) key=(bno 19270440, len 8192 bytes)
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 1
- agno = 0
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
Phase 5 - rebuild AG headers and trees...
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
disconnected dir inode 12590082, moving to lost+found
disconnected inode 16777746, moving to lost+found
disconnected inode 16777747, moving to lost+found
disconnected inode 16780239, moving to lost+found
disconnected inode 16794857, moving to lost+found
disconnected inode 16794858, moving to lost+found
disconnected inode 16799733, moving to lost+found
disconnected inode 16799736, moving to lost+found
disconnected dir inode 16804337, moving to lost+found
disconnected inode 16804339, moving to lost+found
disconnected inode 16804430, moving to lost+found
disconnected inode 16804657, moving to lost+found
disconnected inode 16804660, moving to lost+found
disconnected inode 16812738, moving to lost+found
disconnected inode 16835764, moving to lost+found
disconnected inode 16835765, moving to lost+found
disconnected inode 16835766, moving to lost+found
disconnected inode 16835767, moving to lost+found
disconnected dir inode 21072429, moving to lost+found
disconnected inode 33595557, moving to lost+found
disconnected inode 37748903, moving to lost+found
disconnected inode 37748913, moving to lost+found
disconnected inode 37749200, moving to lost+found
disconnected inode 37749210, moving to lost+found
disconnected inode 37749214, moving to lost+found
disconnected inode 37755288, moving to lost+found
disconnected inode 37781598, moving to lost+found
disconnected inode 37781599, moving to lost+found
disconnected inode 37781600, moving to lost+found
disconnected inode 37781601, moving to lost+found
disconnected inode 37821186, moving to lost+found
disconnected inode 37821190, moving to lost+found
disconnected inode 38098758, moving to lost+found
disconnected inode 38098759, moving to lost+found
disconnected inode 38098760, moving to lost+found
disconnected inode 38098762, moving to lost+found
disconnected dir inode 46149094, moving to lost+found
disconnected dir inode 54571931, moving to lost+found
disconnected inode 54571932, moving to lost+found
disconnected inode 54571935, moving to lost+found
disconnected inode 54571936, moving to lost+found
disconnected inode 54571938, moving to lost+found
disconnected inode 58778689, moving to lost+found
disconnected inode 58778691, moving to lost+found
disconnected inode 58778692, moving to lost+found
disconnected inode 58778693, moving to lost+found
disconnected inode 64827392, moving to lost+found
Phase 7 - verify and correct link counts...
resetting inode 12590082 nlinks from 0 to 2
resetting inode 16804337 nlinks from 0 to 2
resetting inode 21072429 nlinks from 0 to 2
resetting inode 46149094 nlinks from 0 to 2
resetting inode 54571931 nlinks from 0 to 2
done
# xfs_info /dev/sda6
meta-data=/dev/sda6 isize=256 agcount=16, agsize=152617 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=2441872, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=65536 blocks=0, rtextents=0
Best regards,
--Edwin
[-- Attachment #2: xfs_repair_working.patch --]
[-- Type: text/x-diff, Size: 3421 bytes --]
Index: globals.h
===================================================================
RCS file: /cvs/xfs-cmds/xfsprogs/repair/globals.h,v
retrieving revision 1.21
diff -u -r1.21 globals.h
--- globals.h 16 Jul 2007 15:55:26 -0000 1.21
+++ globals.h 23 Feb 2008 09:09:40 -0000
@@ -116,6 +116,8 @@
EXTERN int log_spec; /* Log dev specified as option */
EXTERN char *rt_name; /* Name of realtime device */
EXTERN int rt_spec; /* Realtime dev specified as option */
+EXTERN int convert_lazy_count; /* Convert lazy-count mode on/off */
+EXTERN int lazy_count; /* What to set it to if converting */
/* misc status variables */
Index: phase1.c
===================================================================
RCS file: /cvs/xfs-cmds/xfsprogs/repair/phase1.c,v
retrieving revision 1.11
diff -u -r1.11 phase1.c
--- phase1.c 16 Jul 2007 15:53:42 -0000 1.11
+++ phase1.c 23 Feb 2008 09:09:40 -0000
@@ -91,6 +91,26 @@
primary_sb_modified = 1;
}
+ /*
+ * apply any version changes or conversions after the primary
+ * superblock has been verified or repaired
+ */
+ if (convert_lazy_count) {
+ if (lazy_count && !xfs_sb_version_haslazysbcount(sb)) {
+ sb->sb_versionnum |= XFS_SB_VERSION_MOREBITSBIT;
+ sb->sb_features2 |= XFS_SB_VERSION2_LAZYSBCOUNTBIT;
+ primary_sb_modified = 1;
+ do_log(_(" - Enabling lazy-counters\n"));
+ } else
+ if (!lazy_count && xfs_sb_version_haslazysbcount(sb)) {
+ sb->sb_features2 &= ~XFS_SB_VERSION2_LAZYSBCOUNTBIT;
+ do_log(_(" - Disabling lazy-counters\n"));
+ primary_sb_modified = 1;
+ } else
+ do_log(_(" - Lazy-counters are already %s\n"),
+ lazy_count ? _("enabled") : _("disabled"));
+ }
+
if (primary_sb_modified) {
if (!no_modify) {
do_warn(_("writing modified primary superblock\n"));
Index: xfs_repair.c
===================================================================
RCS file: /cvs/xfs-cmds/xfsprogs/repair/xfs_repair.c,v
retrieving revision 1.31
diff -u -r1.31 xfs_repair.c
--- xfs_repair.c 19 Jul 2007 16:04:12 -0000 1.31
+++ xfs_repair.c 23 Feb 2008 09:09:43 -0000
@@ -47,7 +47,7 @@
*/
/*
- * -o (user-supplied override options)
+ * -o: user-supplied override options
*/
char *o_opts[] = {
@@ -64,15 +64,25 @@
NULL
};
+/*
+ * -c: conversion options
+ */
+
+char *c_opts[] = {
+#define CONVERT_LAZY_COUNT 0
+ "lazycount",
+ NULL
+};
+
+
static int ihash_option_used;
static int bhash_option_used;
static void
usage(void)
{
- do_warn(
-_("Usage: %s [-nLvV] [-o subopt[=value]] [-l logdev] [-r rtdev] devname\n"),
- progname);
+ do_warn(_("Usage: %s [-nLPvV] [-c subopt=value] [-o subopt[=value]] "
+ "[-l logdev] [-r rtdev] devname\n"), progname);
exit(1);
}
@@ -191,7 +201,7 @@
* XXX have to add suboption processing here
* attributes, quotas, nlinks, aligned_inos, sb_fbits
*/
- while ((c = getopt(argc, argv, "o:fl:r:LnDvVdPMt:")) != EOF) {
+ while ((c = getopt(argc, argv, "c:o:fl:r:LnDvVdPMt:")) != EOF) {
switch (c) {
case 'D':
dumpcore = 1;
@@ -234,6 +244,22 @@
}
}
break;
+ case 'c':
+ p = optarg;
+ while (*p) {
+ char *val;
+
+ switch (getsubopt(&p, (constpp)c_opts, &val)) {
+ case CONVERT_LAZY_COUNT:
+ lazy_count = (int)strtol(val, 0, 0);
+ convert_lazy_count = 1;
+ break;
+ default:
+ unknown('c', val);
+ break;
+ }
+ }
+ break;
case 'l':
log_name = optarg;
log_spec = 1;