linux-ext4.vger.kernel.org archive mirror
* fsck memory usage
@ 2013-04-17 15:10 Subranshu Patel
  2013-04-17 23:07 ` Theodore Ts'o
  0 siblings, 1 reply; 6+ messages in thread
From: Subranshu Patel @ 2013-04-17 15:10 UTC (permalink / raw)
  To: linux-ext4

I performed some recovery (fsck) tests with large EXT4 filesystem. The
filesystem size was 500GB (3 million files, 5000 directories).
I performed a forced recovery on the clean filesystem and measured the
memory usage, which was around 2 GB.

Then I corrupted metadata with debugfs: 10% of the files, 10% of the
directories, and some superblock attributes. Running fsck again showed
a memory usage of around 8 GB, a much larger value.

1. Is there a way to reduce the memory usage (apart from the
scratch_files option, which increases the recovery time)?

2. This question is not strictly related to this EXT4 mailing list,
but in a real scenario, how is this kind of large memory usage handled
in large-scale filesystem deployments when actual filesystem
corruption occurs (perhaps due to a fault in the hardware or
controller)?
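For reference, the scratch_files option mentioned in question 1 is
enabled through e2fsck.conf(5). A minimal sketch (the directory path
here is only an example; any directory on a filesystem with free space
works):

```ini
; /etc/e2fsck.conf
[scratch_files]
; When set, e2fsck stores some of its large in-memory data structures
; (e.g. directory-entry tracking) in files under this directory instead
; of RAM, trading memory usage for extra I/O and longer runtime.
directory = /var/cache/e2fsck
```

The numdirs_threshold relation in the same stanza can restrict this
behavior to filesystems with many directories, so small filesystems
keep the faster in-memory path.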

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2013-05-06  1:27 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2013-04-17 15:10 fsck memory usage Subranshu Patel
2013-04-17 23:07 ` Theodore Ts'o
2013-04-18 18:34   ` Andreas Dilger
2013-05-01  2:42   ` Subranshu Patel
2013-05-01  4:09     ` Theodore Ts'o
2013-05-06  1:27     ` Andreas Dilger
