public inbox for linux-btrfs@vger.kernel.org
* Suggestion: Anti-fragmentation safety catch (RFC)
@ 2014-03-24 19:47 Martin
  2014-03-24 20:19 ` Duncan
  0 siblings, 1 reply; 4+ messages in thread
From: Martin @ 2014-03-24 19:47 UTC (permalink / raw)
  To: linux-btrfs

Just an idea:


btrfs Problem:

I've had two systems die with huge load factors (>100!) in cases where a
user program had, unexpectedly to me, been doing 'database'-like
operations and caused multiple files to become heavily fragmented. The
system eventually dies when data can no longer be written to the
fragmented files as fast as the real-time collection produces it.

My example case is two systems, each running btrfs raid1 on two HDDs.
Normal write speed is about 100 MByte/s. After heavy fragmentation, the
CPUs sit at 100% I/O wait and throughput drops to a few hundred kByte/s.


Possible fix:

btrfs checks the ratio of file size to number of fragments and, for a
bad ratio, does one of:

1: Performs a non-cow copy to defragment the file;

2: Turns off cow for that file and gives a syslog warning for that;

3: Automatically defragments the file.
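The size-versus-fragment-count check above could be sketched roughly as
follows (Python, just to pin down the idea; the function name and the
256 KiB threshold are made up for illustration, not actual btrfs code):

```python
def badly_fragmented(size_bytes, extent_count, min_avg_extent_kib=256):
    """Return True if the file's average extent size falls below a
    threshold, i.e. the size-to-fragments ratio looks 'bad'."""
    if extent_count <= 1:
        return False  # a single extent can never be fragmented
    avg_extent_kib = size_bytes / extent_count / 1024
    return avg_extent_kib < min_avg_extent_kib

# A 1 GiB file in 8 extents averages 128 MiB per extent -- fine.
print(badly_fragmented(1 << 30, 8))        # False
# The same file shredded into 100000 extents averages ~10 KiB -- bad.
print(badly_fragmented(1 << 30, 100_000))  # True
```

Whatever threshold is chosen, it would presumably need to scale with
the workload; the point is only that the check itself is cheap.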



Or?


For my case, I'm not sure "2" is a good idea: if the user is rattling
through a gazillion files, the syslog gets swamped.

Unfortunately, I don't know beforehand which files to mark no-cow,
short of marking the entire user's/application's data no-cow.


Thoughts?


Thanks,
Martin




Thread overview: 4+ messages
2014-03-24 19:47 Suggestion: Anti-fragmentation safety catch (RFC) Martin
2014-03-24 20:19 ` Duncan
2014-03-25  0:57   ` Martin
2014-03-25 15:42     ` Duncan
