From: Ole Langbehn <ole@inoio.de>
To: linux-btrfs@vger.kernel.org
Subject: [4.4.1] btrfs-transacti frequent high CPU usage despite little fragmentation
Date: Wed, 16 Mar 2016 10:45:28 +0100
Message-ID: <56E92B38.10605@inoio.de>
Hi,
on my box, frequently (mostly while using Firefox), any process doing
disk IO freezes while btrfs-transacti spikes in CPU usage for more
than a minute.
I know about btrfs' fragmentation issue, but have a couple of questions:
* While btrfs-transacti is spiking, can I somehow trace which files
are the culprit?
* On my setup, with measured fragmentation, are the CPU spike durations
and freezes normal?
* Can I alleviate the situation by anything except defragmentation?
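To be concrete about the first question, something along these lines
is what I had in mind (rough sketch, untested; I don't know if the
btrfs ftrace tracepoints expose enough to identify files):

```shell
# Rough sketch: enable the btrfs ftrace tracepoints while a spike is
# happening, capture for a minute, then inspect the log. Run as root,
# with debugfs mounted at /sys/kernel/debug.
T=/sys/kernel/debug/tracing
echo 1 > "$T/events/btrfs/enable"                       # all btrfs tracepoints
timeout 60 cat "$T/trace_pipe" > /tmp/btrfs-trace.log   # capture during spike
echo 0 > "$T/events/btrfs/enable"
# Count transaction-related events; inode numbers seen in the log could
# then be resolved to paths with:
#   btrfs inspect-internal inode-resolve <inum> /
grep -c 'btrfs_transaction' /tmp/btrfs-trace.log
```

No idea whether that is the intended way to do it, hence the question.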
Any insight is appreciated.
Details:
I have a 1TB SSD with a large btrfs partition:
# btrfs filesystem usage /
Overall:
Device size: 915.32GiB
Device allocated: 915.02GiB
Device unallocated: 306.00MiB
Device missing: 0.00B
Used: 152.90GiB
Free (estimated): 751.96GiB (min: 751.96GiB)
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 512.00MiB (used: 0.00B)
Data,single: Size:901.01GiB, Used:149.35GiB
/dev/sda2 901.01GiB
Metadata,single: Size:14.01GiB, Used:3.55GiB
/dev/sda2 14.01GiB
System,single: Size:4.00MiB, Used:128.00KiB
/dev/sda2 4.00MiB
Unallocated:
/dev/sda2 306.00MiB
I've done the obvious and defragmented files. Some files went from
10k+ extents down to still more than 100. But the problem persisted,
or came back very quickly. Just now I re-ran defragmentation with the
following results (only showing files with more than 100 extents
before defragmentation):
extents before / extents after / anonymized path
103 / 1 /home/foo/.mozilla/firefox/foo.default/formhistory.sqlite:
133 / 1
/home/foo/.thunderbird/foo.default/ImapMail/imap.example.org/ml-btrfs:
155 / 1 /var/log/messages:
158 / 30 /home/foo/.thunderbird/foo.default/ImapMail/mail.example.org/INBOX:
160 / 32 /home/foo/.thunderbird/foo.default/calendar-data/cache.sqlite:
255 / 255 /var/lib/docker/devicemapper/devicemapper/data:
550 / 1 /home/foo/.cache/chromium/Default/Cache/data_1:
627 / 1 /home/foo/.cache/chromium/Default/Cache/data_2:
1738 / 25 /home/foo/.cache/chromium/Default/Cache/data_3:
1764 / 77 /home/foo/.mozilla/firefox/foo.default/places.sqlite:
4414 / 284 /home/foo/.digikam/thumbnails-digikam.db:
6576 / 3 /home/foo/.digikam/digikam4.db:
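For reference, the numbers above were gathered along these lines (a
sketch; the file list here is just a placeholder for the actual set):

```shell
# Measure extents before and after defragmenting each file.
# filefrag prints e.g. "/var/log/messages: 155 extents found",
# so the extent count is the third field from the end.
for f in /var/log/messages /home/foo/.mozilla/firefox/foo.default/places.sqlite
do
  before=$(filefrag "$f" | awk '{ print $(NF-2) }')
  btrfs filesystem defragment "$f"
  after=$(filefrag "$f" | awk '{ print $(NF-2) }')
  printf '%s / %s %s\n' "$before" "$after" "$f"
done
```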
So fragmentation came back quickly, and the firefox places.sqlite file
could explain why the system freezes while browsing.
BTW: I ran VACUUM on the sqlite db, and afterwards it had 1 extent.
That's expected; I'm just noting that vacuuming seems to be a good
measure for defragmenting sqlite databases.
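Concretely, the vacuum was along these lines (with Firefox closed,
since VACUUM rewrites the whole database file; the filefrag check
afterwards is just to confirm the extent count):

```shell
# Rewrite the database into a single contiguous file, then check extents.
sqlite3 /home/foo/.mozilla/firefox/foo.default/places.sqlite 'VACUUM;'
filefrag /home/foo/.mozilla/firefox/foo.default/places.sqlite
```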
I am using snapper and have about 40 snapshots going back for some
months. Those are read only. Could that have any effect?
Cheers,
Ole
Thread overview: 5+ messages
2016-03-16 9:45 Ole Langbehn [this message]
2016-03-17 10:51 ` [4.4.1] btrfs-transacti frequent high CPU usage despite little fragmentation Duncan
2016-03-18 9:33 ` Ole Langbehn
2016-03-18 23:06 ` Duncan
2016-03-19 20:31 ` Ole Langbehn