From: Sergei Trofimovich <slyich@gmail.com>
To: linux-btrfs@vger.kernel.org
Subject: btrfs hates lone HDDs on manycore systems
Date: Wed, 20 Jun 2012 23:56:52 +0300 [thread overview]
Message-ID: <20120620235652.606e2b43@sf.home> (raw)
Most of the btrfs I/O worker pools don't take the number of
disks they deal with into account:
fs/btrfs/disk-io.c: fs_info->thread_pool_size = min_t(unsigned long, num_online_cpus() + 2, 8);
That may not be a problem for write-only workloads,
but it is a serious one for mixed read/write workloads.
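To see where the thread counts below come from, here is a quick shell
sketch of that min_t arithmetic (pool_size is just an illustrative
helper, not anything from the kernel):

```shell
#!/bin/sh
# thread_pool_size = min(num_online_cpus() + 2, 8)
pool_size() {
    n=$(($1 + 2))
    if [ "$n" -lt 8 ]; then echo "$n"; else echo 8; fi
}
pool_size 2   # dual-core laptop -> 4 workers per pool
pool_size 4   # quad-core i5     -> 6 workers per pool
pool_size 16  # 16 cores         -> capped at 8
```

So even the smallest machine gets 4 workers per pool aimed at a single
spindle.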
Let's consider a simple setup:
- a dual-core (Core 2) laptop with a single HDD (5400 rpm)
- a slightly aged btrfs: ~50% of 200GB is filled
  and used as / (nothing special: stock mkfs.btrfs /dev/root)
- the kernel version does not matter much; any 3.3.0+ will do
  (3.5.0-rc3 runs here)
- tried both the anticipatory and deadline I/O schedulers;
  it does not seem to matter much
My "benchmark" [1] unpacks gcc and sanitizes its permissions. It
emulates the typical workload of a source-based package manager.
When [1] is run as 'sh torture.sh':
- first we see at least 4 'btrfs-endio-wri' threads hammering the
  disk with random reads and writes in the 'untar' phase
- then we see 4 'btrfs-delayed-m' threads with a similar effect
  in the 'chmod' phase
On a quad-core laptop (a typical i5) we would get 6(!) threads
chewing on the disk, so the behaviour is even worse: the system
becomes completely unusable. Right now I try to mitigate it with
the 'thread_pool' mount option, but that feels like a crude hack.
It would be nice to have the number of parallel readers comparable
to the number of underlying spinning devices.
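For completeness, the mitigation I use looks like this ('thread_pool='
is the btrfs mount option that overrides the pool size; the value 2 is
just what I picked for a single spindle, not a recommendation):

```shell
# shrink the btrfs worker pools by hand (value illustrative)
mount -o thread_pool=2 /dev/root /mnt
# or persistently in /etc/fstab:
#   /dev/root  /  btrfs  thread_pool=2  0 1
```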
Thanks for your patience!
[1] torture.sh:
#!/bin/sh
gcc_url=http://distfiles.gentoo.org/distfiles/gcc-4.7.0.tar.bz2
gcc_tarball=gcc-4.7.0.tar.bz2
gcc_dir=gcc-4.7.0
[ -f "$gcc_tarball" ] || wget "$gcc_url"
chatty() {
    echo "RUN: $*"
    /usr/bin/time "$@"
}
torture() {
    chatty rm -rf "${gcc_dir}"
    ### iotop pattern: scattered seeks/reads to death
    #25499 be/4 root 60.34 K/s 79.20 K/s 0.00 % 99.99 % [btrfs-endio-wri]
    #28151 be/4 root 56.57 K/s 60.34 K/s 0.00 % 99.73 % [btrfs-endio-wri]
    # 6181 be/4 root 56.57 K/s 71.66 K/s 0.00 % 96.96 % [flush-btrfs-1]
    #23881 be/4 root 52.80 K/s 52.80 K/s 0.00 % 93.70 % [btrfs-endio-wri]
    chatty tar -xjf "${gcc_tarball}"
    ### iotop pattern: scattered seeks/reads to death
    #29109 be/4 slyfox 870.05 K/s 0.00 B/s 0.00 % 97.05 % chmod -R 700 gcc-4.7.0/
    #28067 be/4 root 162.66 K/s 949.49 K/s 0.00 % 77.72 % [btrfs-delayed-m]
    #28164 be/4 root 22.70 K/s 215.62 K/s 0.00 % 14.23 % [btrfs-delayed-m]
    # 4690 be/4 root 0.00 B/s 15.13 K/s 0.00 % 0.00 % [btrfs-delayed-m]
    echo "now look at iotop"
    chatty chmod -R 700 "${gcc_dir}"/
    chatty sync
}
torture
torture
torture
torture
--
Sergei