linux-ext4.vger.kernel.org archive mirror
From: "Darrick J. Wong" <djwong@us.ibm.com>
To: "Theodore Ts'o" <tytso@mit.edu>
Cc: Ext4 Developers List <linux-ext4@vger.kernel.org>
Subject: Re: [PATCH v4 0/3] dioread_nolock patch
Date: Tue, 16 Feb 2010 13:07:28 -0800	[thread overview]
Message-ID: <20100216210728.GO29569@tux1.beaverton.ibm.com> (raw)
In-Reply-To: <1263583812-21355-1-git-send-email-tytso@mit.edu>

On Fri, Jan 15, 2010 at 02:30:09PM -0500, Theodore Ts'o wrote:

> The plan is to merge this for 2.6.34.  I've looked this over pretty
> carefully, but another pair of eyes would be appreciated, especially if

I don't have a high-speed disk, but it was suggested that I give this patchset a
whirl anyway, so down the rabbit hole I went.  I created a 16GB ext4 image in
an equally big tmpfs, then ran the read/readall directio tests in ffsb to see
if I could observe any difference.  The kernel is 2.6.33-rc8, and the machine
in question has 2 Xeon E5335 processors and 24GB of RAM.  I reran the test
several times, with varying thread counts, to produce the table below.  The
units are MB/s.

For the dio_lock case, mount options were: rw,relatime,barrier=1,data=ordered.
For the dio_nolock case, they were: rw,relatime,barrier=1,data=ordered,dioread_nolock.
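In case it helps anyone reproduce this, the setup can be sketched roughly as follows. The mount options are the ones quoted above; the paths, the tmpfs size, and the loop-mount approach are illustrative assumptions (the exact ffsb profile is not shown here):

```shell
# Sketch of the test setup: a 16GB ext4 image backed by tmpfs.
# Paths are hypothetical; requires root.
mount -t tmpfs -o size=17g tmpfs /mnt/ram
truncate -s 16g /mnt/ram/ext4.img        # sparse backing file
mkfs.ext4 -F /mnt/ram/ext4.img

# dio_lock run:
mount -o loop,rw,relatime,barrier=1,data=ordered \
      /mnt/ram/ext4.img /mnt/test

# dio_nolock run differs only in adding dioread_nolock:
# mount -o loop,rw,relatime,barrier=1,data=ordered,dioread_nolock \
#       /mnt/ram/ext4.img /mnt/test
```

Backing the image with tmpfs takes the physical disk out of the picture, so the numbers mostly reflect CPU and locking overhead rather than media speed.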

	dio_nolock	dio_lock
threads	read	readall	read	readall
1	37.6	149	39	159
2	59.2	245	62.4	246
4	114	453	112	445
8	111	444	115	459
16	109	442	113	448
32	114	443	121	484
64	106	422	108	434
128	104	417	101	393
256	101	412	90.5	366
512	93.3	377	84.8	349
1000	87.1	353	88.7	348

The old code paths seem faster at low thread counts, but the new patch pulls
ahead once the thread counts become very high.  That said, I'm not all that
familiar with what exactly tmpfs does, or how well it mimics an SSD (though I
wouldn't be surprised to hear "poorly").  This of course makes me wonder--do
other people see results like this, or is it particular to my harebrained
setup?

For that matter, do I need to have more patches than just 2.6.33-rc8 and the
four posted in this thread?

I also observed that I could make the kernel spit up "Process hung for more
than 120s!" messages if I happened to be running ffsb on a real disk during a
heavy directio write load.  I'll poke around on that a little more and write
back when I have more details.

For poweroff testing, could one simulate a power failure by running IO
workloads in a VM and then SIGKILLing the VM?  I don't remember seeing any sort
of powerfail test suite from the Googlers, but my mail client has been drinking
out of firehoses lately. ;)
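The SIGKILL-the-VM idea above could be sketched like this; the image name, memory size, and sleep interval are made-up placeholders, and cache=none is used so the host page cache doesn't quietly absorb writes the guest believed were on stable storage:

```shell
# Approximate a power failure: run the I/O workload inside a qemu
# guest, then kill the emulator without warning.
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=test.img,cache=none &
QEMU_PID=$!

sleep 300              # let the workload run inside the guest
kill -9 "$QEMU_PID"    # "pull the plug"

# Then inspect the damage from the host before booting again:
# fsck.ext4 -f test.img
```

This only simulates losing the guest's CPU and RAM; a real power cut also takes out the disk's write cache mid-flight, which kill -9 cannot model.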

--D


Thread overview: 23+ messages
2010-01-15 19:30 [PATCH v4 0/3] dioread_nolock patch Theodore Ts'o
2010-01-15 19:30 ` [PATCH v4 1/3] ext4: mechanical change on dio get_block code in prepare for it to be used by buffer write Theodore Ts'o
2010-01-17 14:36   ` Aneesh Kumar K. V
2010-01-17 16:19     ` Eric Sandeen
2010-01-17 16:42       ` Aneesh Kumar K. V
2010-01-18  3:57       ` tytso
2010-01-15 19:30 ` [PATCH v4 2/3] ext4: use ext4_get_block_write in " Theodore Ts'o
2010-01-16  2:17   ` tytso
2010-01-17 14:21   ` Aneesh Kumar K. V
2010-01-18  5:25     ` Jiaying Zhang
2010-01-15 19:30 ` [PATCH v4 3/3] ext4: Use direct_IO_no_locking in ext4 dio read Theodore Ts'o
2010-01-17 14:19   ` Aneesh Kumar K. V
2010-01-15 19:39 ` [PATCH v4 0/3] dioread_nolock patch Ric Wheeler
2010-01-15 19:52 ` Eric Sandeen
2010-01-15 20:15   ` tytso
2010-01-15 20:17     ` Eric Sandeen
2010-01-15 21:47       ` Michael Rubin
2010-01-22 20:47         ` Valerie Aurora
2010-02-20  0:56           ` Michael Rubin
2010-02-23  0:36             ` Andreas Dilger
2010-02-16 21:07 ` Darrick J. Wong [this message]
2010-02-17 19:34   ` Jiaying Zhang
2010-02-19 21:25     ` Darrick J. Wong
