From: Mel Gorman <mgorman@suse.de>
To: Jeff Moyer <jmoyer@redhat.com>
Cc: Linux Kernel <linux-kernel@vger.kernel.org>,
	Linux-MM <linux-mm@kvack.org>,
	Linux-FSDevel <linux-fsdevel@vger.kernel.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Jens Axboe <axboe@kernel.dk>, Dave Chinner <david@fromorbit.com>
Subject: Re: [PATCH 6/6] cfq: Increase default value of target_latency
Date: Thu, 26 Jun 2014 18:45:00 +0100	[thread overview]
Message-ID: <20140626174500.GI10819@suse.de> (raw)
In-Reply-To: <x49simr4ntz.fsf@segfault.boston.devel.redhat.com>

On Thu, Jun 26, 2014 at 12:50:32PM -0400, Jeff Moyer wrote:
> Mel Gorman <mgorman@suse.de> writes:
> 
> > On Thu, Jun 26, 2014 at 11:36:50AM -0400, Jeff Moyer wrote:
> >> Right, and I guess I hadn't considered that case as I thought folks used
> >> more than one spinning disk for such workloads.
> >> 
> >
> > They probably are but by and large my IO testing is based on simple
> > storage. The reasoning is that if we get the simple case wrong then we
> > probably are getting the complex case wrong too or at least not performing
> > as well as we should. I also don't use SSD on my own machines for the
> > same reason.
> 
> A single disk is actually the hard case in this instance, but I
> understand what you're saying.  ;-)
> 
> >> My main reservation about this change is that you've only provided
> >> numbers for one benchmark. 
> >
> > The other obvious one to run would be pgbench workloads but it's a rathole of
> > arguing whether the configuration is valid and whether it's inappropriate
> > to test on simple storage. The tiobench tests alone take a long time to
> > complete -- 1.5 hours on a simple machine, 7 hours on a low-end NUMA machine.
> 
> And we should probably run our standard set of I/O exercisers at the
> very least.  But, like I said, it seems like wasted effort.
> 

Out of curiosity, what do you consider to be the standard set of I/O
exercisers? I have a whole battery of them that are run against major
releases to track performance over time. The main ones are:

- tiobench (it's stupid, but too many people use it)
- fsmark in various configurations (single/multi-threaded, zero-sized and
  large files)
- postmark (fairly small file sizes, working set 2xRAM)
- bonnie++ (2xRAM)
- ffsb in a mail server configuration (taken from the btrfs tests)
- dbench3 (checks in-memory updates, not a realistic IO benchmark)
- dbench4 (a bit more realistic, although it gets silly at high thread
  counts and overall it's not a stable predictor of performance)
- sysbench in various configurations
- pgbench in limited configurations
- stutter (tends to hit the worst-case interactivity issues seen on desktops)
- kernel builds

It takes days to churn through the full set, which is why I don't do it
for a patch series. I selected tiobench this time because it was the most
reliable test for covering both the single and multiple-sources-of-IO cases.
If I merge a major change I'll usually watch the next major release and
double check that nothing else broke.
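
For anyone wanting to reproduce a couple of those configurations by hand,
the invocations involved look roughly like the sketch below. Treat it as
illustrative only -- the mount point, sizes and thread counts are made up
for the example and the exact flags vary between versions of the tools, so
check the man pages before relying on it.

  #!/bin/sh
  # Illustrative sketch only: paths, sizes and thread counts are examples.
  SCRATCH=/mnt/scratch       # dedicated scratch filesystem assumed to exist

  # fsmark, multi-threaded with zero-sized files (metadata-heavy config)
  mkdir -p "$SCRATCH/fsmark"
  fs_mark -d "$SCRATCH/fsmark" -s 0 -n 10000 -t 8

  # bonnie++ with a working set of roughly 2xRAM (here assuming 8G of RAM)
  bonnie++ -d "$SCRATCH" -s 16384 -u root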

> >> To bump the default target_latency, ideally
> >> we'd know how it affects other workloads.  However, I'm having a hard
> >> time justifying putting any time into this for a couple of reasons:
> >> 1) blk-mq pretty much does away with the i/o scheduler, and that is the
> >>    future
> >> 2) there is work in progress to convert cfq into bfq, and that will
> >>    essentially make any effort put into this irrelevant (so it might be
> >>    interesting to test your workload with bfq)
> >> 
> >
> > Ok, you've convinced me and I'll drop this patch. For anyone based on
> > kernels from around this time they can tune CFQ or buy a better disk.
> > Hopefully they will find this via Google.
> 
> Funny, I wasn't weighing in against your patch.  I was merely indicating
> that I personally wasn't going to invest the time to validate it.  But,
> if you're ok with dropping it, that's obviously fine with me.
> 

I fear the writing is on the wall: a change like this will never pass the
"have you tested every workload" test, and no matter what, a counter-example
will be found where it's the wrong setting. If CFQ is going to be irrelevant
soon, it's just not worth wasting the electricity on it against a mainline
kernel.
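
For anyone who does land here via Google and wants the change without
carrying a patch: CFQ's latency target is already exposed through sysfs,
so it can be tuned at runtime. A minimal sketch, where the device name and
the value are examples (the file is in milliseconds and only exists while
cfq is the active scheduler for that device):

  # sda is an example device; 600 is twice the stock 300ms default.
  cat /sys/block/sda/queue/scheduler                      # check cfq is active
  echo 600 > /sys/block/sda/queue/iosched/target_latency  # raise latency target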

I'm still interested in what you consider your standard set of IO exercisers
though because I can slot any missing parts into the tests that run for
every mainline release.

The main one I'm missing is the postgres folks' fsync benchmark. I wrote
the automation months ago but never activated it because there are enough
known problems already.
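
If the benchmark in question is pg_test_fsync -- an assumption on my part,
it's the wal-sync-method microbenchmark that ships with PostgreSQL -- then
the invocation itself is trivial and the hard part really is just the
automation and result tracking:

  # Assumes the "postgres fsync benchmark" means pg_test_fsync (unverified).
  # -f puts the test file on the filesystem under test, -s is seconds per test.
  pg_test_fsync -f /mnt/scratch/pg_test_fsync.out -s 5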

Thanks.

-- 
Mel Gorman
SUSE Labs

