From: Vivek Goyal <vgoyal@redhat.com>
To: Jeff Moyer <jmoyer@redhat.com>
Cc: Corrado Zoccolo <czoccolo@gmail.com>,
	axboe@kernel.dk, Linux-Kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
Date: Tue, 13 Jul 2010 16:42:36 -0400
Message-ID: <20100713204236.GB21044@redhat.com>
In-Reply-To: <x49aapvp1k0.fsf@segfault.boston.devel.redhat.com>

On Tue, Jul 13, 2010 at 04:30:23PM -0400, Jeff Moyer wrote:
> Vivek Goyal <vgoyal@redhat.com> writes:
> 
> > On Tue, Jul 13, 2010 at 03:38:11PM -0400, Jeff Moyer wrote:
> >> Corrado Zoccolo <czoccolo@gmail.com> writes:
> >> 
> >> > Can you test the attached patch, where I also added your changes to
> >> > make jbd(2) to perform sync writes?
> >> 
> >> I got new storage, so I have new numbers.  I only re-ran deadline and
> >> vanilla cfq for the fs_mark-only test.  The averages of 10 runs, in
> >> files/sec as reported by fs_mark, come out like so:
> >> 
> >> deadline:    571.98
> >> vanilla cfq: 107.42
> >> patched cfq: 460.9
> >> 
> >> Mixed workload results with your suggested patch:
> >> 
> >> fs_mark: 15.65 files/sec
> >> fio: 132.5 MB/s
> >> 
> >> So, again, not looking great for the mixed workload, but the patch
> >> does improve the fs_mark only case.  Looking at the blktrace data shows
> >> that the jbd2 thread preempts the fs_mark thread at all the right
> >> times.  The only thing holding throughput back is the whole notion that
> >> we need to only dispatch from one queue (even though the storage is
> >> capable of serving both the reads and writes simultaneously).
> >> 
> >> I added in the patch that allows the simultaneous dispatch of both reads
> >> and writes, and here are the results from that run:
> >> 
> >> fs_mark: 15.975 files/sec
> >> fio: 132.4 MB/s
> >> 
> >> So, it looks like that didn't help.  The reason this patch doesn't come
> >> close to the yield patch in the mixed workload is that the yield patch
> >> set allows the fs_mark process to continue to issue I/O.  With
> >> your patch, the fs_mark process does 64KB of I/O, the jbd2 thread does
> >> the journal commit, and then the fio process runs again.  Given that the
> >> fs_mark process typically only uses a small fraction of its time slice,
> >> you end up with an unfair balance.
> >
> > Hi Jeff,
> >
> > This is a little strange.  Given that both the fs_mark and jbd threads
> > are now on the sync-noidle tree, we should have idled on that tree to
> > provide fairness, and that should have ensured that fs_mark/jbd get to
> > do more IO and the slice is not lost to the fio thread.
> >
> > I am not sure what is happening in practice, though.  Only you can look
> > at the traces more closely and see whether the timer is being armed.
> 
> Vivek, if you want to look at traces, just ask.  I'd be happy to show
> them to you, upload them, whatever.  I'm not sure why you think
> otherwise (though I wouldn't blame you for not wanting to look at
> them!).

I don't mind looking at the traces.  Do let me know where I can access them.
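
For anyone following along, the check we are talking about is roughly
the one cfq_should_idle() makes.  Here is a simplified sketch of the
2.6.35-era logic (not the exact source; the helper names are
abbreviated for illustration):

	/*
	 * Sketch of CFQ's idling decision (see cfq_should_idle() in
	 * block/cfq-iosched.c).  Simplified for illustration.
	 */
	static bool should_idle(struct cfq_data *cfqd, struct cfq_queue *cfqq)
	{
		/* Never idle for the idle I/O class. */
		if (cfqq_prio(cfqq) == IDLE_WORKLOAD)
			return false;

		/* Queues marked with the idle window flag idle per queue. */
		if (cfq_cfqq_idle_window(cfqq))
			return true;

		/*
		 * Sync-noidle queues share one service tree, so we idle on
		 * the tree as a whole: only when this queue is the last
		 * one left on it.
		 */
		return cfqq->service_tree->count == 1 && cfq_cfqq_sync(cfqq);
	}

When the timer actually gets armed, cfq_arm_slice_timer() logs an
"arm_idle" message via cfq_log_cfqq(), so grepping the blkparse output
for arm_idle (and idle_timer) should tell us quickly whether idling
kicked in after the commit.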

> 
> Now, to answer your question, the jbd2 thread runs and issues a barrier,
> which causes a forced dispatch of requests.  After that a new queue is
> selected, and since the fs_mark thread is blocked on the journal commit,
> it's always the fio process that gets to run.

Ok, that explains it.  So after the barrier, fio always wins: it issues
its next read request before fs_mark is able to issue its next set of
writes.
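
If I read the barrier path right, that outcome is more or less
guaranteed: the barrier drains CFQ through the forced dispatch path,
which expires the active queue without arming any idle timer.  A rough
sketch of that path (the helper names below are made up for
illustration; the real code is cfq_forced_dispatch() in
block/cfq-iosched.c):

	/*
	 * Rough sketch of the barrier drain path.  Every queued request
	 * is moved to the dispatch list and the active queue is expired
	 * with no idle window, so the next queue selection starts from
	 * scratch.
	 */
	static int forced_dispatch(struct cfq_data *cfqd)
	{
		struct cfq_queue *cfqq;
		int dispatched = 0;

		/* Expire the current active queue; no idling here. */
		expire_active_queue(cfqd, /* timed_out = */ 0);

		/* Drain every queue with pending requests. */
		while ((cfqq = next_queue_with_requests(cfqd)) != NULL)
			dispatched += dispatch_all_requests(cfqd, cfqq);

		return dispatched;
	}

By the time the drain finishes, fs_mark is blocked waiting on the
journal commit, so fio is the only queue with requests pending and wins
the re-selection every time.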

> 
> This, of course, raises the question of why the blk_yield patches didn't
> run into the same problem.  Looking back at some saved traces, I don't
> see WBS (write barrier sync) requests, so I wonder if barriers weren't
> supported by my last storage system.

I think the blk_yield patches would run into the same issue if barriers
were enabled, since the same forced dispatch and queue re-selection
would happen at every journal commit.

Thanks
Vivek
