linux-ext4.vger.kernel.org archive mirror
From: Jens Axboe <jens.axboe@oracle.com>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: Jeff Moyer <jmoyer@redhat.com>,
	linux-kernel@vger.kernel.org, linux-ext4@vger.kernel.org
Subject: Re: [PATCH 2/4] block: Implement a blk_yield function to voluntarily give up the I/O scheduler.
Date: Thu, 15 Apr 2010 12:33:23 +0200	[thread overview]
Message-ID: <20100415103323.GW27497@kernel.dk> (raw)
In-Reply-To: <20100414214654.GD3167@redhat.com>

On Wed, Apr 14 2010, Vivek Goyal wrote:
> On Wed, Apr 14, 2010 at 05:17:04PM -0400, Jeff Moyer wrote:
> > This patch implements blk_yield(), which allows a process to
> > voluntarily give up its I/O scheduler time slice.  This is desirable
> > for processes that know they will block on I/O from another process,
> > such as the file system journal thread.  Following patches add calls
> > to blk_yield() into jbd and jbd2.
> > 
> > Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
> > ---
> >  block/blk-core.c         |    6 ++++
> >  block/cfq-iosched.c      |   70 ++++++++++++++++++++++++++++++++++++++++++++++
> >  block/elevator.c         |    8 +++++
> >  include/linux/blkdev.h   |    1 +
> >  include/linux/elevator.h |    3 ++
> >  5 files changed, 88 insertions(+), 0 deletions(-)
> > 
> > diff --git a/block/blk-core.c b/block/blk-core.c
> > index 9fe174d..3e4e98c 100644
> > --- a/block/blk-core.c
> > +++ b/block/blk-core.c
> > @@ -323,6 +323,12 @@ void blk_unplug(struct request_queue *q)
> >  }
> >  EXPORT_SYMBOL(blk_unplug);
> >  
> > +void blk_yield(struct request_queue *q)
> > +{
> > +	elv_yield(q);
> > +}
> > +EXPORT_SYMBOL(blk_yield);
> > +
> >  /**
> >   * blk_start_queue - restart a previously stopped queue
> >   * @q:    The &struct request_queue in question
> > diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
> > index ef59ab3..8a300ab 100644
> > --- a/block/cfq-iosched.c
> > +++ b/block/cfq-iosched.c
> > @@ -292,6 +292,7 @@ struct cfq_data {
> >  };
> >  
> >  static struct cfq_group *cfq_get_next_cfqg(struct cfq_data *cfqd);
> > +static void cfq_yield_cfqq(struct cfq_data *cfqd, struct cfq_queue *cfqq);
> >  
> >  static struct cfq_rb_root *service_tree_for(struct cfq_group *cfqg,
> >  					    enum wl_prio_t prio,
> > @@ -320,6 +321,7 @@ enum cfqq_state_flags {
> >  	CFQ_CFQQ_FLAG_split_coop,	/* shared cfqq will be splitted */
> >  	CFQ_CFQQ_FLAG_deep,		/* sync cfqq experienced large depth */
> >  	CFQ_CFQQ_FLAG_wait_busy,	/* Waiting for next request */
> > +	CFQ_CFQQ_FLAG_yield,		/* Allow another cfqq to run */
> >  };
> >  
> >  #define CFQ_CFQQ_FNS(name)						\
> > @@ -349,6 +351,7 @@ CFQ_CFQQ_FNS(coop);
> >  CFQ_CFQQ_FNS(split_coop);
> >  CFQ_CFQQ_FNS(deep);
> >  CFQ_CFQQ_FNS(wait_busy);
> > +CFQ_CFQQ_FNS(yield);
> >  #undef CFQ_CFQQ_FNS
> >  
> >  #ifdef CONFIG_DEBUG_CFQ_IOSCHED
> > @@ -1566,6 +1569,7 @@ __cfq_slice_expired(struct cfq_data *cfqd, struct cfq_queue *cfqq,
> >  
> >  	cfq_clear_cfqq_wait_request(cfqq);
> >  	cfq_clear_cfqq_wait_busy(cfqq);
> > +	cfq_clear_cfqq_yield(cfqq);
> >  
> >  	/*
> >  	 * If this cfqq is shared between multiple processes, check to
> > @@ -1887,6 +1891,9 @@ static void cfq_dispatch_insert(struct request_queue *q, struct request *rq)
> >  
> >  	cfqd->rq_in_flight[cfq_cfqq_sync(cfqq)]++;
> >  	cfqq->nr_sectors += blk_rq_sectors(rq);
> > +
> > +	if (cfq_cfqq_yield(cfqq) && RB_EMPTY_ROOT(&cfqq->sort_list))
> > +		cfq_yield_cfqq(cfqd, cfqq);
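[The flag mechanism in the quoted hunks can be modeled in plain userspace C. This is a simplified sketch for illustration only: the struct and function names below are stand-ins, not the kernel's types, and `pending` stands in for the requests on the cfqq's sort_list.]

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for struct cfq_queue: just the state the patch touches. */
struct cfqq_model {
	bool yield;	/* CFQ_CFQQ_FLAG_yield: slice voluntarily given up */
	int  pending;	/* requests still queued (the sort_list)           */
	bool expired;	/* slice has been expired                          */
};

/* blk_yield() -> elv_yield(): mark the active queue as yielding. */
static void yield_queue(struct cfqq_model *q)
{
	q->yield = true;
}

/* cfq_dispatch_insert() in the patch: dispatch one request, and once
 * the yielding queue has nothing left queued, expire its slice so
 * another queue can run. */
static void dispatch_one(struct cfqq_model *q)
{
	if (q->pending > 0)
		q->pending--;
	if (q->yield && q->pending == 0)
		q->expired = true;
}
```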
> 
> Jeff,
> 
> I am wondering whether cfq_select_queue() would be a better place for
> yielding the queue.
> 
> 	if (cfq_cfqq_yield(cfqq) && RB_EMPTY_ROOT(&cfqq->sort_list))
> 			goto expire;
> 
> We can avoid one unnecessary __blk_run_queue().

Agreed, insert time is not the right place to do it.
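[The check Vivek suggests can be sketched the same way in userspace C. Hypothetical model only: the struct is a stand-in for struct cfq_queue, and `pending == 0` stands in for RB_EMPTY_ROOT(&cfqq->sort_list). The point is that the yield test happens while choosing the queue, so expiry and reselection share one pass instead of requiring an extra __blk_run_queue() after dispatch.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for struct cfq_queue: yield flag plus a count of queued
 * requests (the sort_list in the real code). */
struct cfqq_stub { bool yield; int pending; };

/* Select-time check: a queue that asked to yield and has nothing left
 * queued is passed over in favor of the next queue right here, with
 * no extra run of the queue needed to reselect. */
static struct cfqq_stub *select_queue(struct cfqq_stub *active,
				      struct cfqq_stub *next)
{
	if (active && active->yield && active->pending == 0)
		return next;	/* models the "goto expire" path */
	return active;
}
```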

-- 
Jens Axboe



Thread overview: 20+ messages
2010-04-14 21:17 [PATCH 0/4 v3] ext3/4: enhance fsync performance when using CFQ Jeff Moyer
2010-04-14 21:17 ` [PATCH 1/4] cfq-iosched: Keep track of average think time for the sync-noidle workload Jeff Moyer
2010-04-14 21:37   ` Vivek Goyal
2010-04-14 23:06     ` Jeff Moyer
2010-04-14 21:17 ` [PATCH 2/4] block: Implement a blk_yield function to voluntarily give up the I/O scheduler Jeff Moyer
2010-04-14 21:46   ` Vivek Goyal
2010-04-15 10:33     ` Jens Axboe [this message]
2010-04-15 15:49       ` Jeff Moyer
2010-04-14 21:17 ` [PATCH 3/4] jbd: yield the device queue when waiting for commits Jeff Moyer
2010-04-14 21:17 ` [PATCH 4/4] jbd2: yield the device queue when waiting for journal commits Jeff Moyer
2010-04-15 10:33   ` Jens Axboe
2010-04-15 10:33 ` [PATCH 0/4 v3] ext3/4: enhance fsync performance when using CFQ Jens Axboe
2010-04-15 13:05   ` Jeff Moyer
2010-04-15 13:08     ` Jens Axboe
2010-04-15 13:13       ` Jeff Moyer
2010-04-15 14:03         ` Jens Axboe
  -- strict thread matches above, loose matches on Subject: below --
2010-05-18 18:20 [PATCH 0/4 v4] " Jeff Moyer
2010-05-18 18:20 ` [PATCH 2/4] block: Implement a blk_yield function to voluntarily give up the I/O scheduler Jeff Moyer
2010-05-18 21:07   ` Vivek Goyal
2010-05-18 21:44   ` Vivek Goyal
2010-06-01 20:01     ` Jeff Moyer
