From: Vivek Goyal <vgoyal@redhat.com>
To: Nauman Rafique <nauman@google.com>
Cc: linux-kernel@vger.kernel.org,
	containers@lists.linux-foundation.org, dm-devel@redhat.com,
	jens.axboe@oracle.com, dpshah@google.com, lizf@cn.fujitsu.com,
	mikew@google.com, fchecconi@gmail.com, paolo.valente@unimore.it,
	ryov@valinux.co.jp, fernando@oss.ntt.co.jp,
	s-uchida@ap.jp.nec.com, taka@valinux.co.jp,
	guijianfeng@cn.fujitsu.com, jmoyer@redhat.com,
	dhaval@linux.vnet.ibm.com, balbir@linux.vnet.ibm.com,
	righi.andrea@gmail.com, m-ikeda@ds.jp.nec.com, jbaron@redhat.com,
	agk@redhat.com, snitzer@redhat.com, akpm@linux-foundation.org,
	peterz@infradead.org
Subject: Re: [PATCH 13/25] io-controller: Wait for requests to complete from last queue before new queue is scheduled
Date: Thu, 2 Jul 2009 16:17:30 -0400
Message-ID: <20090702201730.GB3712@redhat.com>
In-Reply-To: <e98e18940907021309u1f784b3at409b55ba46ed108c@mail.gmail.com>

On Thu, Jul 02, 2009 at 01:09:14PM -0700, Nauman Rafique wrote:
> On Thu, Jul 2, 2009 at 1:01 PM, Vivek Goyal<vgoyal@redhat.com> wrote:
> > o Currently one can dispatch requests from multiple queues to the disk. This
> >  is true for hardware which supports queuing. So if a disk supports a queue
> >  depth of 31, it is possible that 20 requests are dispatched from queue 1
> >  and then the next queue is scheduled in, which dispatches more requests.
> >
> > o This multiple queue dispatch introduces issues for accurate accounting of
> >  disk time consumed by a particular queue. For example, if one async queue
> >  is scheduled in, it can dispatch 31 requests to the disk and then it will
> >  be expired and a new sync queue might get scheduled in. These 31 requests
> >  might take a long time to finish, but this time is never accounted to the
> >  async queue that dispatched them.
> >
> > o This patch introduces the functionality where we wait for all the requests
> >  to finish from the previous queue before the next queue is scheduled in.
> >  That way a queue is more accurately accounted for the disk time it has
> >  consumed. Note that this still does not take care of errors introduced by
> >  disk write caching.
> >
> > o Because the above behavior can result in reduced throughput, it is enabled
> >  only if the user sets the "fairness" tunable to 2 or higher.
> 
> Vivek,
> Did you collect any numbers for the impact on throughput from this
> patch? It seems like with this change, we can even support NCQ.
> 

Hi Nauman,

Not yet. I will try to do some impact analysis of this change and post the
results.

Thanks
Vivek

> >
> > o This patch helps in achieving more isolation between reads and buffered
> >  writes in different cgroups. Buffered writes typically utilize the full
> >  queue depth and then expire the queue. On the contrary, sequential reads
> >  typically drive a queue depth of 1. So despite the fact that writes use
> >  more disk time, that time is never accounted to the write queue because we
> >  don't wait for requests to finish after dispatching them. This patch enables
> >  more accurate accounting of disk time, especially for buffered writes,
> >  hence providing better fairness and better isolation between two cgroups
> >  running read and write workloads.
> >
> > Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
> > ---
> >  block/elevator-fq.c |   31 ++++++++++++++++++++++++++++++-
> >  1 files changed, 30 insertions(+), 1 deletions(-)
> >
> > diff --git a/block/elevator-fq.c b/block/elevator-fq.c
> > index 68be1dc..7609579 100644
> > --- a/block/elevator-fq.c
> > +++ b/block/elevator-fq.c
> > @@ -2038,7 +2038,7 @@ STORE_FUNCTION(elv_slice_sync_store, &efqd->elv_slice[1], 1, UINT_MAX, 1);
> >  EXPORT_SYMBOL(elv_slice_sync_store);
> >  STORE_FUNCTION(elv_slice_async_store, &efqd->elv_slice[0], 1, UINT_MAX, 1);
> >  EXPORT_SYMBOL(elv_slice_async_store);
> > -STORE_FUNCTION(elv_fairness_store, &efqd->fairness, 0, 1, 0);
> > +STORE_FUNCTION(elv_fairness_store, &efqd->fairness, 0, 2, 0);
> >  EXPORT_SYMBOL(elv_fairness_store);
> >  #undef STORE_FUNCTION
> >
> > @@ -2952,6 +2952,24 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
> >        }
> >
> >  expire:
> > +       if (efqd->fairness >= 2 && !force && ioq && ioq->dispatched) {
> > +               /*
> > +                * If there are requests dispatched from this queue, don't
> > +                * dispatch requests from new queue till all the requests from
> > +                * this queue have completed.
> > +                *
> > +                * This helps in attributing right amount of disk time consumed
> > +                * by a particular queue when hardware allows queuing.
> > +                *
> > +                * Set ioq = NULL so that no more requests are dispatched from
> > +                * this queue.
> > +                */
> > +               elv_log_ioq(efqd, ioq, "select: wait for requests to finish"
> > +                               " disp=%lu", ioq->dispatched);
> > +               ioq = NULL;
> > +               goto keep_queue;
> > +       }
> > +
> >        elv_ioq_slice_expired(q);
> >  new_queue:
> >        ioq = elv_set_active_ioq(q, new_ioq);
> > @@ -3109,6 +3127,17 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
> >                                 */
> >                                elv_ioq_arm_slice_timer(q, 1);
> >                        } else {
> > +                               /* If fairness >=2 and there are requests
> > +                                * dispatched from this queue, don't dispatch
> > +                                * new requests from a different queue till
> > +                                * all requests from this queue have finished.
> > +                                * This helps in attributing right disk time
> > +                                * to a queue when hardware supports queuing.
> > +                                */
> > +
> > +                               if (efqd->fairness >= 2 && ioq->dispatched)
> > +                                       goto done;
> > +
> >                                /* Expire the queue */
> >                                elv_ioq_slice_expired(q);
> >                        }
> > --
> > 1.6.0.6
> >
> >
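
For illustration, the check added in the expire: path of elv_fq_select_ioq()
boils down to a small predicate. Below is a minimal standalone C sketch
(ordinary userspace code, not the kernel patch; the struct and helper names
are invented for illustration): with "fairness" at 2 or higher, no new queue
is scheduled while the old queue still has requests in flight, so completion
time is charged to the queue that issued them.

/*
 * Userspace sketch of the gating decision described above.
 * Names below are illustrative, not taken from the patch.
 */
#include <stdbool.h>
#include <stdio.h>

struct ioq_state {
	unsigned long dispatched;	/* requests sent to disk, not yet completed */
};

/* Keep the current queue nominally active (dispatch nothing new) until it drains. */
static bool wait_for_drain(int fairness, bool force, const struct ioq_state *ioq)
{
	return fairness >= 2 && !force && ioq && ioq->dispatched > 0;
}

int main(void)
{
	struct ioq_state async_q = { .dispatched = 31 };

	/* fairness=1: expire immediately; the 31 in-flight writes are never billed */
	printf("fairness=1 -> wait=%d\n", wait_for_drain(1, false, &async_q));
	/* fairness=2: hold off scheduling a new queue until the drain completes */
	printf("fairness=2 -> wait=%d\n", wait_for_drain(2, false, &async_q));
	return 0;
}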
