From: Tejun Heo <tj@kernel.org>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: Zhao Shuai <zhaoshuai@freebsd.org>,
	axboe@kernel.dk, ctalbott@google.com, rni@google.com,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	containers@lists.linux-foundation.org
Subject: Re: performance drop after using blkcg
Date: Tue, 11 Dec 2012 06:47:18 -0800
Message-ID: <20121211144718.GF7084@htj.dyndns.org>
In-Reply-To: <20121211144336.GB5580@redhat.com>

Hello,

On Tue, Dec 11, 2012 at 09:43:36AM -0500, Vivek Goyal wrote:
> I think if one sets slice_idle=0 and group_idle=0 in CFQ, for all practical
> purposes it should become an IOPS-based group scheduler.

No, I don't think it is.  You can't achieve isolation without idling
between group switches.  We're measuring slices in terms of iops but
what cfq actually schedules are still time slices, not IOs.
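
For reference, the knobs in question are the CFQ iosched tunables under
sysfs.  A minimal sketch of the configuration being described, assuming
CFQ is the active scheduler on a hypothetical device "sdX" (Python used
here only for illustration):

# Disable per-queue and per-group idling on one device.  Paths are the
# standard /sys/block/<dev>/queue/iosched knobs; "sdX" is a placeholder.
from pathlib import Path

IOSCHED = Path("/sys/block/sdX/queue/iosched")

def set_knob(name: str, value: int) -> None:
    """Write one CFQ tunable via sysfs (needs root)."""
    (IOSCHED / name).write_text(f"{value}\n")

set_knob("slice_idle", 0)
set_knob("group_idle", 0)

Even with both at zero, the unit cfq hands out to a group is still a
time slice; the iops numbers only feed the accounting.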

> For group accounting, CFQ then uses the number of requests from each cgroup
> and uses that information to schedule groups.
> 
> I have not been able to figure out the practical benefits of that
> approach, at least not for the simple workloads I played with. This
> approach will not work for simple things like trying to improve dependent
> read latencies in the presence of heavy writers. That's the single biggest
> use case CFQ solves, IMO.

As I wrote above, it's not about accounting.  It's about the scheduling
unit.

> And that happens because we stop writes and don't let them go to the device,
> so the device is primarily dealing with reads. If some process is doing
> dependent reads and we want to improve read latencies, then either
> we need to stop the flow of writes, or the devices are good and they
> always prioritize READs over WRITEs. If devices are good then we probably
> don't even need blkcg.
> 
> So yes, an iops-based approach is fine, it's just that the number of cases
> where you will see any service differentiation should be significantly less.

No, using iops to schedule time slices would lead to that.  We just
need to be allocating and scheduling iops, and I don't think we should
be doing that from cfq.
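
If someone wants to see the differentiation (or lack of it) for
themselves, a rough sketch of the usual weight-based setup follows,
assuming a cgroup-v1 blkio hierarchy mounted at /sys/fs/cgroup/blkio;
the group names and weights are made up for illustration:

# Create two blkio cgroups with different proportional weights and move
# existing workloads into them.  Assumes cgroup v1 with the CFQ-backed
# blkio controller mounted at /sys/fs/cgroup/blkio.
from pathlib import Path

BLKIO = Path("/sys/fs/cgroup/blkio")

def make_group(name: str, weight: int) -> Path:
    """Create a blkio cgroup and set its proportional weight."""
    grp = BLKIO / name
    grp.mkdir(exist_ok=True)
    (grp / "blkio.weight").write_text(f"{weight}\n")
    return grp

def attach(grp: Path, pid: int) -> None:
    """Move an existing process (e.g. a dd or fio worker) into the group."""
    (grp / "tasks").write_text(f"{pid}\n")

writer = make_group("heavy_writer", 100)   # hypothetical names/weights
reader = make_group("dep_reader", 1000)
# attach(writer, <pid of the streaming-write workload>)
# attach(reader, <pid of the dependent-read workload>)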

Thanks.

-- 
tejun
