From: Vivek Goyal <vgoyal@redhat.com>
To: Tejun Heo <tj@kernel.org>
Cc: Zhao Shuai <zhaoshuai@freebsd.org>,
	axboe@kernel.dk, ctalbott@google.com, rni@google.com,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	containers@lists.linux-foundation.org
Subject: Re: performance drop after using blkcg
Date: Tue, 11 Dec 2012 09:43:36 -0500	[thread overview]
Message-ID: <20121211144336.GB5580@redhat.com> (raw)
In-Reply-To: <20121211142742.GE7084@htj.dyndns.org>

On Tue, Dec 11, 2012 at 06:27:42AM -0800, Tejun Heo wrote:
> On Tue, Dec 11, 2012 at 09:25:18AM -0500, Vivek Goyal wrote:
> > In general, do not use blkcg on faster storage. In current form it
> > is at best suitable for single rotational SATA/SAS disk. I have not
> > been able to figure out how to provide fairness without group idling.
> 
> I think cfq is just the wrong approach for faster non-rotational
> devices.  We should be allocating iops instead of time slices.

I think if one sets slice_idle=0 and group_idle=0 in CFQ, then for all
practical purposes it becomes IOPS-based group scheduling.
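
Just to sketch where those knobs live (the device name "sda" and the
pre-created "test" blkio cgroup below are only placeholders for
illustration, not something from this thread):

/* Sketch: flip CFQ into effectively IOPS-based group scheduling by
 * zeroing the idle tunables, then bump one group's proportional weight.
 * Needs root; paths assume CFQ is the active scheduler and cgroup v1
 * blkio is mounted at /sys/fs/cgroup/blkio.
 */
#include <stdio.h>

static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fprintf(f, "%s\n", val);
	fclose(f);
	return 0;
}

int main(void)
{
	/* Stop idling on both individual queues and groups. */
	write_str("/sys/block/sda/queue/iosched/slice_idle", "0");
	write_str("/sys/block/sda/queue/iosched/group_idle", "0");

	/* Give a pre-created test cgroup a higher proportional weight. */
	write_str("/sys/fs/cgroup/blkio/test/blkio.weight", "800");
	return 0;
}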

For group accounting, CFQ then uses the number of requests dispatched
from each cgroup and schedules groups based on that information.

I have not been able to figure out the practical benefits of that
approach, at least not for the simple workloads I played with. It will
not work for simple things like trying to improve dependent read
latencies in the presence of heavy writers, which is the single biggest
use case CFQ solves, IMO.

And that works because we hold back writes and don't let them go to the
device, so the device is primarily dealing with reads. If some process
is doing dependent reads and we want to improve its read latencies, then
either we need to stop the flow of writes, or the device has to be good
enough to always prioritize READs over WRITEs. If the device is that
good, we probably don't even need blkcg.
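
To make "dependent reads" concrete, think of something like the sketch
below, where each read's offset comes out of the data returned by the
previous read (the file name and sizes are made up for illustration).
Such a chain cannot be parallelized, so any extra per-read latency
caused by a competing heavy writer adds up directly in the total
runtime:

/* Dependent-read sketch: each pread()'s target offset is derived from
 * the block returned by the previous pread(), so the reads are strictly
 * serialized. O_DIRECT keeps the page cache out of the picture.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

#define BLK	4096
#define NBLKS	(1024 * 1024)	/* assume a 4 GiB test file */

int main(void)
{
	void *buf;
	off_t off = 0;
	int fd, i;

	fd = open("/mnt/test/datafile", O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (posix_memalign(&buf, BLK, BLK))
		return 1;

	for (i = 0; i < 10000; i++) {
		if (pread(fd, buf, BLK, off) != BLK)
			break;
		/* The next offset depends on the data we just read. */
		off = (off_t)(*(uint64_t *)buf % NBLKS) * BLK;
	}

	free(buf);
	close(fd);
	return 0;
}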

So yes, an iops-based approach is fine; it's just that the number of
cases where you will see any service differentiation should be
significantly smaller.

Thanks
Vivek


Thread overview: 15+ messages
     [not found] <CAFVn34SxqAJe_4P-WT8MOiG-kmKKD7ge96zoHXQuGqHWPgAt+A@mail.gmail.com>
2012-12-11  7:00 ` performance drop after using blkcg Zhao Shuai
2013-08-29  3:10   ` joeytao
2013-08-29  3:20   ` joeytao
2012-12-11 14:25 ` Vivek Goyal
2012-12-11 14:27   ` Tejun Heo
2012-12-11 14:43     ` Vivek Goyal [this message]
2012-12-11 14:47       ` Tejun Heo
2012-12-11 15:02         ` Vivek Goyal
2012-12-11 15:14           ` Tejun Heo
2012-12-11 15:37             ` Vivek Goyal
2012-12-11 16:01               ` Tejun Heo
2012-12-11 16:18                 ` Vivek Goyal
2012-12-11 16:27                   ` Tejun Heo
2012-12-12  7:29   ` Zhao Shuai
2012-12-16  4:38     ` Zhu Yanhai
