From: Tao Ma <tm@tao.ma>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: Tejun Heo <tj@kernel.org>,
axboe@kernel.dk, ctalbott@google.com, rni@google.com,
linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
containers@lists.linux-foundation.org,
Shaohua Li <shli@kernel.org>
Subject: Re: IOPS based scheduler (Was: Re: [PATCH 18/21] blkcg: move blkio_group_conf->weight to cfq)
Date: Wed, 04 Apr 2012 01:26:06 +0800
Message-ID: <4F7B32AE.7050900@tao.ma>
In-Reply-To: <20120403164959.GJ5913@redhat.com>
On 04/04/2012 12:50 AM, Vivek Goyal wrote:
> On Wed, Apr 04, 2012 at 12:36:24AM +0800, Tao Ma wrote:
>
> [..]
>>> - Can't we just set the slice_idle=0 and "quantum" to some high value
>>> say "64" or "128" and achieve similar results to iops based scheduler?
>> Yes, I should say cfq with slice_idle = 0 works well in most cases. But
>> when it comes to blkcg on an ssd, it is really a disaster. You know, cfq
>> has to choose between different cgroups, so even if you choose 1ms as
>> the service time for each cgroup (actually in my test, only >2ms works
>> reliably), the latency for some requests (which have been sent by the
>> user but not yet submitted to the driver) is really too much for the
>> application. I don't think there is a way to resolve this in cfq.
>
> Ok, so now you are saying that CFQ as such is not a problem but blkcg
> logic in CFQ is an issue.
>
> What's the issue there? I think the issue there also is group idling.
> If you set group_idle=0, that idling will be cut down and switching
> between groups will be fast. It's a different matter that in the
> process you will most likely also lose service differentiation for
> most workloads.
No, group_idle=0 doesn't help. We don't have a problem with idling; the
disk is kept busy by all the tasks. We just want the service to be
proportional and the latency to be bounded.
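For reference, the cfq tunables mentioned in this thread (slice_idle,
quantum, group_idle) are per-device sysfs knobs; a sketch, with the
device name sda purely illustrative:

```shell
# CFQ tunables live under the device's iosched directory (sda is an example)
echo 0  > /sys/block/sda/queue/iosched/slice_idle   # stop idling between queues
echo 64 > /sys/block/sda/queue/iosched/quantum      # allow a deeper dispatch quantum
echo 0  > /sys/block/sda/queue/iosched/group_idle   # stop idling between groups
```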
>
>>
>>>
>>> In theory, the above will cut down on idling and try to provide fairness
>>> in terms of time. I thought fairness in terms of time is the most fair. The
>>> most common problem is that the measurement of time is not attributable to
>>> an individual queue on NCQ hardware. I guess that throws time measurement
>>> out of the window unless and until we have a better algorithm to measure
>>> time in an NCQ environment.
>>>
>>> I guess then we can just replace time with number of requests dispatched
>>> from a process queue. Allow it to dispatch requests for some time and
>>> then schedule it out and put it back on service tree and charge it
>>> according to its weight.
>> As I have said, in this case, the minimal time (1ms) multiplied by the
>> group number is too much for an ssd.
>>
>> If we can use an iops based scheduler, we can use iops_weight for
>> different cgroups and switch cgroups according to this number. So all
>> the applications can have a moderate response time that can be estimated.
>
> How are iops_weight and this switching different from CFQ's group
> scheduling logic? I think Shaohua was talking of using similar logic.
> What would you do fundamentally differently so that without idling you
> get service differentiation?
I am thinking of differentiating groups by iops. So if there are 3
groups (with weights 100, 200 and 300), we can let them submit 1 io,
2 ios and 3 ios in a round-robin way. With an Intel ssd, every io can be
finished within 100us, so the maximum latency for one io is about 600us,
still less than 1ms. But with cfq, if all the cgroups are busy, we have
to switch between these groups at millisecond granularity, which means the
maximum latency will be 6ms. That is terrible for some applications now
that they run on ssds.
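The round-robin scheme above can be sketched as a toy model (not the
actual patch; the group names and the 100us per-io service time are
assumptions taken from the numbers in this mail):

```python
# Toy model of iops-weighted round-robin dispatch across cgroups.
# Weights 100/200/300 map to 1/2/3 ios per cycle; with ~100us per io
# on the ssd, one full cycle is bounded by (1+2+3) * 100us = 600us.

IO_TIME_US = 100  # assumed per-io service time on the ssd


def dispatch_cycle(groups):
    """One round-robin pass: each group submits weight/100 ios."""
    order = []
    for name, weight in groups:
        order.extend([name] * (weight // 100))
    return order


groups = [("g1", 100), ("g2", 200), ("g3", 300)]
cycle = dispatch_cycle(groups)
max_latency_us = len(cycle) * IO_TIME_US  # worst case: wait out a whole cycle

print(cycle)           # ['g1', 'g2', 'g2', 'g3', 'g3', 'g3']
print(max_latency_us)  # 600
```

In this model no group waits longer than one cycle, so the latency bound
scales with the total weight divided by the per-io service rate rather
than with the number of groups times a millisecond time slice.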
>
> If you explain your logic in detail, it will help.
>
> BTW, in your last mail you mentioned that in iops_mode() we make use of
> time. That's not the case. In iops_mode() we charge the group based on
> the number of requests dispatched (slice_dispatch records the number of
> requests dispatched from the queue in that slice). So to me, counting
> the number of requests instead of time will effectively switch CFQ to
> an iops based scheduler, won't it?
Yes, iops_mode in cfq charges in iops, but the queue is still switched
according to the time slice, right? So it can't resolve the problem I
mentioned above.
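The distinction can be made concrete with a back-of-the-envelope model
(the 1ms slice, 100us per io and 3 busy groups are assumptions drawn
from the numbers quoted earlier in this thread):

```python
# Charging unit vs switching granularity: even if a slice is *charged*
# in ios (iops_mode), CFQ still runs one group for a whole time slice
# before switching, so another busy group can wait roughly
# (ngroups - 1) * slice before it is served again.

SLICE_US = 1000   # assumed minimum usable time slice (1ms)
IO_US = 100       # assumed per-io service time on the ssd
NGROUPS = 3       # number of busy cgroups

wait_slice_based = (NGROUPS - 1) * SLICE_US  # switch per time slice
wait_iops_rr = (NGROUPS - 1) * IO_US         # switch per io

print(wait_slice_based, wait_iops_rr)  # 2000 200
```

So changing the charging unit alone leaves the worst-case wait tied to
the time slice; only switching per io (or per small io batch) brings the
latency down to the per-io scale.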
Thanks
Tao
> Thanks
> Vivek