public inbox for linux-kernel@vger.kernel.org
From: Andrea Righi <righi.andrea@gmail.com>
To: Paul Menage <menage@google.com>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>,
	xen-devel@lists.xensource.com,
	containers@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, dm-devel@redhat.com,
	agk@sourceware.org
Subject: Re: Too many I/O controller patches
Date: Tue,  5 Aug 2008 11:27:52 +0200 (MEST)	[thread overview]
Message-ID: <48981D18.2070006@gmail.com> (raw)
In-Reply-To: <6599ad830808042255y59215481l5463d4dca9fb2001@mail.gmail.com>

Paul Menage wrote:
> On Mon, Aug 4, 2008 at 1:44 PM, Andrea Righi <righi.andrea@gmail.com> wrote:
>> A safer approach IMHO is to force the tasks to wait synchronously on
>> each operation that directly or indirectly generates i/o.
>>
>> In particular the solution used by the io-throttle controller to limit
>> the dirty-ratio in memory is to impose a sleep via
>> schedule_timeout_killable() in balance_dirty_pages() when a generic
>> process exceeds the limits defined for the belonging cgroup.
>>
>> Limiting read operations is a lot easier, because they're always
>> synchronous with i/o requests.
> 
> I think that you're conflating two issues:
> 
> - controlling how much dirty memory a cgroup can have at any given
> time (since dirty memory is much harder/slower to reclaim than clean
> memory)
> 
> - controlling how much effect a cgroup can have on a given I/O device.
> 
> By controlling the rate at which a task can generate dirty pages,
> you're not really limiting either of these. I think you'd have to set
> your I/O limits artificially low to prevent a case of a process
> writing a large data file and then doing fsync() on it, which would
> then hit the disk with the entire file at once, and blow away any QoS
> guarantees for other groups.

Anyway, the dirty-page ratio is directly proportional to the I/O that
will be performed on the real device, isn't it? This wouldn't prevent
I/O bursts, as you correctly say, but IMHO it is a simple and quite
effective way to measure the write activity of each cgroup on each
affected device.

To prevent the I/O peaks I usually reduce vm_dirty_ratio but, OK, that
is a workaround, not a solution to the problem either.

IMHO, based on the dirty-page rate measurement, we should apply both
limiting methods: throttle the dirty-page ratio to prevent too many
dirty pages in the system (hard to reclaim, and a source of
unpredictable/unresponsive behaviour), and throttle the dispatching of
I/O requests at the device-mapper/I/O-scheduler layer to smooth out
the I/O peaks/bursts generated by fsync() and similar scenarios.

A different approach could be to implement the measurement in the
elevator, looking at the time elapsed between when an I/O request is
issued to the drive and when it is served. That is, take the start
time T1 and the completion time T2, compute the difference (T2 - T1),
and say: cgroup C1 consumed (T2 - T1) of I/O time. Then use a
token-bucket policy to fill/drain each I/O cgroup's "credits" in terms
of I/O time slots. This would be a more precise measurement than
trying to predict how expensive an I/O operation will be by looking
only at the dirty-page ratio. Then throttle both the dirty-page ratio
*and* the dispatching of I/O requests submitted by any cgroup that
exceeds its limits.

> 
> As Dave suggested, I think it would make more sense to have your
> page-dirtying throttle points hook into the memory controller instead,
> and allow the memory controller to track/limit dirty pages for a
> cgroup, and potentially do throttling as part of that.
> 
> Paul

Yes, implementing page-dirty throttling in the memory controller seems
absolutely reasonable. I can try to move in this direction: merge the
page-dirty throttling into the memory controller and post an RFC.

Thanks,
-Andrea


Thread overview: 79+ messages
2008-08-04  8:51 [PATCH 0/7] I/O bandwidth controller and BIO tracking Ryo Tsuruta
2008-08-04  8:52 ` [PATCH 1/7] dm-ioband: Patch of device-mapper driver Ryo Tsuruta
2008-08-04  8:52   ` [PATCH 2/7] dm-ioband: Documentation of design overview, installation, command reference and examples Ryo Tsuruta
2008-08-04  8:57     ` [PATCH 3/7] bio-cgroup: Introduction Ryo Tsuruta
2008-08-04  8:57       ` [PATCH 4/7] bio-cgroup: Split the cgroup memory subsystem into two parts Ryo Tsuruta
2008-08-04  8:59         ` [PATCH 5/7] bio-cgroup: Remove a lot of ifdefs Ryo Tsuruta
2008-08-04  9:00           ` [PATCH 6/7] bio-cgroup: Implement the bio-cgroup Ryo Tsuruta
2008-08-04  9:01             ` [PATCH 7/7] bio-cgroup: Add a cgroup support to dm-ioband Ryo Tsuruta
2008-08-08  7:10             ` [PATCH 6/7] bio-cgroup: Implement the bio-cgroup Takuya Yoshikawa
2008-08-08  8:30               ` Ryo Tsuruta
2008-08-08  9:42                 ` Takuya Yoshikawa
2008-08-08 11:41                   ` Ryo Tsuruta
2008-08-05 10:25         ` [PATCH 4/7] bio-cgroup: Split the cgroup memory subsystem into two parts Andrea Righi
2008-08-05 10:35           ` Hirokazu Takahashi
2008-08-06  7:54         ` KAMEZAWA Hiroyuki
2008-08-06 11:43           ` Hirokazu Takahashi
2008-08-06 13:45             ` kamezawa.hiroyu
2008-08-07  7:25               ` Hirokazu Takahashi
2008-08-07  8:21                 ` KAMEZAWA Hiroyuki
2008-08-07  8:45                   ` Hirokazu Takahashi
2008-08-04 17:20 ` Too many I/O controller patches Dave Hansen
2008-08-04 18:22   ` Andrea Righi
2008-08-04 19:02     ` Dave Hansen
2008-08-04 20:44       ` Andrea Righi
2008-08-04 20:50         ` Dave Hansen
2008-08-05  6:28           ` Hirokazu Takahashi
2008-08-05  5:55         ` Paul Menage
2008-08-05  6:03           ` Balbir Singh
2008-08-05  9:27           ` Andrea Righi [this message]
2008-08-05 16:25           ` Dave Hansen
2008-08-05  6:16         ` Hirokazu Takahashi
2008-08-05  9:31           ` Andrea Righi
2008-08-05 10:01             ` Hirokazu Takahashi
2008-08-05  2:50     ` Satoshi UCHIDA
2008-08-05  9:28       ` Andrea Righi
2008-08-05 13:17         ` Ryo Tsuruta
2008-08-05 16:20         ` Dave Hansen
2008-08-06  2:44           ` KAMEZAWA Hiroyuki
2008-08-06  3:30             ` Balbir Singh
2008-08-06  6:48             ` Hirokazu Takahashi
2008-08-05 12:01       ` Hirokazu Takahashi
2008-08-04 18:34   ` Balbir Singh
2008-08-04 20:42     ` Andrea Righi
2008-08-06  1:13   ` RFC: I/O bandwidth controller (was Re: Too many I/O controller patches) Fernando Luis Vázquez Cao
2008-08-06  6:18     ` RFC: I/O bandwidth controller Ryo Tsuruta
2008-08-06  6:41       ` Fernando Luis Vázquez Cao
2008-08-06 15:48         ` Dave Hansen
2008-08-07  4:38           ` Fernando Luis Vázquez Cao
2008-08-06 16:42     ` RFC: I/O bandwidth controller (was Re: Too many I/O controller patches) Balbir Singh
2008-08-06 18:00       ` Dave Hansen
2008-08-07  2:44       ` Fernando Luis Vázquez Cao
2008-08-07  3:01       ` Fernando Luis Vázquez Cao
2008-08-08 11:39         ` RFC: I/O bandwidth controller Hirokazu Takahashi
2008-08-12  5:35           ` Fernando Luis Vázquez Cao
2008-08-06 19:37     ` RFC: I/O bandwidth controller (was Re: Too many I/O controller patches) Naveen Gupta
2008-08-07  8:30       ` RFC: I/O bandwidth controller Hirokazu Takahashi
2008-08-07 13:17       ` RFC: I/O bandwidth controller (was Re: Too many I/O controller patches) Fernando Luis Vázquez Cao
2008-08-11 18:18         ` Naveen Gupta
2008-08-11 16:35           ` David Collier-Brown
2008-08-07  7:46     ` Andrea Righi
2008-08-07 13:59       ` Fernando Luis Vázquez Cao
2008-08-11 20:52         ` Andrea Righi
     [not found]           ` <loom.20080812T071504-212@post.gmane.org>
2008-08-12 11:10             ` RFC: I/O bandwidth controller Hirokazu Takahashi
2008-08-12 12:55               ` Andrea Righi
2008-08-12 13:07                 ` Andrea Righi
2008-08-12 13:54                   ` Fernando Luis Vázquez Cao
2008-08-12 15:03                     ` James.Smart
2008-08-12 21:00                       ` Andrea Righi
2008-08-12 20:44                     ` Andrea Righi
2008-08-13  7:47                       ` Dong-Jae Kang
2008-08-13 17:56                         ` Andrea Righi
2008-08-14 11:18                 ` David Collier-Brown
2008-08-12 13:15               ` Fernando Luis Vázquez Cao
2008-08-13  6:23               ` 강동재
2008-08-08  6:21     ` Hirokazu Takahashi
2008-08-08  7:20       ` Ryo Tsuruta
2008-08-08  8:10         ` Fernando Luis Vázquez Cao
2008-08-08 10:05           ` Ryo Tsuruta
2008-08-08 14:31       ` Hirokazu Takahashi
