From: Vivek Goyal <vgoyal@redhat.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
linux-kernel@vger.kernel.org, jens.axboe@oracle.com,
containers@lists.linux-foundation.org, dm-devel@redhat.com,
nauman@google.com, dpshah@google.com, lizf@cn.fujitsu.com,
mikew@google.com, fchecconi@gmail.com, paolo.valente@unimore.it,
ryov@valinux.co.jp, fernando@oss.ntt.co.jp,
s-uchida@ap.jp.nec.com, taka@valinux.co.jp,
guijianfeng@cn.fujitsu.com, jmoyer@redhat.com,
dhaval@linux.vnet.ibm.com, balbir@linux.vnet.ibm.com,
righi.andrea@gmail.com, m-ikeda@ds.jp.nec.com, agk@redhat.com,
peterz@infradead.org, jmarchan@redhat.com,
torvalds@linux-foundation.org, mingo@elte.hu, riel@redhat.com
Subject: Re: IO scheduler based IO controller V10
Date: Fri, 25 Sep 2009 00:14:59 -0400 [thread overview]
Message-ID: <20090925041459.GA13744@redhat.com> (raw)
In-Reply-To: <20090925100952.55c2dd7a.kamezawa.hiroyu@jp.fujitsu.com>
On Fri, Sep 25, 2009 at 10:09:52AM +0900, KAMEZAWA Hiroyuki wrote:
> On Thu, 24 Sep 2009 14:33:15 -0700
> Andrew Morton <akpm@linux-foundation.org> wrote:
> > > Test5 (Fairness for async writes, Buffered Write Vs Buffered Write)
> > > ===================================================================
> > > Fairness for async writes is tricky, and the biggest reason is that async
> > > writes are cached in higher layers (page cache) as well as possibly in the
> > > filesystem layer (btrfs, xfs, etc.), and are dispatched to lower layers not
> > > necessarily in a proportional manner.
> > >
> > > For example, consider two dd threads reading /dev/zero as input and writing
> > > out huge files. Very soon we will cross vm_dirty_ratio and each dd thread
> > > will be forced to write out some pages to disk before more pages can be
> > > dirtied. But the dirty pages picked are not necessarily those of the same
> > > thread. Writeout can very well pick the inode of the lower-priority dd
> > > thread, so effectively the higher-weight dd does writeouts of the
> > > lower-weight dd's pages and we don't see service differentiation.
> > >
> > > IOW, the core problem with buffered-write fairness is that the higher-weight
> > > thread does not throw enough IO traffic at the IO controller to keep the
> > > queue continuously backlogged. In my testing, there are many 0.2 to 0.8
> > > second intervals where the higher-weight queue is empty, and in that
> > > duration the lower-weight queue gets a lot of work done, giving the
> > > impression that there was no service differentiation.
> > >
> > > In summary, from the IO controller's point of view, async write support is
> > > there. Because the page cache has not been designed so that a higher
> > > prio/weight writer can do more writeout than a lower prio/weight writer,
> > > getting service differentiation is hard; it is visible in some cases and
> > > not in others.
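(For reference, the two-writer workload described above can be sketched as
follows. File names and sizes are illustrative and scaled down so the sketch
runs anywhere; the original runs used huge files to cross vm_dirty_ratio, and
the cgroup weight setup is omitted.)

```shell
# Two concurrent buffered dd writers fed from /dev/zero, as in Test5.
# Sizes here are tiny (4k * 256 = 1 MiB each); the real test wrote files
# large enough to exceed the dirty-ratio threshold.
dd if=/dev/zero of=/tmp/wtest1.dat bs=4k count=256 2>/dev/null &
dd if=/dev/zero of=/tmp/wtest2.dat bs=4k count=256 2>/dev/null &
wait
# Each file should be 1048576 bytes.
ls -l /tmp/wtest1.dat /tmp/wtest2.dat
```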
> >
> > Here's where it all falls to pieces.
> >
> > For async writeback we just don't care about IO priorities. Because
> > from the point of view of the userspace task, the write was async! It
> > occurred at memory bandwidth speed.
> >
> > It's only when the kernel's dirty memory thresholds start to get
> > exceeded that we start to care about prioritisation. And at that time,
> > all dirty memory (within a memcg?) is equal - a high-ioprio dirty page
> > consumes just as much memory as a low-ioprio dirty page.
> >
> > So when balance_dirty_pages() hits, what do we want to do?
> >
> > I suppose that all we can do is to block low-ioprio processes more
> > aggressively at the VFS layer, to reduce the rate at which they're
> > dirtying memory so as to give high-ioprio processes more of the disk
> > bandwidth.
> >
> > But you've gone and implemented all of this stuff at the io-controller
> > level and not at the VFS level so you're, umm, screwed.
> >
>
> I think I must support dirty-ratio in the memcg layer. But not yet.
> I can't easily imagine how the system will work if both dirty-ratio and
> the io-controller cgroup are supported.
IIUC, you are suggesting a per-memory-cgroup dirty ratio, and a writer will be
throttled if the dirty ratio is crossed. Makes sense to me. Just that the io
controller and memory controller shall have to be mounted together.
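As a rough sketch (controller option names are illustrative here, since this
patchset's io controller was out of tree; mounting needs root and kernel
support, so failure is tolerated), co-mounting could look like:

```shell
# Hypothetical co-mount of the memory controller and an io controller on
# a single cgroup hierarchy, so that per-memcg dirty limits and io
# scheduling see the same group boundaries. The "blkio" controller name
# is an assumption, not this patchset's actual name.
mkdir -p /tmp/cgroup/joint
mount -t cgroup -o memory,blkio none /tmp/cgroup/joint 2>/dev/null \
    || echo "mount skipped (needs root and controller support)"
```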
Thanks
Vivek
> But considering using them as a set of
> cgroups, called containers (zones?), it will not be bad, I think.
>
> The final bottleneck queue for fairness in a usual workload on a usual (small)
> server will be ext3's journal, I wonder ;)
>
> Thanks,
> -Kame
>
>
> > Importantly screwed! It's a very common workload pattern, and one
> > which causes tremendous amounts of IO to be generated very quickly,
> > traditionally causing bad latency effects all over the place. And we
> > have no answer to this.
> >
> > > Vanilla CFQ Vs IO Controller CFQ
> > > ================================
> > > We have not fundamentally changed CFQ; instead we enhanced it to also support
> > > hierarchical io scheduling. In the process there are invariably small changes
> > > here and there as new scenarios come up. Running some tests here and comparing
> > > both CFQs to see if there is any major deviation in behavior.
> > >
> > > Test1: Sequential Readers
> > > =========================
> > > [fio --rw=read --bs=4K --size=2G --runtime=30 --direct=1 --numjobs=<1 to 16> ]
> > >
> > > IO scheduler: Vanilla CFQ
> > >
> > > nr Max-bdwidth Min-bdwidth Agg-bdwidth Max-latency
> > > 1 35499KiB/s 35499KiB/s 35499KiB/s 19195 usec
> > > 2 17089KiB/s 13600KiB/s 30690KiB/s 118K usec
> > > 4 9165KiB/s 5421KiB/s 29411KiB/s 380K usec
> > > 8 3815KiB/s 3423KiB/s 29312KiB/s 830K usec
> > > 16 1911KiB/s 1554KiB/s 28921KiB/s 1756K usec
> > >
> > > IO scheduler: IO controller CFQ
> > >
> > > nr Max-bdwidth Min-bdwidth Agg-bdwidth Max-latency
> > > 1 34494KiB/s 34494KiB/s 34494KiB/s 14482 usec
> > > 2 16983KiB/s 13632KiB/s 30616KiB/s 123K usec
> > > 4 9237KiB/s 5809KiB/s 29631KiB/s 372K usec
> > > 8 3901KiB/s 3505KiB/s 29162KiB/s 822K usec
> > > 16 1895KiB/s 1653KiB/s 28945KiB/s 1778K usec
> > >
> > > Test2: Sequential Writers
> > > =========================
> > > [fio --rw=write --bs=4K --size=2G --runtime=30 --direct=1 --numjobs=<1 to 16> ]
> > >
> > > IO scheduler: Vanilla CFQ
> > >
> > > nr Max-bdwidth Min-bdwidth Agg-bdwidth Max-latency
> > > 1 22669KiB/s 22669KiB/s 22669KiB/s 401K usec
> > > 2 14760KiB/s 7419KiB/s 22179KiB/s 571K usec
> > > 4 5862KiB/s 5746KiB/s 23174KiB/s 444K usec
> > > 8 3377KiB/s 2199KiB/s 22427KiB/s 1057K usec
> > > 16 2229KiB/s 556KiB/s 20601KiB/s 5099K usec
> > >
> > > IO scheduler: IO Controller CFQ
> > >
> > > nr Max-bdwidth Min-bdwidth Agg-bdwidth Max-latency
> > > 1 22911KiB/s 22911KiB/s 22911KiB/s 37319 usec
> > > 2 11752KiB/s 11632KiB/s 23383KiB/s 245K usec
> > > 4 6663KiB/s 5409KiB/s 23207KiB/s 384K usec
> > > 8 3161KiB/s 2460KiB/s 22566KiB/s 935K usec
> > > 16 1888KiB/s 795KiB/s 21349KiB/s 3009K usec
> > >
> > > Test3: Random Readers
> > > =========================
> > > [fio --rw=randread --bs=4K --size=2G --runtime=30 --direct=1 --numjobs=<1 to 16> ]
> > >
> > > IO scheduler: Vanilla CFQ
> > >
> > > nr Max-bdwidth Min-bdwidth Agg-bdwidth Max-latency
> > > 1 484KiB/s 484KiB/s 484KiB/s 22596 usec
> > > 2 229KiB/s 196KiB/s 425KiB/s 51111 usec
> > > 4 119KiB/s 73KiB/s 405KiB/s 2344 msec
> > > 8 93KiB/s 23KiB/s 399KiB/s 2246 msec
> > > 16 38KiB/s 8KiB/s 328KiB/s 3965 msec
> > >
> > > IO scheduler: IO Controller CFQ
> > >
> > > nr Max-bdwidth Min-bdwidth Agg-bdwidth Max-latency
> > > 1 483KiB/s 483KiB/s 483KiB/s 29391 usec
> > > 2 229KiB/s 196KiB/s 426KiB/s 51625 usec
> > > 4 132KiB/s 88KiB/s 417KiB/s 2313 msec
> > > 8 79KiB/s 18KiB/s 389KiB/s 2298 msec
> > > 16 43KiB/s 9KiB/s 327KiB/s 3905 msec
> > >
> > > Test4: Random Writers
> > > =====================
> > > [fio --rw=randwrite --bs=4K --size=2G --runtime=30 --direct=1 --numjobs=<1 to 16> ]
> > >
> > > IO scheduler: Vanilla CFQ
> > >
> > > nr Max-bdwidth Min-bdwidth Agg-bdwidth Max-latency
> > > 1 14641KiB/s 14641KiB/s 14641KiB/s 93045 usec
> > > 2 7896KiB/s 1348KiB/s 9245KiB/s 82778 usec
> > > 4 2657KiB/s 265KiB/s 6025KiB/s 216K usec
> > > 8 951KiB/s 122KiB/s 3386KiB/s 1148K usec
> > > 16 66KiB/s 22KiB/s 829KiB/s 1308 msec
> > >
> > > IO scheduler: IO Controller CFQ
> > >
> > > nr Max-bdwidth Min-bdwidth Agg-bdwidth Max-latency
> > > 1 14454KiB/s 14454KiB/s 14454KiB/s 74623 usec
> > > 2 4595KiB/s 4104KiB/s 8699KiB/s 135K usec
> > > 4 3113KiB/s 334KiB/s 5782KiB/s 200K usec
> > > 8 1146KiB/s 95KiB/s 3832KiB/s 593K usec
> > > 16 71KiB/s 29KiB/s 814KiB/s 1457 msec
> > >
> > > Notes:
> > > - It does not look like anything has changed significantly.
> > >
> > > Previous versions of the patches were posted here.
> > > ------------------------------------------------
> > >
> > > (V1) http://lkml.org/lkml/2009/3/11/486
> > > (V2) http://lkml.org/lkml/2009/5/5/275
> > > (V3) http://lkml.org/lkml/2009/5/26/472
> > > (V4) http://lkml.org/lkml/2009/6/8/580
> > > (V5) http://lkml.org/lkml/2009/6/19/279
> > > (V6) http://lkml.org/lkml/2009/7/2/369
> > > (V7) http://lkml.org/lkml/2009/7/24/253
> > > (V8) http://lkml.org/lkml/2009/8/16/204
> > > (V9) http://lkml.org/lkml/2009/8/28/327
> > >
> > > Thanks
> > > Vivek