From: Vivek Goyal <vgoyal@redhat.com>
To: Andrea Righi <righi.andrea@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	nauman@google.com, dpshah@google.com, lizf@cn.fujitsu.com,
	mikew@google.com, fchecconi@gmail.com, paolo.valente@unimore.it,
	jens.axboe@oracle.com, ryov@valinux.co.jp,
	fernando@oss.ntt.co.jp, s-uchida@ap.jp.nec.com,
	taka@valinux.co.jp, guijianfeng@cn.fujitsu.com,
	jmoyer@redhat.com, dhaval@linux.vnet.ibm.com,
	balbir@linux.vnet.ibm.com, linux-kernel@vger.kernel.org,
	containers@lists.linux-foundation.org, agk@redhat.com,
	dm-devel@redhat.com, snitzer@redhat.com, m-ikeda@ds.jp.nec.com,
	peterz@infradead.org
Subject: Re: IO scheduler based IO Controller V2
Date: Thu, 7 May 2009 10:11:26 -0400
Message-ID: <20090507141126.GA9463@redhat.com>
In-Reply-To: <20090507090450.GA4613@linux>

On Thu, May 07, 2009 at 11:04:50AM +0200, Andrea Righi wrote:
> On Wed, May 06, 2009 at 05:52:35PM -0400, Vivek Goyal wrote:
> > > > Without io-throttle patches
> > > > ---------------------------
> > > > - Two readers, first BE prio 7, second BE prio 0
> > > > 
> > > > 234179072 bytes (234 MB) copied, 4.12074 s, 56.8 MB/s
> > > > High prio reader finished
> > > > 234179072 bytes (234 MB) copied, 5.36023 s, 43.7 MB/s
> > > > 
> > > > Note: There is no service differentiation between prio 0 and prio 7 task
> > > >       with io-throttle patches.
> > > > 
> > > > Test 3
> > > > ======
> > > > - Run one RT reader and one BE reader in the root cgroup without any
> > > >   limitations. I guess this should mean unlimited BW and the behavior
> > > >   should be the same as CFQ without the io-throttling patches.
> > > > 
> > > > With io-throttle patches
> > > > =========================
> > > > Ran the test 4 times because I was getting different results in different
> > > > runs.
> > > > 
> > > > - Two readers, one RT prio 0  other BE prio 7
> > > > 
> > > > 234179072 bytes (234 MB) copied, 2.74604 s, 85.3 MB/s
> > > > 234179072 bytes (234 MB) copied, 5.20995 s, 44.9 MB/s
> > > > RT task finished
> > > > 
> > > > 234179072 bytes (234 MB) copied, 4.54417 s, 51.5 MB/s
> > > > RT task finished
> > > > 234179072 bytes (234 MB) copied, 5.23396 s, 44.7 MB/s
> > > > 
> > > > 234179072 bytes (234 MB) copied, 5.17727 s, 45.2 MB/s
> > > > RT task finished
> > > > 234179072 bytes (234 MB) copied, 5.25894 s, 44.5 MB/s
> > > > 
> > > > 234179072 bytes (234 MB) copied, 2.74141 s, 85.4 MB/s
> > > > 234179072 bytes (234 MB) copied, 5.20536 s, 45.0 MB/s
> > > > RT task finished
> > > > 
> > > > Note: Out of 4 runs, it looks like twice there was complete priority
> > > >       inversion and the RT task finished after the BE task. The other
> > > >       two times, the difference between the BW of the RT and BE tasks
> > > >       was much smaller than without the patches. In fact, once it was
> > > >       almost the same.
> > > 
> > > This is strange. If you don't set any limit there shouldn't be any
> > > difference with respect to the other case (without io-throttle patches).
> > > 
> > > At worst, a small overhead given by task_to_iothrottle() under
> > > rcu_read_lock(). I'll repeat this test ASAP and see if I can
> > > reproduce this strange behaviour.
> > 
> > Ya, I also found this strange. At least in the root group there should not
> > be any behavior change (at most one might expect a little drop in
> > throughput because of the extra code).
> 
> Hi Vivek,
> 
> I'm not able to reproduce the strange behaviour above.
> 
> Which commands are you running exactly? Is the system isolated (stupid
> question), with no cron or background tasks doing IO during the tests?
> 
> Following is the script I've used:
> 
> $ cat test.sh
> #!/bin/sh
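> # drop the page cache so both readers really hit the disk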
> echo 3 > /proc/sys/vm/drop_caches
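> # start an RT-class (prio 0) reader and a BE-class (prio 7) reader;
> # each "cat /proc/$!/cgroup" prints the cgroup of the job just started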
> ionice -c 1 -n 0 dd if=bigfile1 of=/dev/null bs=1M 2>&1 | sed "s/\(.*\)/RT: \1/" &
> cat /proc/$!/cgroup | sed "s/\(.*\)/RT: \1/"
> ionice -c 2 -n 7 dd if=bigfile2 of=/dev/null bs=1M 2>&1 | sed "s/\(.*\)/BE: \1/" &
> cat /proc/$!/cgroup | sed "s/\(.*\)/BE: \1/"
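> # wait until both background readers have finished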
> for i in 1 2; do
> 	wait
> done
> 
> And the results on my PC:
> 

[..]

> The difference seems to be just the expected overhead.

Hmm, something is really amiss here. I took your script and ran it on my
system and I still see the issue. There is nothing else running on the
system and it is isolated.
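As a sanity check on the isolation, one quick way to confirm that nothing
else is doing IO (assuming the sysstat tools are installed) is to watch the
disk stats for a few seconds while idle:

  iostat -x 1 5

On an isolated system this shows essentially zero reads/writes per second
between runs.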

2.6.30-rc4 + io-throttle patches V16
===================================
It is a freshly booted system with nothing extra running on it. This is a
4-core system.

Disk1
=====
This is a fast disk which supports a queue depth of 31.

Following is the output picked from dmesg for my device properties.
[    3.016099] sd 2:0:0:0: [sdb] 488397168 512-byte hardware sectors: (250 GB/232 GiB)
[    3.016188] sd 2:0:0:0: Attached scsi generic sg2 type 0
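As a quick cross-check, the queue depth can also be read back from sysfs
(assuming the sdb name from the dmesg lines above):

  cat /sys/block/sdb/device/queue_depth

which should report 31 for this disk.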

Following are the results of 4 runs of your script. (I just changed the
script to read the right file on my system: if=/mnt/sdb/zerofile1.)
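For reference, the changed lines look like this (zerofile2 as the second
input file is my assumption, mirroring your bigfile1/bigfile2 naming):

ionice -c 1 -n 0 dd if=/mnt/sdb/zerofile1 of=/dev/null bs=1M 2>&1 | sed "s/\(.*\)/RT: \1/" &
ionice -c 2 -n 7 dd if=/mnt/sdb/zerofile2 of=/dev/null bs=1M 2>&1 | sed "s/\(.*\)/BE: \1/" &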

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 4.38435 s, 53.4 MB/s
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 5.20706 s, 45.0 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 5.12953 s, 45.7 MB/s
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 5.23573 s, 44.7 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 3.54644 s, 66.0 MB/s
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 5.19406 s, 45.1 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 5.21908 s, 44.9 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 5.23802 s, 44.7 MB/s

Disk2
=====
This is a relatively slow disk with no command queuing.

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 7.06471 s, 33.1 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 8.01571 s, 29.2 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 7.89043 s, 29.7 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 8.03428 s, 29.1 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 7.38942 s, 31.7 MB/s
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 8.01146 s, 29.2 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 7.78351 s, 30.1 MB/s
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 8.06292 s, 29.0 MB/s

Disk3
=====
This is an Intel SSD.

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 0.993735 s, 236 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 1.98772 s, 118 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 1.8616 s, 126 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 1.98499 s, 118 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 1.01174 s, 231 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 1.99143 s, 118 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 1.96132 s, 119 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 1.97746 s, 118 MB/s

Results without io-throttle patches (vanilla 2.6.30-rc4)
========================================================

Disk 1
======
This is the relatively fast SATA drive with command queuing enabled.

RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 2.84065 s, 82.4 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 5.30087 s, 44.2 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 2.69688 s, 86.8 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 5.18175 s, 45.2 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 2.73279 s, 85.7 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 5.21803 s, 44.9 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 2.69304 s, 87.0 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 5.17821 s, 45.2 MB/s

Disk 2
======
Slower disk with no command queuing.

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 4.29453 s, 54.5 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 8.04978 s, 29.1 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 3.96924 s, 59.0 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 7.74984 s, 30.2 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 4.11254 s, 56.9 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 7.8678 s, 29.8 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 3.95979 s, 59.1 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 7.73976 s, 30.3 MB/s

Disk3
=====
Intel SSD

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 0.996762 s, 235 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 1.93268 s, 121 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 0.98511 s, 238 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 1.92481 s, 122 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 0.986981 s, 237 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 1.9312 s, 121 MB/s

[root@chilli io-throttle-tests]# ./andrea-test-script.sh 
RT: 223+1 records in
RT: 223+1 records out
RT: 234179072 bytes (234 MB) copied, 0.988448 s, 237 MB/s
BE: 223+1 records in
BE: 223+1 records out
BE: 234179072 bytes (234 MB) copied, 1.93885 s, 121 MB/s

So I am still seeing the issue with different kinds of disks. At this point
I am really not sure why I am seeing such results.
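One thing worth double-checking on both kernels is that CFQ is the active
scheduler on all three disks, since the RT/BE differentiation comes from
CFQ. For example (device name assumed):

  cat /sys/block/sdb/queue/scheduler

The active scheduler is the one shown in square brackets, e.g. [cfq].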

I have the following patches applied on 2.6.30-rc4 (V16).

3954-vivek.goyal2008-res_counter-introduce-ratelimiting-attributes.patch
3955-vivek.goyal2008-page_cgroup-provide-a-generic-page-tracking-infrastructure.patch
3956-vivek.goyal2008-io-throttle-controller-infrastructure.patch
3957-vivek.goyal2008-kiothrottled-throttle-buffered-io.patch
3958-vivek.goyal2008-io-throttle-instrumentation.patch
3959-vivek.goyal2008-io-throttle-export-per-task-statistics-to-userspace.patch

Thanks
Vivek
