linux-raid.vger.kernel.org archive mirror
* md device io request split
@ 2011-11-22  9:36 "Ramon Schönborn"
  2011-11-23  2:31 ` NeilBrown
  0 siblings, 1 reply; 4+ messages in thread
From: "Ramon Schönborn" @ 2011-11-22  9:36 UTC (permalink / raw)
  To: linux-raid

Hi,

Could someone help me understand why md splits I/O requests into 4k blocks?
iostat says:
Device:    rrqm/s   wrqm/s    r/s     w/s        rMB/s   wMB/s   avgrq-sz   avgqu-sz   await    svctm   %util
...
dm-71      4.00     5895.00   31.00   7538.00    0.14    52.54   14.25      94.69      16041    0.13    96.00
dm-96      2.00     5883.00   18.00   7670.00    0.07    52.95   14.13      104.84     13.69    0.12    96.00
md17       0.00     0.00      48.00   13234.00   0.19    51.70   8.00       0.00       0.00     0.00    0.00

md17 is a RAID1 with members dm-71 and dm-96. The I/O was generated with something like "dd if=/dev/zero bs=100k of=/dev/md17".
According to "avgrq-sz", the average request size is 8 sectors of 512 bytes, i.e. 4k.
I used kernel 3.0.7 and verified the results with a RAID5 and an older kernel version (2.6.32) as well.
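
For reference, the numbers above came from roughly the following (just a sketch; the device names are from my setup and the exact iostat invocation may need adjusting):

  # write with 100k blocks while watching the request sizes
  dd if=/dev/zero bs=100k of=/dev/md17 &
  iostat -x -m 1 md17 dm-71 dm-96
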
Why do I care about this at all?
In my case the I/O requests come from a virtual machine, where they have already been merged in a virtual device. They are then split again at the md level (on the VM host) and later re-merged at dm-71/dm-96. This seems like avoidable overhead, doesn't it?
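
In case it is relevant, this is roughly how I look at the request-size limits the block layer advertises for md17 and its members (again only a sketch; the sysfs layout may differ between kernel versions):

  # request-size limits as seen by the block layer (values in KB)
  for d in md17 dm-71 dm-96; do
      echo "== $d =="
      cat /sys/block/$d/queue/max_sectors_kb      # current limit
      cat /sys/block/$d/queue/max_hw_sectors_kb   # hardware/driver limit
  done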

regards,
Ramon Schönborn

Thread overview: 4+ messages
2011-11-22  9:36 md device io request split "Ramon Schönborn"
2011-11-23  2:31 ` NeilBrown
2011-11-23 13:22   ` "Ramon Schönborn"
2011-11-23 19:30     ` NeilBrown
