linux-fsdevel.vger.kernel.org archive mirror
* [LSF/FS TOPIC] I/O performance isolation for shared storage
@ 2011-02-04  1:50 Chad Talbott
  2011-02-04  2:31 ` Vivek Goyal
  0 siblings, 1 reply; 8+ messages in thread
From: Chad Talbott @ 2011-02-04  1:50 UTC (permalink / raw)
  To: lsf-pc; +Cc: linux-fsdevel

I/O performance is the bottleneck in many systems, from phones to
servers. Knowing which request to schedule at any moment is crucial
for systems that must deliver both interactive latencies and high
throughput. When you're watching a video on your desktop, you don't
want it to skip when you build a kernel.

To address this in our environment, Google has deployed the
blk-cgroup code worldwide, and I'd like to share some of our
experiences. We've modified it for our purposes, and are in the
process of proposing those changes upstream:

  - Page tracking for buffered writes
  - Fairness-preserving preemption across cgroups

Further work remains on fine-grained accounting and isolation. For
example, many file servers in a Google cluster do I/O on behalf of
hundreds, even thousands of clients. Each client has different
service requirements, and it's inefficient to map them to
(cgroup, task) pairs.
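
As background for the discussion, the per-cgroup isolation that
blk-cgroup provides today might be sketched as follows. This is an
illustrative example only, assuming a cgroup v1 hierarchy with the
blkio controller and the CFQ scheduler; the mount point, group names,
and PIDs are hypothetical.

```shell
# Hypothetical sketch: isolating two workloads with the blkio
# controller (cgroup v1). Requires root and CONFIG_BLK_CGROUP.

# Mount the blkio controller (path is illustrative).
mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

# One group per workload.
mkdir /sys/fs/cgroup/blkio/video /sys/fs/cgroup/blkio/build

# Give the interactive workload the larger share of disk time.
echo 900 > /sys/fs/cgroup/blkio/video/blkio.weight
echo 100 > /sys/fs/cgroup/blkio/build/blkio.weight

# Attach each workload's tasks to its group ($VIDEO_PID and
# $BUILD_PID are placeholders for real process IDs).
echo $VIDEO_PID > /sys/fs/cgroup/blkio/video/tasks
echo $BUILD_PID > /sys/fs/cgroup/blkio/build/tasks
```

The limitation raised above is visible here: every distinct service
class needs its own (cgroup, task) pair, which does not scale to a
file server acting on behalf of thousands of remote clients.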



Thread overview: 8+ messages
2011-02-04  1:50 [LSF/FS TOPIC] I/O performance isolation for shared storage Chad Talbott
2011-02-04  2:31 ` Vivek Goyal
2011-02-04 23:07   ` Chad Talbott
2011-02-07 18:06     ` Vivek Goyal
2011-02-07 19:40       ` Chad Talbott
2011-02-07 20:38         ` Vivek Goyal
2011-02-15 12:54     ` Jan Kara
2011-02-15 23:15       ` Chad Talbott
