From: Vivek Goyal <vgoyal@redhat.com>
To: Chad Talbott <ctalbott@google.com>
Cc: lsf-pc@lists.linuxfoundation.org, linux-fsdevel@vger.kernel.org
Subject: Re: [LSF/FS TOPIC] I/O performance isolation for shared storage
Date: Mon, 7 Feb 2011 15:38:01 -0500	[thread overview]
Message-ID: <20110207203801.GL7437@redhat.com> (raw)
In-Reply-To: <AANLkTi=HPdvU0MpaE-CbDNvOegDROHMXpW9Bh8gewc1r@mail.gmail.com>

On Mon, Feb 07, 2011 at 11:40:26AM -0800, Chad Talbott wrote:
> On Mon, Feb 7, 2011 at 10:06 AM, Vivek Goyal <vgoyal@redhat.com> wrote:
> > On Fri, Feb 04, 2011 at 03:07:15PM -0800, Chad Talbott wrote:
> >> I'd like to hear more about this.
> >
> > If a group dispatches some IO and then goes empty, it will be
> > deleted from the service tree, and when new IO comes in it will be
> > put at the end of the service tree. That way all the groups become
> > more of a round robin and there is no service differentiation.
> >
> > I was thinking that when a group gets backlogged, instead of putting
> > it at the end of the service tree, we come up with a new mechanism
> > where it is placed at a certain offset from st->min_vdisktime. This
> > offset gives more of a boost to a high prio group and less to a low
> > prio group. That way, even if a group gets deleted and comes back
> > again with more IO, there is a chance it gets scheduled ahead of an
> > already queued low prio group, and we could see some service
> > differentiation even with idling disabled.
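
To make the above concrete, here is a rough sketch of the two placement
policies (plain userspace C; struct io_group, struct service_tree,
place_at_end(), place_at_offset() and the constants are all made up for
illustration, and the rbtree is reduced to a couple of fields, so this
is not the actual cfq-iosched code; the weight-based scaling is just one
way the per-priority offset could be derived):

#include <stdint.h>

struct io_group {
	uint64_t vdisktime;	/* sort key on the service tree; smaller = served sooner */
	unsigned int weight;	/* e.g. 100..1000, higher means higher prio */
};

struct service_tree {
	uint64_t min_vdisktime;		/* monotonically increasing floor of the tree */
	uint64_t last_vdisktime;	/* key of the right-most queued group */
	int nr_groups;
};

#define DEFAULT_WEIGHT	500U
#define BASE_OFFSET	1000U	/* arbitrary "one slice worth" of vtime */

/* Current behaviour (roughly): a group that went empty and comes back
 * is queued behind everybody, so its weight no longer matters. */
uint64_t place_at_end(const struct service_tree *st)
{
	if (st->nr_groups)
		return st->last_vdisktime + BASE_OFFSET;
	return st->min_vdisktime;
}

/* Proposed: place it at a weight-scaled offset from min_vdisktime, so
 * a heavier (higher prio) group gets a smaller offset, i.e. a bigger
 * boost, and can sort ahead of an already queued lighter group. */
uint64_t place_at_offset(const struct service_tree *st,
			 const struct io_group *grp)
{
	uint64_t offset = (uint64_t)BASE_OFFSET * DEFAULT_WEIGHT / grp->weight;

	return st->min_vdisktime + offset;
}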
> 
> This is interesting.  I think Nauman may have come up with a different
> method to address similar concerns.  In his method, we remember a
> group's vdisktime even after it is removed from the service tree.
> Implemented by itself, this would lead to fairness over too long a
> time window, so only when the disk becomes idle do we "forget"
> everyone's vdisktime.  We should be sending that patch out Real Soon
> Now, along with the rest.
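
If I understand the idea correctly, in sketch form it is something like
this (again simplified userspace C with made-up names; group_removed(),
group_requeue() and disk_went_idle() are hypothetical helpers, not the
patch you are referring to):

#include <stdbool.h>
#include <stdint.h>

struct io_group {
	uint64_t vdisktime;
	bool vdisktime_valid;	/* do we still trust the saved key? */
};

struct service_tree {
	uint64_t min_vdisktime;
	uint64_t last_vdisktime;
	int nr_groups;
};

/* On removal from the service tree, keep the key instead of discarding it. */
void group_removed(struct io_group *grp)
{
	grp->vdisktime_valid = true;
}

/* When the group gets backlogged again, reuse the remembered key if it
 * is still ahead of the tree's floor; otherwise fall back to the tail. */
uint64_t group_requeue(const struct service_tree *st, struct io_group *grp)
{
	if (grp->vdisktime_valid && grp->vdisktime >= st->min_vdisktime)
		return grp->vdisktime;
	return st->nr_groups ? st->last_vdisktime : st->min_vdisktime;
}

/* When the whole disk goes idle, "forget" everyone's history. */
void disk_went_idle(struct io_group *groups, int nr)
{
	for (int i = 0; i < nr; i++)
		groups[i].vdisktime_valid = false;
}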

I have thought about this in the past. I think there are still a few
concerns there.

- How do we determine which group's vdisktime is still valid, and how
  do we invalidate all the past vdisktimes?

- When idling is disabled, most likely groups will dispatch a bunch of
  requests and go away, so the slice used might be just 1 jiffy or even
  less. In that case all the groups end up with the same vdisktime at
  expiry and there is no service differentiation (sketched below).

- Even if we reuse the previous vdisktime, most likely it has already
  fallen behind st->min_vdisktime, which is a monotonically increasing
  number. How is that handled?
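
To illustrate the second and third points, a toy calculation (the
charge() formula is a deliberate simplification of weight-scaled vtime
charging, and the clamp in the second half is just one plausible way a
stale key could be handled, shown only to make the question concrete):

#include <stdint.h>
#include <stdio.h>

#define DEFAULT_WEIGHT	500U

/* Simplified charging: advance the key by the used slice scaled
 * inversely with the weight (heavier group -> smaller advance). */
uint64_t charge(uint64_t vdisktime, uint64_t slice_used, unsigned int weight)
{
	return vdisktime + slice_used * DEFAULT_WEIGHT / weight;
}

int main(void)
{
	/* Both groups start at min_vdisktime = 0, dispatch with idling
	 * disabled, use about 1 jiffy and go away: the keys barely
	 * separate no matter what the weights are. */
	uint64_t heavy = charge(0, 1, 1000);	/* 0 + 500/1000 = 0 */
	uint64_t light = charge(0, 1, 100);	/* 0 + 500/100  = 5 */
	printf("heavy=%llu light=%llu\n",
	       (unsigned long long)heavy, (unsigned long long)light);

	/* min_vdisktime only moves forward, so a key remembered from
	 * long ago ends up behind it. Clamping to the floor (one
	 * plausible handling) throws away exactly the history we kept. */
	uint64_t min_vdisktime = 10000, remembered = 42;
	uint64_t reinserted = remembered > min_vdisktime ?
					remembered : min_vdisktime;
	printf("reinserted at %llu\n", (unsigned long long)reinserted);
	return 0;
}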

Thanks
Vivek

Thread overview: 8 messages
2011-02-04  1:50 [LSF/FS TOPIC] I/O performance isolation for shared storage Chad Talbott
2011-02-04  2:31 ` Vivek Goyal
2011-02-04 23:07   ` Chad Talbott
2011-02-07 18:06     ` Vivek Goyal
2011-02-07 19:40       ` Chad Talbott
2011-02-07 20:38         ` Vivek Goyal [this message]
2011-02-15 12:54     ` Jan Kara
2011-02-15 23:15       ` Chad Talbott
