xen-devel.lists.xenproject.org archive mirror
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
To: xen-devel@lists.xen.org
Subject: Re: blkback disk I/O limit patch
Date: Thu, 31 Jan 2013 09:14:15 +0400
Message-ID: <CACaajQuC28hKkXNJwWQPHPvm28WdywtUyC5jiXbrWwHgb3qZkw@mail.gmail.com>
In-Reply-To: <CACaajQsTVvdoatXSTngDL-OPW4j6Oz2w4Qp+e1GY=6Y4VRWJ7w@mail.gmail.com>

Sorry, I forgot to send the patch:
https://bitbucket.org/go2clouds/patches/raw/master/xen_blkback_limit/3.6.9-1.patch
The patch is against kernel 3.6.9, but if needed I can rebase it onto
the current Linus git tree.
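
Roughly speaking, limiting by IOPS means counting requests per device
and deferring dispatch once a per-second budget is used up. The sketch
below shows only that accounting idea in plain userspace C; the names
and structure are simplified for illustration and are not the actual
code from the patch (see the link above for that).

/*
 * Minimal userspace sketch of a per-device IOPS limiter: a request
 * budget that refills once per second.  Illustrative only -- the real
 * check belongs in blkback's request handling, and the names here are
 * made up for this example.
 */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

struct iops_limit {
	unsigned int limit;	/* allowed requests per second, 0 = unlimited */
	unsigned int used;	/* requests already issued this second */
	time_t window;		/* start of the current one-second window */
};

/* Return true if one more request may be dispatched now. */
static bool iops_allow(struct iops_limit *l)
{
	time_t now = time(NULL);

	if (l->limit == 0)
		return true;		/* no limit configured */

	if (now != l->window) {		/* new second: reset the budget */
		l->window = now;
		l->used = 0;
	}

	if (l->used >= l->limit)
		return false;		/* over the limit: caller must delay */

	l->used++;
	return true;
}

int main(void)
{
	struct iops_limit l = { .limit = 100, .used = 0, .window = 0 };
	unsigned int dispatched = 0, deferred = 0;

	/* Pretend 1000 requests arrive within the same second. */
	for (int i = 0; i < 1000; i++) {
		if (iops_allow(&l))
			dispatched++;
		else
			deferred++;	/* blkback would delay/requeue these */
	}

	printf("dispatched %u, deferred %u\n", dispatched, deferred);
	return 0;
}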

2013/1/31 Vasiliy Tolstov <v.tolstov@selfip.ru>:
> Hello. For our own needs I have written a simple blkback disk I/O
> limit patch that can limit disk I/O based on IOPS. I need a Xen-based
> IOPS shaper because of our storage architecture.
> Our storage nodes provide disks via SCST over an InfiniBand network.
> On the Xen nodes we attach these disks via SRP. Each Xen node connects
> to two storage nodes at the same time, and multipath provides failover.
>
> Each disk contains LVM (not CLVM), and for each virtual machine we
> create a PV disk. Via device-mapper RAID1 we then assemble the disk
> used by the domU. In this setup, if one storage node fails, the VM
> keeps working with the single remaining disk in the RAID1.
>
> Everything works great, but in this setup we cannot use cgroups or
> dm-ioband. Some time ago the CFQ disk scheduler stopped working with
> BIO-based devices and now provides control only at the bottom layer.
> (In our case we could apply CFQ only to the SRP disk, and so shape
> I/O only for all clients of a Xen node at once.)
> dm-ioband is unstable when a domU generates massive I/O: our tests
> show that if a domU runs ext4 at around 20000 IOPS, dom0 sometimes
> crashes or the disk gets corrupted. Also, with dm-ioband, if one
> storage node goes down we sometimes lose data from the disk. And
> dm-ioband cannot provide on-the-fly control of IOPS.
>
> This patch tries to solve our problems. Could someone from the Xen
> team take a look at it and say how the code looks? What do I need to
> change or rewrite? Maybe at some point it can be used in the mainline
> Linux Xen tree... (I hope).
> This patch is only for phy devices. For blktap devices I have spoken
> with Thanos Makatos (the author of blktap3), and this functionality
> may be added to blktap3 in the future.
>
> Thank you.
>
> --
> Vasiliy Tolstov,
> Clodo.ru
> e-mail: v.tolstov@selfip.ru
> jabber: vase@selfip.ru



-- 
Vasiliy Tolstov,
Clodo.ru
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

Thread overview: 11+ messages
2013-01-31  5:12 blkback disk I/O limit patch Vasiliy Tolstov
2013-01-31  5:14 ` Vasiliy Tolstov [this message]
2013-01-31 18:05   ` Wei Liu
2013-02-01  6:53     ` Vasiliy Tolstov
2013-02-01 14:42       ` Konrad Rzeszutek Wilk
2013-02-05 13:14         ` [PATCH 1/1] drivers/block/xen-blkback: Limit blkback i/o Vasiliy Tolstov
2013-02-05 13:17         ` blkback disk I/O limit patch Vasiliy Tolstov
2013-02-01 10:59   ` Vasiliy Tolstov
2013-02-05 15:37 ` Alex Bligh
2013-02-05 16:36   ` Vasiliy Tolstov
2013-02-05 18:01     ` Alex Bligh
