public inbox for linux-xfs@vger.kernel.org
From: Lutz Vieweg <lvml@5t9.de>
To: Dave Chinner <david@fromorbit.com>
Cc: xfs@oss.sgi.com
Subject: Re: Does XFS support cgroup writeback limiting?
Date: Wed, 25 Nov 2015 19:28:42 +0100	[thread overview]
Message-ID: <5655FDDA.9050502@5t9.de> (raw)
In-Reply-To: <20151123232052.GI26718@dastard>

On 11/24/2015 12:20 AM, Dave Chinner wrote:
> Just make the same mods to XFS as the ext4 patch here:
>
> http://www.spinics.net/lists/kernel/msg2014816.html

I read at http://www.spinics.net/lists/kernel/msg2014819.html
about this patch:

> Journal data which is written by jbd2 worker is left alone by
> this patch and will always be written out from the root cgroup.

If the same was done for XFS, wouldn't this mean a malicious
process could still stall other processes' attempts to write
to the filesystem by performing arbitrary amounts of meta-data
modifications in a tight loop?

>> After all, this functionality is the last piece of the
>> "isolation"-puzzle that is missing from Linux to actually
>> allow fencing off virtual machines or containers from DOSing
>> each other by using up all I/O bandwidth...
>
> Yes, I know, but no-one seems to care enough about it to provide
> regression tests for it.

Well, I could give it a try, if a shell script tinkering with
control group parameters (which requires root privileges and
could easily stall the machine) would be considered adequate for
the purpose.

I would propose a test to be performed like this:

0) Identify a block device to test on. I guess some artificially
    speed-limited DM device would be best?
    Set the speed limit to X/100 MB per second, with X configurable.
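
For step 0, a dm-delay based device might do, though the per-I/O delay
only throttles throughput indirectly, so the delay value would need
calibrating against the desired X/100 MB/s. A sketch (device path,
size and delay are placeholders; dmsetup itself needs root):

```shell
#!/bin/sh
# Sketch: build a dm-delay table line. dm-delay adds a fixed delay to
# every I/O, which caps throughput only indirectly -- the delay value
# would need calibrating against the desired X/100 MB/s rate.
build_delay_table() {
    dev="$1"        # backing block device (placeholder)
    sectors="$2"    # device size in 512-byte sectors
    delay_ms="$3"   # delay applied to every I/O
    printf '0 %s delay %s 0 %s\n' "$sectors" "$dev" "$delay_ms"
}

# Usage (needs root):
#   build_delay_table /dev/sdb1 2097152 100 | dmsetup create xfstest-slow
```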

1) Start 4 "good" plus 4 "evil" subprocesses competing for
    write-bandwidth on the block device.
    Assign the 4 "good" processes to two different control groups ("g1", "g2"),
    assign the 4 "evil" processes to further two different control
    groups ("e1", "e2"), so 4 control groups in total, with 2 tasks each.
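
Setting up the four groups could look like this, assuming a cgroup v2
(unified) hierarchy mounted at /sys/fs/cgroup -- which the writeback
throttling requires. The group names are just this test's convention:

```shell
#!/bin/sh
# Sketch, assuming a cgroup v2 hierarchy at /sys/fs/cgroup (writing to
# it needs root). Group names g1/g2/e1/e2 are this test's convention.
CGROOT=${CGROOT:-/sys/fs/cgroup}

setup_groups() {
    for g in g1 g2 e1 e2; do
        mkdir -p "$CGROOT/$g"
    done
}

assign_task() {   # $1 = group name, $2 = pid to move into it
    echo "$2" > "$CGROOT/$1/cgroup.procs"
}

# Pure helper: which group worker N of a kind belongs to
# (2 tasks per group, as in step 1 above).
group_for() {   # $1 = kind prefix (g or e), $2 = worker index 1..4
    echo "$1$(( ($2 + 1) / 2 ))"
}
```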

2) Create 3 different XFS filesystem instances on the block
    device: one for access by only the "good" processes,
    one for access by only the "evil" processes, and one for
    shared access by at least two "good" and two "evil"
    processes.
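
The three filesystems could be carved out of the rate-limited device
with linear DM targets; a sketch (names and sizes are placeholders):

```shell
#!/bin/sh
# Sketch: split the rate-limited device into three equal linear
# segments and put an XFS filesystem on each. Running it needs root;
# nothing executes until the functions are called.

# Pure helper: print the three linear-target tables for a device of
# $2 sectors (one segment per filesystem).
split_tables() {
    dev="$1"; total="$2"; seg=$(( total / 3 ))
    for i in 0 1 2; do
        printf '0 %s linear %s %s\n' "$seg" "$dev" $(( i * seg ))
    done
}

make_filesystems() {   # $1 = rate-limited device, $2 = total sectors
    n=0
    for fs in good evil shared; do
        split_tables "$1" "$2" | sed -n "$(( n + 1 ))p" \
            | dmsetup create "xfstest-$fs"
        mkfs.xfs -f "/dev/mapper/xfstest-$fs"
        mkdir -p "/mnt/xfstest-$fs"
        mount "/dev/mapper/xfstest-$fs" "/mnt/xfstest-$fs"
        n=$(( n + 1 ))
    done
}
```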

3) Behaviour of the processes:

    "Good" processes will attempt to write a configured amount
    of data (X MB) at 20% of the speed limit of the block device, modifying
    meta-data at a moderate rate (like creating/renaming/deleting files
    every few megabytes written).
    Half of the "good" processes write to their "good-only" filesystem,
    the other half writes to the "shared access" filesystem.

    Half of the "evil" processes will attempt to write as much data
    as possible into open files in a tight endless loop.
    The other half of the "evil" processes will permanently
    modify meta-data as quickly as possible, creating/renaming/deleting
    lots of files, also in a tight endless loop.
    Half of the "evil" processes writes to the "evil-only" filesystem,
    the other half writes to the "shared access" filesystem.
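
The three behaviours might be sketched as shell functions (paths are
placeholders; the 20% pacing follows from writing 1 MB per iteration
at X/500 MB/s, i.e. 500/X seconds per MB):

```shell
#!/bin/sh
# Sketches of the worker behaviours; all paths are placeholders and
# nothing runs until the functions are called.

# Pure helper: seconds to sleep per 1 MB written so the rate stays at
# 20% of the X/100 MB/s device limit: X/500 MB/s -> 500/X s per MB.
pause_for() { echo $(( 500 / $1 )); }

good_writer() {   # $1 = target dir, $2 = total MB to write, $3 = X
    i=0
    while [ "$i" -lt "$2" ]; do
        dd if=/dev/zero of="$1/data" bs=1M count=1 \
           oflag=append conv=notrunc 2>/dev/null
        # every few MB, moderate metadata churn
        [ $(( i % 4 )) -eq 0 ] && { touch "$1/m$i"; mv "$1/m$i" "$1/n$i"; rm "$1/n$i"; }
        sleep "$(pause_for "$3")"
        i=$(( i + 1 ))
    done
}

evil_data_writer() {      # endless buffered writes, as fast as possible
    while :; do dd if=/dev/zero of="$1/junk" bs=1M count=64 2>/dev/null; done
}

evil_metadata_writer() {  # endless create/rename/delete churn
    i=0
    while :; do
        touch "$1/f$i"; mv "$1/f$i" "$1/g$i"; rm "$1/g$i"
        i=$(( i + 1 ))
    done
}
```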


4) Test 1: Configure all 4 control groups to allow for the same
    buffered write rate percentage.

    The test is successful if all "good processes" terminate successfully
    after a time not longer than it would take to write 10 times X MB to the
    rate-limited block device.

    All processes are to be killed after all good processes terminate,
    or after some timeout. If the timeout is reached, the test fails.
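
Note the time budget is independent of X: writing 10 times X MB at
X/100 MB/s always takes 1000 seconds. The watchdog might look like:

```shell
#!/bin/sh
# Pure helper: the time budget for writing $1 times X MB at X/100 MB/s
# is $1 * 100 seconds, independent of X (factor 10 for test 1, 5 for
# test 2).
budget_secs() {   # $1 = multiple of X to write
    echo $(( $1 * 100 ))
}

# Sketch of the watchdog: wait for the good workers, fail on timeout.
wait_or_fail() {   # $1 = timeout in seconds, remaining args = good PIDs
    deadline=$(( $(date +%s) + $1 ))
    shift
    for pid; do
        while kill -0 "$pid" 2>/dev/null; do
            [ "$(date +%s)" -ge "$deadline" ] && { echo FAIL; return 1; }
            sleep 1
        done
    done
    echo PASS
}
```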


5) Test 2: Configure "e1" and "e2" to allow for "zero" buffered write rate.

    The test is successful if the "good processes" terminate successfully
    after a time not longer than it would take to write 5 times X MB to the
    rate-limited block device.

    All processes are to be killed after all good processes terminate,
    or after some timeout. If the timeout is reached, the test fails.
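
Configuring the "zero" rate would go through the cgroup v2 io
controller; I am not sure io.max accepts a literal 0, so the smallest
accepted rate (say, one page per second) may have to stand in for it.
A sketch (the device's MAJ:MIN pair is a placeholder, and the
interface names are those of the current unified hierarchy):

```shell
#!/bin/sh
# Sketch: throttle e1/e2 to (near) zero buffered write-out via the
# cgroup v2 io.max file. If io.max rejects a literal 0, the smallest
# accepted rate approximates "zero". MAJ:MIN is a placeholder; the
# actual writes need root and running cgroups.
CGROOT=${CGROOT:-/sys/fs/cgroup}
DEVNO=${DEVNO:-253:0}

# Pure helper: build the io.max line for a write-bytes/sec limit.
limit_line() {   # $1 = bytes/sec write limit
    echo "$DEVNO wbps=$1"
}

apply_evil_limits() {
    for g in e1 e2; do
        limit_line 4096 > "$CGROOT/$g/io.max"
    done
}
```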

6) Cleanup: unmount test filesystems, remove rate-limited DM device, remove
    control groups.

What do you think, could this be a reasonable plan?

Regards,

Lutz Vieweg


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 14+ messages
2015-11-23 11:05 Does XFS support cgroup writeback limiting? Lutz Vieweg
2015-11-23 20:26 ` Dave Chinner
2015-11-23 22:08   ` Lutz Vieweg
2015-11-23 23:20     ` Dave Chinner
2015-11-25 18:28       ` Lutz Vieweg [this message]
2015-11-25 21:35         ` Dave Chinner
2015-11-29 21:41           ` Lutz Vieweg
2015-11-30 23:44             ` Dave Chinner
2015-12-01  8:38             ` automatic testing of cgroup writeback limiting (was: Re: Does XFS support cgroup writeback limiting?) Martin Steigerwald
2015-12-01 16:38               ` Tejun Heo
2015-12-03  0:18                 ` automatic testing of cgroup writeback limiting Lutz Vieweg
2015-12-03 15:38                   ` Tejun Heo
2015-12-01 11:01           ` I/O 'owner' DoS probs (was Re: Does XFS support cgroup writeback limiting?) L.A. Walsh
2015-12-01 20:18             ` Dave Chinner
