public inbox for linux-xfs@vger.kernel.org
From: Lutz Vieweg <lvml@5t9.de>
To: Tejun Heo <tj@kernel.org>, Martin Steigerwald <martin@lichtvoll.de>
Cc: linux-fsdevel@vger.kernel.org, xfs@oss.sgi.com
Subject: Re: automatic testing of cgroup writeback limiting
Date: Thu, 03 Dec 2015 01:18:48 +0100	[thread overview]
Message-ID: <565F8A68.9040401@5t9.de> (raw)
In-Reply-To: <20151201163815.GB12922@mtj.duckdns.org>

On 12/01/2015 05:38 PM, Tejun Heo wrote:
> As opposed to pages.  cgroup ownership is tracked per inode, not per
> page, so if multiple cgroups write to the same inode at the same time,
> some IOs will be incorrectly attributed.

I can't think of use cases where this could become a problem.
If more than one user/container/VM is allowed to write to the
same file at any one time, isolation is probably absent anyway ;-)

> cgroup ownership is per-inode.  IO throttling is per-device, so as
> long as multiple filesystems map to the same device, they fall under
> the same limit.

Good, that's why I assumed it would be useful to include more than one
filesystem on the same device in the test scenario: just to see whether
unexpected issues arise when multiple filesystems utilize the same
underlying device.

>>>> Metadata IO not throttled - it is owned by the filesystem and hence
>>>> root cgroup.
>>>
>>> Ouch. That kind of defeats the purpose of limiting evil processes'
>>> ability to DOS other processes.
>
> cgroup isn't a security mechanism and has to make active tradeoffs
> between isolation and overhead.  It doesn't provide protection against
> malicious users and in general it's a pretty bad idea to depend on
> cgroup for protection against hostile entities.

I wrote of "evil" processes for simplicity, but 99 times out of 100
it is not intentional "evilness" that makes a process exhaust the I/O
bandwidth of a device shared with other users/containers/VMs. It is
usually just a bug, inconsiderate programming, or inappropriate use
that makes one process write like crazy, making other
users/containers/VMs suffer.

Wherever strict service level guarantees are relevant and applications
need to write to storage, you currently cannot consolidate two or more
applications onto the same physical host, even if they run under
separate users/containers/VMs.

I understand there is no short- or medium-term solution that would
allow isolating processes that write to the same filesystem (because
of the metadata writes). But is it correct to say that VMs, at least,
can be safely isolated by the new "buffered write accounting",
provided the virtual guest cannot cause extensive metadata writes on
the physical host because it only writes into pre-allocated image
files?

If so, we'd have to stay away from user- or container-based isolation
of independently SLA'd applications, but could at least resort to VMs
using image files on a shared filesystem.
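To make the "pre-allocated image file" assumption concrete: reserving
all blocks of a raw image up front means guest writes become in-place
overwrites, so they should not trigger block allocation (and hence
most metadata writes) on the host at run time. A minimal sketch, with
a made-up path and size, assuming the host filesystem supports
fallocate(2) (XFS does):

```shell
#!/bin/sh
# Pre-allocate a raw VM image so guest writes never allocate new
# blocks on the host (illustrative path and size).
IMG=/tmp/guest.img

# Reserve the full 64 MiB up front without writing zeroes.
fallocate -l 64M "$IMG"

# Show that the apparent size is fully backed by allocated blocks.
stat -c 'size=%s blocks=%b' "$IMG"
```

Whether this actually confines all host-side metadata traffic would of
course itself be something for the test scenario to verify.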

Regards,

Lutz Vieweg


Thread overview: 14+ messages
2015-11-23 11:05 Does XFS support cgroup writeback limiting? Lutz Vieweg
2015-11-23 20:26 ` Dave Chinner
2015-11-23 22:08   ` Lutz Vieweg
2015-11-23 23:20     ` Dave Chinner
2015-11-25 18:28       ` Lutz Vieweg
2015-11-25 21:35         ` Dave Chinner
2015-11-29 21:41           ` Lutz Vieweg
2015-11-30 23:44             ` Dave Chinner
2015-12-01  8:38             ` automatic testing of cgroup writeback limiting (was: Re: Does XFS support cgroup writeback limiting?) Martin Steigerwald
2015-12-01 16:38               ` Tejun Heo
2015-12-03  0:18                 ` Lutz Vieweg [this message]
2015-12-03 15:38                   ` automatic testing of cgroup writeback limiting Tejun Heo
2015-12-01 11:01           ` I/O 'owner' DoS probs (was Re: Does XFS support cgroup writeback limiting?) L.A. Walsh
2015-12-01 20:18             ` Dave Chinner
