* Re: Pull request for FS-Cache, including NFS patches
From: Stephen Rothwell @ 2008-12-19 0:05 UTC
To: dhowells
Cc: Andrew Morton, Bernd Schubert, nfsv4, hch, linux-kernel, steved,
linux-fsdevel, rwheeler, linux-next
Hi David,
Given the ongoing discussions around FS-Cache, I have removed it from
linux-next. Please ask me to include it again (if sensible) once some
decision has been reached about its future.
--
Cheers,
Stephen Rothwell sfr@canb.auug.org.au
http://www.canb.auug.org.au/~sfr/
* Re: Pull request for FS-Cache, including NFS patches
From: Stephen Rothwell @ 2008-12-29 3:45 UTC
To: dhowells
Cc: Andrew Morton, Bernd Schubert, nfsv4, hch, linux-kernel, steved,
linux-fsdevel, rwheeler, linux-next, Trond Myklebust
Hi David,
On Fri, 19 Dec 2008 11:05:39 +1100 Stephen Rothwell <sfr@canb.auug.org.au> wrote:
>
> Given the ongoing discussions around FS-Cache, I have removed it from
> linux-next. Please ask me to include it again (if sensible) once some
> decision has been reached about its future.
What was the result of discussions around FS-Cache? I ask because it
reappeared in linux-next today via the nfs tree (merged into that on Dec
24 and 25) ...
--
Cheers,
Stephen Rothwell sfr@canb.auug.org.au
http://www.canb.auug.org.au/~sfr/
* Re: Pull request for FS-Cache, including NFS patches
From: Andrew Morton @ 2008-12-29 4:01 UTC
To: Stephen Rothwell
Cc: Bernd Schubert, nfsv4, steved, linux-kernel, dhowells, linux-next,
linux-fsdevel, rwheeler
On Mon, 29 Dec 2008 14:45:33 +1100 Stephen Rothwell <sfr@canb.auug.org.au> wrote:
> Hi David,
>
> On Fri, 19 Dec 2008 11:05:39 +1100 Stephen Rothwell <sfr@canb.auug.org.au> wrote:
> >
> > Given the ongoing discussions around FS-Cache, I have removed it from
> > linux-next. Please ask me to include it again (if sensible) once some
> > decision has been reached about its future.
>
> What was the result of discussions around FS-Cache?
There was none.
Dan Muntz's question:
Solaris has had CacheFS since ~1995, HPUX had a port of it since
~1997. I'd be interested in evidence of even a small fraction of
Solaris and/or HPUX shops using CacheFS. I am aware of customers who
thought it sounded like a good idea, but ended up ditching it for
various reasons (e.g., CacheFS just adds overhead if you almost
always hit your local mem cache).
was a very, very good one.
Seems that instead of answering it, we've decided to investigate the
fate of those who do not learn from history.
> I ask because it
> reappeared in linux-next today via the nfs tree (merged into that on Dec
> 24 and 25) ...
oh.
* Re: Pull request for FS-Cache, including NFS patches
From: Andrew Morton @ 2008-12-29 4:07 UTC
To: Stephen Rothwell
Cc: Bernd Schubert, nfsv4, steved, linux-kernel, dhowells, linux-next,
linux-fsdevel, rwheeler
On Mon, 29 Dec 2008 14:45:33 +1100 Stephen Rothwell <sfr@canb.auug.org.au> wrote:
> I ask because it
> reappeared in linux-next today via the nfs tree (merged into that on Dec
> 24 and 25) ...
And that of course means that many many 2.6.28 patches which I am
maintaining will need significant rework to apply on top of linux-next,
and then they won't apply to mainline. Or that linux-next will not apply
on top of those patches. Mainly memory management.
Please drop the NFS tree until after -rc1.
Guys, this: http://lkml.org/lkml/2008/12/27/173
* Re: Pull request for FS-Cache, including NFS patches
From: Stephen Rothwell @ 2008-12-29 5:26 UTC
To: Andrew Morton
Cc: Bernd Schubert, nfsv4, steved, linux-kernel, dhowells, linux-next,
linux-fsdevel, rwheeler
Hi Andrew, Trond,
On Sun, 28 Dec 2008 20:07:26 -0800 Andrew Morton <akpm@linux-foundation.org> wrote:
>
> And that of course means that many many 2.6.28 patches which I am
> maintaining will need significant rework to apply on top of linux-next,
> and then they won't apply to mainline. Or that linux-next will not apply
> on top of those patches. Mainly memory management.
>
> Please drop the NFS tree until after -rc1.
OK, it is dropped for now (including today's tree).
--
Cheers,
Stephen Rothwell sfr@canb.auug.org.au
http://www.canb.auug.org.au/~sfr/
_______________________________________________
NFSv4 mailing list
NFSv4@linux-nfs.org
http://linux-nfs.org/cgi-bin/mailman/listinfo/nfsv4
* Re: Pull request for FS-Cache, including NFS patches
From: David Howells @ 2008-12-29 14:26 UTC
To: Stephen Rothwell
Cc: Bernd Schubert, nfsv4, linux-kernel, steved, dhowells, linux-next,
linux-fsdevel, Andrew Morton, rwheeler
Stephen Rothwell <sfr@canb.auug.org.au> wrote:
> What was the result of discussions around FS-Cache? I ask because it
> reappeared in linux-next today via the nfs tree (merged into that on Dec
> 24 and 25) ...
That is the result of discussions during the kernel summit in Portland. The
discussion here is about whether Andrew agrees with adding the patches or not,
as far as I can tell. There are a number of people/companies who want them;
there is Andrew, who does not.
David
* Re: Pull request for FS-Cache, including NFS patches
From: Trond Myklebust @ 2008-12-29 14:30 UTC
To: Andrew Morton
Cc: Stephen Rothwell, Bernd Schubert, nfsv4, linux-kernel, steved,
dhowells, linux-next, linux-fsdevel, rwheeler
On Sun, 2008-12-28 at 20:01 -0800, Andrew Morton wrote:
> On Mon, 29 Dec 2008 14:45:33 +1100 Stephen Rothwell <sfr@canb.auug.org.au> wrote:
>
> > Hi David,
> >
> > On Fri, 19 Dec 2008 11:05:39 +1100 Stephen Rothwell <sfr@canb.auug.org.au> wrote:
> > >
> > > Given the ongoing discussions around FS-Cache, I have removed it from
> > > linux-next. Please ask me to include it again (if sensible) once some
> > > decision has been reached about its future.
> >
> > What was the result of discussions around FS-Cache?
>
> There was none.
>
> Dan Muntz's question:
>
> Solaris has had CacheFS since ~1995, HPUX had a port of it since
> ~1997. I'd be interested in evidence of even a small fraction of
> Solaris and/or HPUX shops using CacheFS. I am aware of customers who
> thought it sounded like a good idea, but ended up ditching it for
> various reasons (e.g., CacheFS just adds overhead if you almost
> always hit your local mem cache).
>
> was a very, very good one.
>
> Seems that instead of answering it, we've decided to investigate the
> fate of those who do not learn from history.
David has given you plenty of arguments for why it helps scale the
server (including specific workloads), has given you numbers validating
his claim, and has presented claims that Red Hat has customers using
cachefs in RHEL-5.
The arguments I've seen against it have so far been:
1. Solaris couldn't sell their implementation
2. It's too big
3. It's intrusive
Argument (1) has so far appeared to be pure FUD. In order to discuss the
lessons of history, you first need to do the work of analysing and
understanding it. I really don't see how it is relevant to Linux
whether or not the Solaris and HPUX cachefs implementations worked out
unless you can demonstrate that their experience shows some fatal
flaw in the arguments and numbers that David presented, and that his
customers are deluded.
If you want examples of permanent caches that clearly do help servers
scale, then look no further than the on-disk caches used in almost all
http browser implementations. Alternatively, as David mentioned, there
are the on-disk caches used by AFS/DFS/coda.
(2) may be valid, but I have yet to see specifics for where you'd like
to see the cachefs code slimmed down. Did I miss them?
(3) was certainly true 3 years ago, when the code was first presented
for review, and so we did a review and critique then. The NFS specific
changes have improved greatly as a result, and as far as I know, the
security folks are happy too. If you're not happy with the parts that
affect the memory management code then, again, it would be useful to see
specifics about what you want changed.
If there is still controversy concerning this, then I can temporarily
remove cachefs from the nfs linux-next branch, but I'm definitely
keeping it in the linux-mm branch until someone gives me a reason for
why it shouldn't be merged in its current state.
Trond
* Re: Pull request for FS-Cache, including NFS patches
From: Ric Wheeler @ 2008-12-29 14:54 UTC
To: Trond Myklebust
Cc: Stephen Rothwell, Bernd Schubert, nfsv4, linux-kernel, steved,
dhowells, linux-next, linux-fsdevel, Andrew Morton
Trond Myklebust wrote:
> On Sun, 2008-12-28 at 20:01 -0800, Andrew Morton wrote:
>
>> On Mon, 29 Dec 2008 14:45:33 +1100 Stephen Rothwell <sfr@canb.auug.org.au> wrote:
>>
>>
>>> Hi David,
>>>
>>> On Fri, 19 Dec 2008 11:05:39 +1100 Stephen Rothwell <sfr@canb.auug.org.au> wrote:
>>>
>>>> Given the ongoing discussions around FS-Cache, I have removed it from
>>>> linux-next. Please ask me to include it again (if sensible) once some
>>>> decision has been reached about its future.
>>>>
>>> What was the result of discussions around FS-Cache?
>>>
>> There was none.
>>
>> Dan Muntz's question:
>>
>> Solaris has had CacheFS since ~1995, HPUX had a port of it since
>> ~1997. I'd be interested in evidence of even a small fraction of
>> Solaris and/or HPUX shops using CacheFS. I am aware of customers who
>> thought it sounded like a good idea, but ended up ditching it for
>> various reasons (e.g., CacheFS just adds overhead if you almost
>> always hit your local mem cache).
>>
>> was a very, very good one.
>>
>> Seems that instead of answering it, we've decided to investigate the
>> fate of those who do not learn from history.
>>
>
> David has given you plenty of arguments for why it helps scale the
> server (including specific workloads), has given you numbers validating
> his claim, and has presented claims that Red Hat has customers using
> cachefs in RHEL-5.
> The arguments I've seen against it have so far been:
>
> 1. Solaris couldn't sell their implementation
> 2. It's too big
> 3. It's intrusive
>
> Argument (1) has so far appeared to be pure FUD. In order to discuss the
> lessons of history, you first need to do the work of analysing and
> understanding it. I really don't see how it is relevant to Linux
> whether or not the Solaris and HPUX cachefs implementations worked out
> unless you can demonstrate that their experience shows some fatal
> flaw in the arguments and numbers that David presented, and that his
> customers are deluded.
> If you want examples of permanent caches that clearly do help servers
> scale, then look no further than the on-disk caches used in almost all
> http browser implementations. Alternatively, as David mentioned, there
> are the on-disk caches used by AFS/DFS/coda.
>
I can add that our Red Hat customers who tried the cachefs preview did
find it useful for their workloads (and, by the way, they also use the
Solaris cachefs on Solaris boxes, if I remember correctly). They have
been nagging me and others at Red Hat about getting it into a supported
state for quite a while :-)
As you point out, this is all about getting more clients to be driven by
a set of NFS servers.
Regards,
Ric
> (2) may be valid, but I have yet to see specifics for where you'd like
> to see the cachefs code slimmed down. Did I miss them?
>
> (3) was certainly true 3 years ago, when the code was first presented
> for review, and so we did a review and critique then. The NFS specific
> changes have improved greatly as a result, and as far as I know, the
> security folks are happy too. If you're not happy with the parts that
> affect the memory management code then, again, it would be useful to see
> specifics about what you want changed.
>
> If there is still controversy concerning this, then I can temporarily
> remove cachefs from the nfs linux-next branch, but I'm definitely
> keeping it in the linux-mm branch until someone gives me a reason for
> why it shouldn't be merged in its current state.
>
> Trond
>
>
* Re: Pull request for FS-Cache, including NFS patches
From: David Howells @ 2008-12-29 15:01 UTC
To: Andrew Morton
Cc: Stephen Rothwell, Bernd Schubert, nfsv4, linux-kernel, steved,
dhowells, linux-next, linux-fsdevel, rwheeler
Andrew Morton <akpm@linux-foundation.org> wrote:
> > What was the result of discussions around FS-Cache?
>
> There was none.
I disagree with your assertion that there was no result. Various people,
besides myself, have weighed in with situations where FS-Cache is or may be
useful. You've been presented with benchmarks showing that it can make a
difference.
However, *you* are the antagonist, as strictly defined in the dictionary; we
were trying to convince *you*, so a result has to come from *you*. I feel
that you are completely against it and that we've no hope of shifting you.
> Dan Muntz's question:
>
> Solaris has had CacheFS since ~1995, HPUX had a port of it since
> ~1997. I'd be interested in evidence of even a small fraction of
> Solaris and/or HPUX shops using CacheFS. I am aware of customers who
> thought it sounded like a good idea, but ended up ditching it for
> various reasons (e.g., CacheFS just adds overhead if you almost
> always hit your local mem cache).
>
> was a very, very good one.
And to a large extent irrelevant. Yes, we know caching adds overhead; I've
never tried to pretend otherwise. It's an exercise in compromise. You don't
just go and slap a cache on everything. There *are* situations in which a
cache will help. We have customers who know about them and are willing to
live with the overhead.
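To make the compromise concrete, here is a toy model of that trade-off. The latencies and hit rates are illustrative round numbers I picked for this sketch, not measurements from any benchmark; the only point is that a disk cache helps or hurts purely as a function of workload:

```python
# Toy model of the caching trade-off: a persistent disk cache only pays
# off when reads actually fall through the in-memory page cache.
# All latencies are illustrative round numbers (seconds), not measurements.

def avg_read_latency(mem_hit, disk_hit, t_mem=1e-7, t_disk=5e-3, t_net=5e-2,
                     use_disk_cache=True):
    """Expected latency of one read.

    mem_hit:  fraction of reads served from the page cache
    disk_hit: fraction of the *remaining* reads served from the disk cache
              (ignored when use_disk_cache is False)
    """
    miss = 1.0 - mem_hit
    if not use_disk_cache:
        return mem_hit * t_mem + miss * t_net
    # A disk-cache lookup/fill happens on every page-cache miss, hit or not.
    return (mem_hit * t_mem
            + miss * disk_hit * t_disk
            + miss * (1.0 - disk_hit) * (t_disk + t_net))

# Working set fits in RAM: page-cache misses are mostly first-touch reads
# that miss the disk cache too, so the disk cache is pure added cost.
hot = avg_read_latency(0.99, 0.0)
hot_plain = avg_read_latency(0.99, 0.0, use_disk_cache=False)

# Large read-mostly working set over a slow link: the disk cache wins.
cold = avg_read_latency(0.10, 0.9)
cold_plain = avg_read_latency(0.10, 0.9, use_disk_cache=False)

print(hot > hot_plain, cold < cold_plain)  # → True True
```

Which regime a site is in depends on its workload, which is exactly why this is a deployment decision rather than something to decide for everyone.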
What I have done is to ensure that, even if caching is compiled in, the
overhead is minimal if there is _no_ cache present. That is requirement #1 on
my list.
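The shape of that requirement is simple enough to sketch. This is a toy Python model, not the kernel code, and the names (netfs_read, the dict-as-cookie) are invented for illustration:

```python
# Toy model only: the real FS-Cache interface is kernel C; these names
# are invented for this sketch.

def netfs_read(cookie, key, read_from_server):
    """Read one block, via the cache when one is bound to the file."""
    if cookie is None:
        # Requirement #1: with no cache configured, the only extra cost
        # on the read path is this single test before the network read.
        return read_from_server(key)
    data = cookie.get(key)               # hypothetical cache lookup
    if data is None:
        data = read_from_server(key)
        cookie[key] = data               # populate the cache on a miss
    return data

cache = {}                               # stand-in for an on-disk cache
fetches = []
def from_server(key):
    fetches.append(key)
    return "data:" + key

netfs_read(None, "a", from_server)       # uncached: always hits the server
netfs_read(cache, "a", from_server)      # cache miss: server + cache fill
netfs_read(cache, "a", from_server)      # cache hit: no server round trip
print(len(fetches))                      # → 2
```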
Assuming I understand what he said correctly, I've avoided the main issue
listed by Dan because I don't do as Solaris does and interpolate the cache
between the user and NFS. Of course, that probably buys me other issues (FS
design is an exercise in compromise too).
> Seems that instead of answering it, we've decided to investigate the
> fate of those who do not learn from history.
Sigh.
The main point is that caching _is_ useful, even with its drawbacks. Dan may
be aware of customers of Sun/HP who thought caching sounded like a good idea,
but then ended up ditching it. I can well believe it. But I am also aware of
customers of Red Hat who are actively using the caching we put in RHEL-5 and
customers who really want caching available in future RHEL and Fedora versions
for various reasons.
To sum up:
(1) Overhead is minimal if there is no cache.
(2) Benchmarks show that the cache can be effective.
(3) People are already using it and finding it useful.
(4) There are people who want it for various projects.
(5) The use of a cache does not automatically buy you an improvement in
performance: it's a matter of compromise.
(6) The performance improvement may be in the network or the servers, not the
client that is actually doing the caching.
David
* Re: Pull request for FS-Cache, including NFS patches
From: David Howells @ 2008-12-29 15:04 UTC
To: Andrew Morton
Cc: dhowells, Stephen Rothwell, Bernd Schubert, nfsv4, hch,
linux-kernel, steved, linux-fsdevel, rwheeler, linux-next,
Trond Myklebust
Andrew Morton <akpm@linux-foundation.org> wrote:
> And that of course means that many many 2.6.28 patches which I am
> maintaining will need significant rework to apply on top of linux-next,
> and then they won't apply to mainline. Or that linux-next will not apply
> on top of those patches. Mainly memory management.
Significant rework to many many patches? The FS-Cache patches don't have all
that much impact outside of their own directories, AFS and NFS.
> Please drop the NFS tree until after -rc1.
>
> Guys, this: http://lkml.org/lkml/2008/12/27/173
Okay, that's a reasonable request.
David
* RE: Pull request for FS-Cache, including NFS patches
From: Muntz, Daniel @ 2008-12-29 23:05 UTC
To: Trond Myklebust, Andrew Morton
Cc: Stephen Rothwell, Bernd Schubert, nfsv4, linux-kernel, steved,
dhowells, linux-next, linux-fsdevel, rwheeler
Before throwing the 'FUD' acronym around, maybe you should re-read the
details. My point was that there were few users of cachefs even when
the technology had the potential for greater benefit (slower networks,
less powerful servers, smaller memory caches). Obviously cachefs can
improve performance--it's simply a function of workload and the
assumptions made about server/disk/network bandwidth. However, I would
expect the real benefits and real beneficiaries to be fewer than in the
past. HOWEVER^2 I did provide some argument(s) in favor of adding
cachefs, and look forward to extensions to support delayed write,
offline operation, and NFSv4 support with real consistency checking (as
long as I don't have to take the customer calls ;-). BTW,
animation/video shops were one group that did benefit, and I imagine
they still could today (the one I had in mind did work across Britain,
the US, and Asia and relied on cachefs for overcoming slow network
connections). Wonder if the same company is a RH customer...
All the comparisons to HTTP browser implementations are, imho, absurd.
It's fine to keep a bunch of http data around on disk because a) it's RO
data, b) correctness is not terribly important, and c) a human is
generally the consumer and can manually request non-cached data if
things look wonky. It is a trivial case of caching.
As for security, look at what MIT had to do to prevent local disk
caching from breaking the security guarantees of AFS.
Customers (deluded or otherwise) are still customers. No one is forced
to compile it into their kernel. Ship it.
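And the opt-in footprint is small. Sketching from the FS-Cache and cachefilesd documentation (treat the exact config option and mount option spellings as assumptions, since the patches are still moving), enabling a cache on a client looks roughly like:

```shell
# Kernel side: CONFIG_FSCACHE, CONFIG_CACHEFILES and CONFIG_NFS_FSCACHE
# enabled in the client kernel; leave them off and nothing changes.

# Give the cache some backing store and start the cache manager.
# /etc/cachefilesd.conf (excerpt):
#   dir /var/cache/fscache
#   tag mycache
mkdir -p /var/cache/fscache
cachefilesd

# Opt an NFS mount into the cache with the 'fsc' mount option;
# mounts without it are untouched.
mount -t nfs -o fsc server:/export /mnt/export
```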
-Dan
* RE: Pull request for FS-Cache, including NFS patches
From: Trond Myklebust @ 2008-12-30 18:44 UTC
To: Muntz, Daniel
Cc: Stephen Rothwell, Bernd Schubert, nfsv4, steved, linux-kernel,
dhowells, linux-next, linux-fsdevel, Andrew Morton, rwheeler
On Mon, 2008-12-29 at 15:05 -0800, Muntz, Daniel wrote:
> Before throwing the 'FUD' acronym around, maybe you should re-read the
> details. My point was that there were few users of cachefs even when
> the technology had the potential for greater benefit (slower networks,
> less powerful servers, smaller memory caches). Obviously cachefs can
> improve performance--it's simply a function of workload and the
> assumptions made about server/disk/network bandwidth. However, I would
> expect the real benefits and real beneficiaries to be fewer than in the
> past. HOWEVER^2 I did provide some argument(s) in favor of adding
> cachefs, and look forward to extensions to support delayed write,
> offline operation, and NFSv4 support with real consistency checking (as
> long as I don't have to take the customer calls ;-). BTW,
> animation/video shops were one group that did benefit, and I imagine
> they still could today (the one I had in mind did work across Britain,
> the US, and Asia and relied on cachefs for overcoming slow network
> connections). Wonder if the same company is a RH customer...
I did read your argument. My point is that although the argument sounds
reasonable, it ignores the fact that the customer bases are completely
different. The people asking for cachefs on Linux typically run a
cluster of 2000+ clients all accessing the same read-only data from just
a handful of servers. They're primarily looking to improve the
performance and stability of the _servers_, since those are the single
point of failure of the cluster.
As far as I know, historically there has never been a market for 2000+
HP-UX, or even Solaris-based, clusters, and unless the HP and Sun product
plans change drastically, simple economics dictates that there never will
be such a market, whether or not they have cachefs support.
OpenSolaris is a different kettle of fish since it has cachefs, and does
run on COTS hardware, but there are other reasons why that hasn't yet
penetrated the HPC market.
> All the comparisons to HTTP browser implementations are, imho, absurd.
> It's fine to keep a bunch of http data around on disk because a) it's RO
> data, b) correctness is not terribly important, and c) a human is
> generally the consumer and can manually request non-cached data if
> things look wonky. It is a trivial case of caching.
See above. The majority of people I'm aware of who have been asking for
this are interested mainly in improving read-only workloads for data
that changes infrequently. Correctness tends to be important, but the
requirements are no different from those that apply to the page cache.
You mentioned the animation industry: they are a prime example of an
industry that satisfies (a), (b), and (c). Ditto the oil and gas
exploration industry, as well as pretty much all scientific computing,
to mention only a few examples...
> As for security, look at what MIT had to do to prevent local disk
> caching from breaking the security guarantees of AFS.
See what David has added to the LSM code to provide the same guarantees
for cachefs...
Trond
* RE: Pull request for FS-Cache, including NFS patches
From: Muntz, Daniel @ 2008-12-30 22:15 UTC
To: Trond Myklebust
Cc: Andrew Morton, Stephen Rothwell, Bernd Schubert, nfsv4,
linux-kernel, steved, dhowells, linux-next, linux-fsdevel,
rwheeler
>> As for security, look at what MIT had to do to prevent local disk
>> caching from breaking the security guarantees of AFS.
>
>See what David has added to the LSM code to provide the same guarantees
for cachefs...
>
>Trond
Unless it (at least) leverages TPM, the issues I had in mind can't
really be addressed in code. One requirement is to prevent a local root
user from accessing fs information without appropriate permissions.
This leads to unwieldy requirements such as allowing only one user on a
machine at a time, blowing away the cache on logout, validating (e.g.,
refreshing) the kernel on each boot, etc. Sure, some applications won't
care, but you're also potentially opening holes that users may not
consider.
-Dan
-----Original Message-----
From: Trond Myklebust [mailto:trond.myklebust@fys.uio.no]
Sent: Tuesday, December 30, 2008 10:45 AM
To: Muntz, Daniel
Cc: Andrew Morton; Stephen Rothwell; Bernd Schubert;
nfsv4@linux-nfs.org; linux-kernel@vger.kernel.org; steved@redhat.com;
dhowells@redhat.com; linux-next@vger.kernel.org;
linux-fsdevel@vger.kernel.org; rwheeler@redhat.com
Subject: RE: Pull request for FS-Cache, including NFS patches
On Mon, 2008-12-29 at 15:05 -0800, Muntz, Daniel wrote:
> Before throwing the 'FUD' acronym around, maybe you should re-read the
> details. My point was that there were few users of cachefs even when
> the technology had the potential for greater benefit (slower networks,
> less powerful servers, smaller memory caches). Obviously cachefs can
> improve performance--it's simply a function of workload and the
> assumptions made about server/disk/network bandwidth. However, I
> would expect the real benefits and real beneficiaries to be fewer than
> in the past. HOWEVER^2 I did provide some argument(s) in favor of
> adding cachefs, and look forward to extensions to support delayed
> write, offline operation, and NFSv4 support with real consistency
> checking (as long as I don't have to take the customer calls ;-).
> BTW, animation/video shops were one group that did benefit, and I
> imagine they still could today (the one I had in mind did work across
> Britain, the US, and Asia and relied on cachefs for overcoming slow
> network connections). Wonder if the same company is a RH customer...
I did read your argument. My point is that although the argument sounds
reasonable, it ignores the fact that the customer bases are completely
different. The people asking for cachefs on Linux typically run a
cluster of 2000+ clients all accessing the same read-only data from just
a handful of servers. They're primarily looking to improve the
performance and stability of the _servers_, since those are the single
point of failure of the cluster.
As far as I know, historically there has never been a market for 2000+
HP-UX, or even Solaris based clusters, and unless the HP and Sun product
plans change drastically, then simple economics dictates that nor will
there ever be such a market, whether or not they have cachefs support.
OpenSolaris is a different kettle of fish since it has cachefs, and does
run on COTS hardware, but there are other reasons why that hasn't yet
penetrated the HPC market.
> All the comparisons to HTTP browser implementations are, imho, absurd.
> It's fine to keep a bunch of http data around on disk because a) it's
> RO data, b) correctness is not terribly important, and c) a human is
> generally the consumer and can manually request non-cached data if
> things look wonky. It is a trivial case of caching.
See above. The majority of people I'm aware of that have been asking for
this are interested mainly in improving read-only workloads for data
that changes infrequently. Correctness tends to be important, but the
requirements are no different from those that apply to the page cache.
You mentioned the animation industry: they are a prime example of an
industry that satisfies (a), (b), and (c). Ditto the oil and gas
exploration industry, as well as pretty much all scientific computing,
to mention only a few examples...
> As for security, look at what MIT had to do to prevent local disk
> caching from breaking the security guarantees of AFS.
See what David has added to the LSM code to provide the same guarantees
for cachefs...
Trond
^ permalink raw reply [flat|nested] 21+ messages in thread
* RE: Pull request for FS-Cache, including NFS patches
2008-12-30 22:15 ` Muntz, Daniel
@ 2008-12-30 22:36 ` Trond Myklebust
2008-12-30 23:00 ` Muntz, Daniel
2008-12-31 9:49 ` Arjan van de Ven
1 sibling, 1 reply; 21+ messages in thread
From: Trond Myklebust @ 2008-12-30 22:36 UTC (permalink / raw)
To: Muntz, Daniel
Cc: Stephen Rothwell, Bernd Schubert, nfsv4, steved, linux-kernel,
dhowells, linux-next, linux-fsdevel, Andrew Morton, rwheeler
On Tue, 2008-12-30 at 14:15 -0800, Muntz, Daniel wrote:
> >> As for security, look at what MIT had to do to prevent local disk
> >> caching from breaking the security guarantees of AFS.
> >
> >See what David has added to the LSM code to provide the same guarantees
> for cachefs...
> >
> >Trond
>
> Unless it (at least) leverages TPM, the issues I had in mind can't
> really be addressed in code. One requirement is to prevent a local root
> user from accessing fs information without appropriate permissions.
> This leads to unwieldy requirements such as allowing only one user on a
> machine at a time, blowing away the cache on logout, validating (e.g.,
> refreshing) the kernel on each boot, etc. Sure, some applications won't
> care, but you're also potentially opening holes that users may not
> consider.
You can't prevent a local root user from accessing cached data: that's
true with or without cachefs. root can typically access the data
using /dev/kmem, swap, intercepting tty traffic, spoofing user creds,...
If root can't be trusted, then find another machine.
The worry is rather that privileged daemons may be tricked into
revealing said data to unprivileged users, or that unprivileged users
may attempt to read data from files to which they have no rights using
the cachefs itself. That is a problem that is addressable by means of
LSM, and is what David has attempted to solve.
Trond
* RE: Pull request for FS-Cache, including NFS patches
2008-12-30 22:36 ` Trond Myklebust
@ 2008-12-30 23:00 ` Muntz, Daniel
2008-12-30 23:17 ` Trond Myklebust
0 siblings, 1 reply; 21+ messages in thread
From: Muntz, Daniel @ 2008-12-30 23:00 UTC (permalink / raw)
To: Trond Myklebust
Cc: Stephen Rothwell, Bernd Schubert, nfsv4, steved, linux-kernel,
dhowells, linux-next, linux-fsdevel, Andrew Morton, rwheeler
Yes, and if you have a single user on the machine at a time (with cache
flushed in between, kernel refreshed), root can read /dev/kmem, swap,
intercept traffic and read cachefs data to its heart's content--hence,
those requirements.
-Dan
-----Original Message-----
From: Trond Myklebust [mailto:trond.myklebust@fys.uio.no]
Sent: Tuesday, December 30, 2008 2:36 PM
To: Muntz, Daniel
Cc: Andrew Morton; Stephen Rothwell; Bernd Schubert;
nfsv4@linux-nfs.org; linux-kernel@vger.kernel.org; steved@redhat.com;
dhowells@redhat.com; linux-next@vger.kernel.org;
linux-fsdevel@vger.kernel.org; rwheeler@redhat.com
Subject: RE: Pull request for FS-Cache, including NFS patches
On Tue, 2008-12-30 at 14:15 -0800, Muntz, Daniel wrote:
> >> As for security, look at what MIT had to do to prevent local disk
> >> caching from breaking the security guarantees of AFS.
> >
> >See what David has added to the LSM code to provide the same
> >guarantees
> for cachefs...
> >
> >Trond
>
> Unless it (at least) leverages TPM, the issues I had in mind can't
> really be addressed in code. One requirement is to prevent a local
> root user from accessing fs information without appropriate
permissions.
> This leads to unwieldy requirements such as allowing only one user on
> a machine at a time, blowing away the cache on logout, validating
> (e.g.,
> refreshing) the kernel on each boot, etc. Sure, some applications
> won't care, but you're also potentially opening holes that users may
> not consider.
You can't prevent a local root user from accessing cached data: that's
true with or without cachefs. root can typically access the data using
/dev/kmem, swap, intercepting tty traffic, spoofing user creds,...
If root can't be trusted, then find another machine.
The worry is rather that privileged daemons may be tricked into
revealing said data to unprivileged users, or that unprivileged users
may attempt to read data from files to which they have no rights using
the cachefs itself. That is a problem that is addressable by means of
LSM, and is what David has attempted to solve.
Trond
* RE: Pull request for FS-Cache, including NFS patches
2008-12-30 23:00 ` Muntz, Daniel
@ 2008-12-30 23:17 ` Trond Myklebust
2008-12-31 11:15 ` David Howells
2009-01-01 4:11 ` Muntz, Daniel
0 siblings, 2 replies; 21+ messages in thread
From: Trond Myklebust @ 2008-12-30 23:17 UTC (permalink / raw)
To: Muntz, Daniel
Cc: Stephen Rothwell, Bernd Schubert, nfsv4, steved, linux-kernel,
dhowells, linux-next, linux-fsdevel, Andrew Morton, rwheeler
On Tue, 2008-12-30 at 15:00 -0800, Muntz, Daniel wrote:
> Yes, and if you have a single user on the machine at a time (with cache
> flushed in between, kernel refreshed), root can read /dev/kmem, swap,
> intercept traffic and read cachefs data to its heart's content--hence,
> those requirements.
Unless you _are_ root and can check every executable, after presumably
rebooting into your own trusted kernel, then those requirements won't
mean squat. If you're that paranoid, then you will presumably also be
using a cryptfs-encrypted partition for cachefs, which you unmount when
you're not logged in.
That said, most cluster environments will tend to put most of their
security resources into keeping untrusted users out altogether. The
client nodes tend to be a homogeneous lot with presumably only a trusted
few sysadmins...
Trond
> -----Original Message-----
> From: Trond Myklebust [mailto:trond.myklebust@fys.uio.no]
> Sent: Tuesday, December 30, 2008 2:36 PM
> To: Muntz, Daniel
> Cc: Andrew Morton; Stephen Rothwell; Bernd Schubert;
> nfsv4@linux-nfs.org; linux-kernel@vger.kernel.org; steved@redhat.com;
> dhowells@redhat.com; linux-next@vger.kernel.org;
> linux-fsdevel@vger.kernel.org; rwheeler@redhat.com
> Subject: RE: Pull request for FS-Cache, including NFS patches
>
> On Tue, 2008-12-30 at 14:15 -0800, Muntz, Daniel wrote:
> > >> As for security, look at what MIT had to do to prevent local disk
> > >> caching from breaking the security guarantees of AFS.
> > >
> > >See what David has added to the LSM code to provide the same
> > >guarantees
> > for cachefs...
> > >
> > >Trond
> >
> > Unless it (at least) leverages TPM, the issues I had in mind can't
> > really be addressed in code. One requirement is to prevent a local
> > root user from accessing fs information without appropriate
> permissions.
> > This leads to unwieldy requirements such as allowing only one user on
>
> > a machine at a time, blowing away the cache on logout, validating
> > (e.g.,
> > refreshing) the kernel on each boot, etc. Sure, some applications
> > won't care, but you're also potentially opening holes that users may
> > not consider.
>
> You can't prevent a local root user from accessing cached data: that's
> true with or without cachefs. root can typically access the data using
> /dev/kmem, swap, intercepting tty traffic, spoofing user creds,...
> If root can't be trusted, then find another machine.
>
> The worry is rather that privileged daemons may be tricked into
> revealing said data to unprivileged users, or that unprivileged users
> may attempt to read data from files to which they have no rights using
> the cachefs itself. That is a problem that is addressable by means of
> LSM, and is what David has attempted to solve.
>
> Trond
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
* Re: Pull request for FS-Cache, including NFS patches
2008-12-30 22:15 ` Muntz, Daniel
2008-12-30 22:36 ` Trond Myklebust
@ 2008-12-31 9:49 ` Arjan van de Ven
1 sibling, 0 replies; 21+ messages in thread
From: Arjan van de Ven @ 2008-12-31 9:49 UTC (permalink / raw)
To: Muntz, Daniel
Cc: Trond Myklebust, Andrew Morton, Stephen Rothwell, Bernd Schubert,
nfsv4, linux-kernel, steved, dhowells, linux-next, linux-fsdevel,
rwheeler
On Tue, 30 Dec 2008 14:15:42 -0800
"Muntz, Daniel" <Dan.Muntz@netapp.com> wrote:
> >> As for security, look at what MIT had to do to prevent local disk
> >> caching from breaking the security guarantees of AFS.
> >
> >See what David has added to the LSM code to provide the same
> >guarantees
> for cachefs...
> >
> >Trond
>
> Unless it (at least) leverages TPM, the issues I had in mind can't
> really be addressed in code. One requirement is to prevent a local
> root user from accessing fs information without appropriate
> permissions.
we're talking about NFS here (but also local CDs and potentially CIFS
etc). The level of security you're talking about is going to be the
same before or after cachefs.... very little against local root.
Frankly, any networking filesystem just trusts that the connection is
authenticated... eg there is SOMEONE on the machine who has the right
credentials.
Cachefs doesn't change that; it still validates with the server before
giving userspace the data.
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org
* Re: Pull request for FS-Cache, including NFS patches
2008-12-30 23:17 ` Trond Myklebust
@ 2008-12-31 11:15 ` David Howells
2009-01-01 4:11 ` Muntz, Daniel
1 sibling, 0 replies; 21+ messages in thread
From: David Howells @ 2008-12-31 11:15 UTC (permalink / raw)
To: Trond Myklebust
Cc: dhowells, Muntz, Daniel, Andrew Morton, Stephen Rothwell,
Bernd Schubert, nfsv4, linux-kernel, steved, linux-next,
linux-fsdevel, rwheeler
Trond Myklebust <trond.myklebust@fys.uio.no> wrote:
> Unless you _are_ root and can check every executable, after presumably
> rebooting into your own trusted kernel, then those requirements won't
> mean squat. If you're that paranoid, then you will presumably also be
> using a cryptfs-encrypted partition for cachefs, which you unmount when
> you're not logged in.
Actually... Cachefiles could fairly trivially add encryption. It would have
to be simple encryption but you wouldn't have to store any keys locally.
Currently cachefiles _copies_ data between the backingfs and the netfs pages
because the direct-IO code is only usable to/from userspace. Rather than
copying, encrypt/decrypt could be called.
A key could be constructed at the point a cache file is looked up. It could
be constructed from the coherency data. In the case of NFS that would be
mtime, ctime, isize and change_attr. The coherency data would be encrypted
with this key and then stored on disk, as would the contents of the file.
It might be possible to chuck the cache key (NFS fh) into the encryption key
too and also encrypt the cache key before it is turned into a filename, though
we'd have to be careful to avoid collisions if each filename is encrypted with
a different key.
We'd probably have to be careful that coherency data decrypted with the
wrong key can't show up as valid-looking data for the wrong file.
The nice thing about this is that the key need not be retained locally since
it's entirely constructed from data fetched from the netfs.
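[Editor's note: the key-derivation scheme David describes, a key constructed entirely from the coherency data fetched from the server so that nothing need be stored locally, can be sketched roughly as below. The field values, the file handle, and the use of SHA-256 with a toy XOR keystream are illustrative assumptions only, not the actual cachefiles code, which would use a real kernel cipher.]

```python
import hashlib

def derive_cache_key(mtime, ctime, isize, change_attr, fh=b""):
    """Derive an encryption key purely from NFS coherency data
    (mtime, ctime, isize, change_attr) plus, optionally, the cache
    key (NFS file handle). No key material is stored locally: a
    client that fetches the same coherency data from the server
    reconstructs the same key."""
    h = hashlib.sha256()
    for field in (mtime, ctime, isize, change_attr):
        h.update(str(field).encode())
    h.update(fh)
    return h.digest()

def xor_keystream(key, data):
    """Toy counter-mode keystream standing in for a real cipher:
    encrypting and decrypting are the same operation."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# The same coherency data yields the same key; if the file changed
# on the server (here, mtime bumped), the derived key differs and
# the stale cached ciphertext decrypts to garbage.
key = derive_cache_key(1230680000, 1230680000, 4096, 7, fh=b"example-fh")
ct = xor_keystream(key, b"cached file contents")
assert xor_keystream(key, ct) == b"cached file contents"
stale = derive_cache_key(1230680001, 1230680000, 4096, 7, fh=b"example-fh")
assert xor_keystream(stale, ct) != b"cached file contents"
```

This also illustrates the collision concern above: if each filename were encrypted under its own per-file key, two different cache keys could in principle encrypt to the same on-disk name, so the scheme would need a collision check or a keyed-but-shared filename transform.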
David
* RE: Pull request for FS-Cache, including NFS patches
2008-12-30 23:17 ` Trond Myklebust
2008-12-31 11:15 ` David Howells
@ 2009-01-01 4:11 ` Muntz, Daniel
2009-01-01 8:09 ` Arjan van de Ven
1 sibling, 1 reply; 21+ messages in thread
From: Muntz, Daniel @ 2009-01-01 4:11 UTC (permalink / raw)
To: Trond Myklebust
Cc: Andrew Morton, Stephen Rothwell, Bernd Schubert, nfsv4,
linux-kernel, steved, dhowells, linux-next, linux-fsdevel,
rwheeler
Sure, trusted kernel and trusted executables, but it's easier than it
sounds. If you start with a "clean" system, you don't need to verify
executables _if_ they're coming from the secured file server (by
induction: if you started out secure, the executables on the file server
will remain secure). You simply can't trust the local disk from one
user to the next. Following the protocol, a student can log into a
machine, su to do their OS homework, but not compromise the security of
the distributed file system.
If I can su while another user is logged in, or the kernel/cmds are not
validated between users, cryptfs isn't safe either.
If you're following the protocol, it doesn't even matter if a bad guy
("untrusted user"?) gets root on the client--they still can't gain
inappropriate access to the file server. OTOH, if my security plan is
simply to not allow root access to untrusted users, history says I'm
going to lose.
-Dan
-----Original Message-----
From: Trond Myklebust [mailto:trond.myklebust@fys.uio.no]
Sent: Tuesday, December 30, 2008 3:18 PM
To: Muntz, Daniel
Cc: Andrew Morton; Stephen Rothwell; Bernd Schubert;
nfsv4@linux-nfs.org; linux-kernel@vger.kernel.org; steved@redhat.com;
dhowells@redhat.com; linux-next@vger.kernel.org;
linux-fsdevel@vger.kernel.org; rwheeler@redhat.com
Subject: RE: Pull request for FS-Cache, including NFS patches
On Tue, 2008-12-30 at 15:00 -0800, Muntz, Daniel wrote:
> Yes, and if you have a single user on the machine at a time (with
> cache flushed in between, kernel refreshed), root can read /dev/kmem,
> swap, intercept traffic and read cachefs data to its heart's
> content--hence, those requirements.
Unless you _are_ root and can check every executable, after presumably
rebooting into your own trusted kernel, then those requirements won't
mean squat. If you're that paranoid, then you will presumably also be
using a cryptfs-encrypted partition for cachefs, which you unmount when
you're not logged in.
That said, most cluster environments will tend to put most of their
security resources into keeping untrusted users out altogether. The
client nodes tend to be a homogeneous lot with presumably only a trusted
few sysadmins...
Trond
> -----Original Message-----
> From: Trond Myklebust [mailto:trond.myklebust@fys.uio.no]
> Sent: Tuesday, December 30, 2008 2:36 PM
> To: Muntz, Daniel
> Cc: Andrew Morton; Stephen Rothwell; Bernd Schubert;
> nfsv4@linux-nfs.org; linux-kernel@vger.kernel.org; steved@redhat.com;
> dhowells@redhat.com; linux-next@vger.kernel.org;
> linux-fsdevel@vger.kernel.org; rwheeler@redhat.com
> Subject: RE: Pull request for FS-Cache, including NFS patches
>
> On Tue, 2008-12-30 at 14:15 -0800, Muntz, Daniel wrote:
> > >> As for security, look at what MIT had to do to prevent local disk
> > >> caching from breaking the security guarantees of AFS.
> > >
> > >See what David has added to the LSM code to provide the same
> > >guarantees
> > for cachefs...
> > >
> > >Trond
> >
> > Unless it (at least) leverages TPM, the issues I had in mind can't
> > really be addressed in code. One requirement is to prevent a local
> > root user from accessing fs information without appropriate
> permissions.
> > This leads to unwieldy requirements such as allowing only one user
> > on
>
> > a machine at a time, blowing away the cache on logout, validating
> > (e.g.,
> > refreshing) the kernel on each boot, etc. Sure, some applications
> > won't care, but you're also potentially opening holes that users may
> > not consider.
>
> You can't prevent a local root user from accessing cached data: that's
> true with or without cachefs. root can typically access the data using
> /dev/kmem, swap, intercepting tty traffic, spoofing user creds,...
> If root can't be trusted, then find another machine.
>
> The worry is rather that privileged daemons may be tricked into
> revealing said data to unprivileged users, or that unprivileged users
> may attempt to read data from files to which they have no rights using
> the cachefs itself. That is a problem that is addressable by means of
> LSM, and is what David has attempted to solve.
>
> Trond
>
* Re: Pull request for FS-Cache, including NFS patches
2009-01-01 4:11 ` Muntz, Daniel
@ 2009-01-01 8:09 ` Arjan van de Ven
2009-01-01 18:40 ` Kyle Moffett
0 siblings, 1 reply; 21+ messages in thread
From: Arjan van de Ven @ 2009-01-01 8:09 UTC (permalink / raw)
To: Muntz, Daniel
Cc: Trond Myklebust, Andrew Morton, Stephen Rothwell, Bernd Schubert,
nfsv4, linux-kernel, steved, dhowells, linux-next, linux-fsdevel,
rwheeler
On Wed, 31 Dec 2008 20:11:13 -0800
"Muntz, Daniel" <Dan.Muntz@netapp.com> wrote:
please don't top post.
> Sure, trusted kernel and trusted executables, but it's easier than it
> sounds. If you start with a "clean" system, you don't need to verify
> executables _if_ they're coming from the secured file server (by
> induction: if you started out secure, the executables on the file
> server will remain secure). You simply can't trust the local disk
> from one user to the next. Following the protocol, a student can log
> into a machine, su to do their OS homework, but not compromise the
> security of the distributed file system.
>
> If I can su while another user is logged in, or the kernel/cmds are
> not validated between users, cryptfs isn't safe either.
>
> If you're following the protocol, it doesn't even matter if a bad guy
> ("untrusted user"?) gets root on the client--they still can't gain
> inappropriate access to the file server. OTOH, if my security plan is
> simply to not allow root access to untrusted users, history says I'm
> going to lose.
if you have a user, history says you're going to lose.
you can make your system as secure as you want, with physical access
all bets are off.
keyboard sniffer.. easy.
special dimms that mirror data... not even all THAT hard, just takes a
bit of cash.
running the user in a VM without him noticing.. not too hard either.
etc.
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org
* Re: Pull request for FS-Cache, including NFS patches
2009-01-01 8:09 ` Arjan van de Ven
@ 2009-01-01 18:40 ` Kyle Moffett
0 siblings, 0 replies; 21+ messages in thread
From: Kyle Moffett @ 2009-01-01 18:40 UTC (permalink / raw)
To: Arjan van de Ven
Cc: Muntz, Daniel, Trond Myklebust, Andrew Morton, Stephen Rothwell,
Bernd Schubert, nfsv4, linux-kernel, steved, dhowells, linux-next,
linux-fsdevel, rwheeler
On Thu, Jan 1, 2009 at 3:09 AM, Arjan van de Ven <arjan@infradead.org> wrote:
> On Wed, 31 Dec 2008 20:11:13 -0800 "Muntz, Daniel" <Dan.Muntz@netapp.com> wrote:
>> If you're following the protocol, it doesn't even matter if a bad guy
>> ("untrusted user"?) gets root on the client--they still can't gain
>> inappropriate access to the file server. OTOH, if my security plan is
>> simply to not allow root access to untrusted users, history says I'm
>> going to lose.
>
> if you have a user, history says you're going to lose.
>
> you can make your system as secure as you want, with physical access
> all bets are off.
Yeah... this is precisely the reason that the security-test-plan and
system-design-document for any really security sensitive system starts
with:
[ ] The system is in a locked rack
[ ] The rack is in a locked server room with detailed access logs
[ ] The server room is in a locked and secured building with 24-hour
camera surveillance and armed guards
I've spent a little time looking into the security guarantees provided
by DAC and by the FS-Cache LSM hooks, and it is possible to reasonably
guarantee that no *REMOTE* user will be able to compromise the
contents of the cache using a combination of DAC (file permissions,
etc) and MAC (SELinux, etc) controls. As previously mentioned, local
users (with physical hardware access) are an entirely different story.
As far as performance considerations for the merge... FS-cache on
flash-based storage also has very different performance tradeoffs from
traditional rotating media. Specifically I have some sample 32GB
SATA-based flash media here with ~230Mbyte/sec sustained read and
~200Mbyte/sec sustained write and with a 75usec read latency. It
doesn't take much link latency at all to completely dwarf that kind of
access time.
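[Editor's note: Kyle's point, that even a modest network round trip dwarfs the flash access time, follows from back-of-envelope arithmetic using the figures he quotes (~230 Mbyte/sec sustained read, ~75 usec latency). The link speeds and round-trip times below are assumed values for illustration, not measurements.]

```python
# Figures from the message: SATA flash, ~230 Mbyte/sec read, 75 usec latency.
FLASH_READ_BPS = 230e6
FLASH_LATENCY_S = 75e-6

def local_read_time(size_bytes):
    """Approximate time to read size_bytes from the local flash cache."""
    return FLASH_LATENCY_S + size_bytes / FLASH_READ_BPS

def remote_read_time(size_bytes, rtt_s, link_mbit):
    """Approximate time to fetch the same data over the network:
    one round trip plus transfer time (protocol overhead ignored)."""
    return rtt_s + size_bytes / (link_mbit * 1e6 / 8)

# A 64 KiB read: local flash beats even a 1 Gbit/s LAN with 0.2 ms
# RTT, and a 100 Mbit/s WAN link with 30 ms RTT isn't close.
size = 64 * 1024
local = local_read_time(size)          # ~0.36 ms
lan = remote_read_time(size, 0.2e-3, 1000)
wan = remote_read_time(size, 30e-3, 100)
assert local < lan < wan
```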
Cheers,
Kyle Moffett