From: bfields <bfields@fieldses.org>
To: Daire Byrne <daire@dneg.com>
Cc: Richard Weinberger <richard@nod.at>,
linux-nfs <linux-nfs@vger.kernel.org>,
luis turcitu <luis.turcitu@appsbroker.com>,
chris chilvers <chris.chilvers@appsbroker.com>,
david young <david.young@appsbroker.com>,
david <david@sigma-star.at>,
david oberhollenzer <david.oberhollenzer@sigma-star.at>
Subject: Re: Improving NFS re-export
Date: Tue, 21 Dec 2021 12:21:07 -0500
Message-ID: <20211221172107.GA10448@fieldses.org>
In-Reply-To: <CAPt2mGM5jJNu_O=pjvU4UEYZ7L9pcunGedVmFP1h+J4QoMLPUg@mail.gmail.com>
On Tue, Dec 21, 2021 at 02:30:45PM +0000, Daire Byrne wrote:
> On Thu, 9 Dec 2021 at 22:03, Richard Weinberger <richard@nod.at> wrote:
> >
> > I see. That way we could get rid of file handle wrapping but lose the
> > NFS client inode cache on the re-exporting server, I think.
>
> As avid users of re-exporting over the WAN, we like to be able to
> selectively cache as many of the metadata lookups as possible
> (actimeo=3600, vfs_cache_pressure=1).
>
> I'm not sure if losing the re-export server's client inode cache would
> affect that ability?
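For reference, the selective-caching setup described in the quote above might look roughly like this on a re-export server (a sketch; the server name and paths are placeholders, the mount options and sysctl are the ones named above):

```shell
# Mount the origin server with a long attribute-cache timeout (1 hour),
# so repeated metadata lookups are answered from the re-export server's
# client cache rather than going back over the WAN.
mount -t nfs -o actimeo=3600 origin.example.com:/export /srv/reexport

# Bias the VM toward keeping cached dentries and inodes in memory
# for as long as possible (default is 100).
sysctl -w vm.vfs_cache_pressure=1
```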
A proxy without an inode cache wouldn't be good.
So the inode cache would have to be indexed just on (a hash of) the raw
filehandle.
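A cache indexed on a hash of the raw filehandle could be sketched like this (a toy userspace model only; the kernel's inode cache is a very different structure, and the class and field names here are invented):

```python
import hashlib


class FhInodeCache:
    """Toy inode cache keyed by a hash of the opaque NFS filehandle.

    The filehandle bytes are opaque to the proxy, so the only usable
    index is the raw bytes themselves (here, a SHA-256 of them).
    """

    def __init__(self):
        self._cache = {}

    @staticmethod
    def _key(raw_fh: bytes) -> str:
        # Hash the raw filehandle bytes to get a fixed-size cache key.
        return hashlib.sha256(raw_fh).hexdigest()

    def lookup(self, raw_fh: bytes):
        # Returns the cached inode object, or None on a cache miss.
        return self._cache.get(self._key(raw_fh))

    def insert(self, raw_fh: bytes, inode):
        self._cache[self._key(raw_fh)] = inode


cache = FhInodeCache()
cache.insert(b"\x01\x02\x03\x04", {"ino": 42})
print(cache.lookup(b"\x01\x02\x03\x04"))  # → {'ino': 42}
```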
> And on the subject of the "proxy" server and a server per export: if,
> like us, you have 30 servers or mountpoints to re-export but only
> actively use 5-10 of those at any one time, it is more resource
> efficient (CPU, RAM, fscache storage) to use a single re-export
> server for more than one mountpoint re-export.
That's useful to know, thanks.
> But in the proxy case, maybe the same thing could be achieved with a
> containerised knfsd with all the proxy servers running on the same
> server?
Yes, that's what I was thinking.
> I'm not sure if you could have shared storage and have multiple
> fs-cache/cachefilesd in containers though.
Seems like there should be a few ways to do that.
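One such way (an untested sketch; paths and container names are placeholders) would be to give each container its own cachefilesd instance backed by its own subdirectory of the shared cache filesystem, since two cachefiles instances cannot share one cache directory:

```shell
# One cache subdirectory per container on the shared storage.
mkdir -p /var/cache/fscache/knfsd-a /var/cache/fscache/knfsd-b

# Each container gets its own cachefilesd.conf pointing at its own
# directory, e.g. inside container "knfsd-a":
#     dir /var/cache/fscache/knfsd-a
# and inside container "knfsd-b":
#     dir /var/cache/fscache/knfsd-b
# Each container then runs its own cachefilesd against that config.
```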
> Either way, I'm interested to see what you come up with. Always happy
> to test new variations on re-exporting.
I haven't managed to come up with a plan for making a proxy-only mode
work, though, so I'm not feeling too optimistic about that particular
idea.
--b.
Thread overview: 6+ messages
2021-12-09 21:05 Improving NFS re-export Richard Weinberger
2021-12-09 21:41 ` J. Bruce Fields
2021-12-09 22:03 ` Richard Weinberger
2021-12-21 14:30 ` Daire Byrne
2021-12-21 17:21 ` bfields [this message]
2021-12-21 21:39 ` Richard Weinberger