From: "Steve French" <smfrench@gmail.com>
To: "Peter Zijlstra" <a.p.zijlstra@chello.nl>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
linux-fsdevel <linux-fsdevel@vger.kernel.org>,
"David Howells" <dhowells@redhat.com>,
sfrench@samba.org, jaharkes@cs.cmu.edu,
"Andrew Morton" <akpm@linux-foundation.org>,
vandrove@vc.cvut.cz
Subject: Re: Networked filesystems vs backing_dev_info
Date: Sat, 27 Oct 2007 16:02:21 -0500
Message-ID: <524f69650710271402g65a9ec1cqcc7bc3a964097e39@mail.gmail.com>
In-Reply-To: <1193477666.5648.61.camel@lappy>
On 10/27/07, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
> Hi,
>
> I had me a little look at bdi usage in networked filesystems.
>
> NFS, CIFS, (smbfs), AFS, CODA and NCP
>
> And of those, NFS is the only one that I could find that creates
> backing_dev_info structures. The rest seems to fall back to
> default_backing_dev_info.
>
> With my recent per bdi dirty limit patches the bdi has become more
> important than it has been in the past. While falling back to the
> default_backing_dev_info isn't wrong per-se, it isn't right either.
>
> Could I implore the various maintainers to look into this issue for
> their respective filesystem. I'll try and come up with some patches to
> address this, but feel free to beat me to it.
I would like to understand more about your patches to see what bdi
values make sense for CIFS and how to report possible congestion back
to the page manager. I had been thinking about setting bdi->ra_pages
so that we do more sensible readahead and writebehind - better
matching what is possible over the network and what the server
prefers. SMB/CIFS servers typically allow a maximum of 50 requests
in parallel from one client (although this is adjustable on some
servers). The CIFS client prefers to do writes of 14 pages (an iovec
of 56K) at a time, although many servers can efficiently handle
multiple of these 56K writes in parallel. With minor changes CIFS
could handle even larger writes (just under 64K for Windows and just
under 128K for Samba - the current CIFS Unix Extensions allow servers
to negotiate much larger writes, but lacking a "receivepage"
equivalent Samba does not currently support writes larger than 128K).
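As a rough, untested sketch of the direction I am thinking of
(assuming the bdi_init()/bdi_destroy() helpers from your per-bdi
series, and a new bdi field hung off cifs_sb_info - neither exists in
the cifs tree today), mount could set up a private bdi with ra_pages
matched to that 14-page write size:

    #include <linux/backing-dev.h>

    #define CIFS_WSIZE_PAGES 14             /* one 56K write today */

    /* hypothetical helper, called from cifs_read_super() */
    static int cifs_setup_bdi(struct cifs_sb_info *cifs_sb)
    {
            int rc;

            rc = bdi_init(&cifs_sb->bdi);   /* bdi field is assumed/new */
            if (rc)
                    return rc;

            /* read ahead a few writes' worth rather than whatever the
               global default happens to be */
            cifs_sb->bdi.ra_pages = 4 * CIFS_WSIZE_PAGES;
            return 0;
    }

    /* and when each inode's mapping is initialised: */
    inode->i_mapping->backing_dev_info = &cifs_sb->bdi;

(bdi_destroy() would then go in the umount path.) The multiple of
four above is just a guess - picking the right ra_pages value is
exactly what I would like to work out.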
Ideally, to improve large file copy utilization, I would like to see
3 to 10 writes of 56K (or larger in the future) in flight in
parallel. The read path is harder since we only do 16K reads to
Windows and Samba, so we need to increase the number of reads issued
in parallel on the same inode. There is a large Google Summer of Code
patch for this which needs more review.
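For the congestion reporting side, something like the following is
what I have in mind - flag the bdi write-congested as we approach the
server's ~50 request limit, similar to what NFS does in its writeback
path (the inFlight/maxReq names are from memory and may not match the
current cifs structures exactly):

    #include <linux/backing-dev.h>
    #include <linux/fs.h>           /* WRITE */

    /* hypothetical helper; call when a request is sent and when a
       response is received */
    static void cifs_update_congestion(struct TCP_Server_Info *server,
                                       struct backing_dev_info *bdi)
    {
            /* leave headroom under maxReq so opens, locks and other
               metadata requests are not starved by writeback */
            if (atomic_read(&server->inFlight) >= server->maxReq - 4)
                    set_bdi_congested(bdi, WRITE);
            else
                    clear_bdi_congested(bdi, WRITE);
    }

That would let the page manager back off cleanly instead of piling
more dirty pages onto a server that cannot take another request.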
--
Thanks,
Steve