linux-nfs.vger.kernel.org archive mirror
From: Trond Myklebust <trondmy@hammerspace.com>
To: "tigran.mkrtchyan@desy.de" <tigran.mkrtchyan@desy.de>,
	"linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>
Subject: Re: NFS (pNFS) and VM dirty bytes
Date: Mon, 3 Jun 2019 16:13:04 +0000	[thread overview]
Message-ID: <dc63d09f3bd397da1da82a8ebd6e6a768f12c509.camel@hammerspace.com> (raw)
In-Reply-To: <1811809323.9701664.1559574448351.JavaMail.zimbra@desy.de>

On Mon, 2019-06-03 at 17:07 +0200, Mkrtchyan, Tigran wrote:
> 
> Dear NFS fellows,
> 
> though this is not directly an NFS issue, I post this question
> here as we are mostly affected via NFS clients (and you have enough
> kernel connections to route it to the right people).
> 
> We have 25 new data processing nodes with 32 cores, 256 GB RAM and 25
> Gb/s NIC.
> They run CentOS 7 (but this is irrelevant, I think).
> 
> When each node runs 24 parallel write-intensive (75% write, 25% read)
> workloads, we see a spike of
> IO errors on close. The client runs into timeouts due to a slow
> network or IO starvation on the NFS servers.
> It stumbles, disconnects, establishes a new connection, and stumbles
> again...

You can adjust the pNFS timeout behaviour using the 'dataserver_timeo'
and 'dataserver_retrans' module parameters on both the files and
flexfiles pNFS driver modules.
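For example, a persistent configuration via modprobe.d might look like the sketch below. The option values are illustrative starting points, not recommendations; dataserver_timeo is measured in tenths of a second, so 600 corresponds to 60 seconds.

```shell
# Sketch: persist pNFS data-server timeout tuning across reboots.
# The path under /etc/modprobe.d/ is conventional, not mandatory.
cat <<'EOF' | sudo tee /etc/modprobe.d/pnfs-timeouts.conf
options nfs_layout_nfsv41_files dataserver_timeo=600 dataserver_retrans=5
options nfs_layout_flexfiles dataserver_timeo=600 dataserver_retrans=5
EOF
```

Once the modules are loaded, the active values should be visible under /sys/module/nfs_layout_flexfiles/parameters/ (and the corresponding directory for the files layout driver).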

> 
> As the default values for dirty pages are
> 
> vm.dirty_background_bytes = 0
> vm.dirty_background_ratio = 10
> vm.dirty_bytes = 0
> vm.dirty_ratio = 30
> 
> the first data gets sent only once at least 25 GB of data has
> accumulated.
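The arithmetic behind that figure can be sketched as follows, assuming 256 GB of RAM and the default vm.dirty_background_ratio of 10 (integer shell arithmetic rounds 25.6 down to 25):

```shell
# Rough check: how much dirty data can accumulate before background
# writeback even starts, given 256 GB RAM and dirty_background_ratio=10.
ram_gb=256
dirty_background_ratio=10
threshold_gb=$((ram_gb * dirty_background_ratio / 100))  # integer division
echo "background writeback starts after ~${threshold_gb} GB"
```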
> 
> To make the full deployment more responsive, we have reduced the
> default numbers to something more reasonable:
> 
> vm.dirty_background_ratio = 0
> vm.dirty_ratio = 0
> vm.dirty_background_bytes = 67108864
> vm.dirty_bytes = 536870912
> 
> IOW, we force the client to start sending data as soon as 64 MB is
> written. The question is how to tune these
> values optimally, and how to make them file system/mount point
> specific.
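For reference, byte-based limits like these can be applied at runtime with sysctl and persisted via sysctl.d. A sketch (the file name is arbitrary):

```shell
# Sketch: apply the byte-based dirty limits at runtime.
sudo sysctl -w vm.dirty_background_bytes=67108864  # 64 MB
sudo sysctl -w vm.dirty_bytes=536870912            # 512 MB

# Persist across reboots:
printf '%s\n' \
    'vm.dirty_background_bytes = 67108864' \
    'vm.dirty_bytes = 536870912' \
    | sudo tee /etc/sysctl.d/90-nfs-writeback.conf
```

Note that vm.dirty_bytes and vm.dirty_ratio are mutually exclusive counterparts: writing one resets the other to zero, so explicitly zeroing the ratios (as in the settings above) is harmless but not strictly required.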

The memory management system knows nothing about mount points, and the
filesystems know nothing about the memory management limits. That is by
design.

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com



Thread overview:
2019-06-03 15:07 NFS (pNFS) and VM dirty bytes Mkrtchyan, Tigran
2019-06-03 16:13 ` Trond Myklebust [this message]
