From: "Sébastien Wacquiez" <sw@enix.org>
To: Mel Gorman <mgorman@suse.de>, Linux-MM <linux-mm@kvack.org>
Cc: Rik van Riel <riel@redhat.com>, Hugh Dickins <hughd@google.com>,
	Minchan Kim <minchan@kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Andi Kleen <andi@firstfloor.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 0/3] TLB flush multiple pages per IPI v4
Date: Mon, 31 Aug 2015 18:20:44 +0200	[thread overview]
Message-ID: <55E47EDC.6090205@enix.org> (raw)
In-Reply-To: <1429983942-4308-1-git-send-email-mgorman@suse.de>

On 04/25/2015 07:45 PM, Mel Gorman wrote:

> The performance impact is documented in the changelogs but in the optimistic
> case on a 4-socket machine the full series reduces interrupts from 900K
> interrupts/second to 60K interrupts/second.


Hello to the list,


This patch has a huge (positive) performance impact on my setup.

With the goal of building the best CDN ever, I run the Varnish web cache
on very big boxes (dual 12-core Xeons, 256 GB of RAM, 24 SSDs, 2x40G
Ethernet).

Without going into Varnish internals, it helps to know that Varnish has
multiple storage backends (memory, file, etc.), and that the file backend
(the one you use when you have cache drives) doesn't use read/write
syscalls but mmap.
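
For those not familiar with it, here is a minimal sketch of what such a
file backend does (this is not Varnish code; the /cache/arena.bin path
and the 1 GiB size are made up for the example): the cache file is
mapped with mmap(MAP_SHARED) and accessed through plain loads and
stores, so all I/O goes through the page cache and page faults instead
of read()/write():

/* Minimal sketch, not Varnish code: a file-backed cache arena mapped
 * with mmap() so that loads/stores replace read()/write() syscalls.
 * The path and size below are made up for the example. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t sz = 1UL << 30;              /* hypothetical 1 GiB arena */
    int fd = open("/cache/arena.bin", O_RDWR | O_CREAT, 0600);

    if (fd < 0 || ftruncate(fd, sz) < 0) {
        perror("open/ftruncate");
        return 1;
    }

    /* MAP_SHARED: a store dirties a page-cache page that the kernel
     * writes back later; a load from a page that is not resident is a
     * major fault, and the kernel must find free memory to hold the
     * data it reads back from disk. */
    char *arena = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (arena == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memcpy(arena, "cached object", 14);       /* "write" without write(2) */
    printf("%s\n", arena);                    /* "read" without read(2)   */

    munmap(arena, sz);
    close(fd);
    return 0;
}

The upside is that the kernel decides what stays resident; the downside,
as I describe below, is that every access to a cold page needs free
memory right then.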

The raw performance of this server is very good: when using Varnish
with memory storage only, it easily pushes 80 Gbps of network traffic.
When reading/writing from/to the drives, you get 10 GB/s of data. And
you can do both at the same time without performance loss.

Anyway, without this patch, using the file storage backend and after
warmup, the performance of the server was limited to a frustrating
14 Gbps. At the start, Varnish reads from the HTTP backend at ~30 Gbps,
caches the data in its huge mmap, the system writes it to disk and
streams it to the client, so everything looks OK. But instead of getting
faster as the hit rate goes up (since we already have the data in the
cache), it became slower and slower, and finally froze for 4-5 seconds
every 10 seconds or so.

After analysis, I found out that the bottleneck is the system's capacity
to find free memory. If I understand it correctly, when you read a
"swapped out" page of an mmapped file, the kernel has to find some free
memory to hold the data it will read from the drive. In my case, the
disks are quick enough to handle the writeback almost in real time, so I
have a lot of potentially free memory (i.e. Inactive(file)). Actually
freeing this memory (whether in direct reclaim or via kswapd) is
relatively slow: after some tuning to avoid any direct reclaim (which
was causing the freezes), I ended up with 2 kswapd processes (it's a
dual-socket NUMA machine) eating 100% of a CPU for ~14 Gbps of traffic
(or ~1.5 million reclaims/s).
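
One way to get a reclaims/s figure like the one above is to watch the
pgsteal_* counters in /proc/vmstat. A rough sketch of such a sampler
(the exact counter names vary between kernel versions, so it simply
sums every counter starting with "pgsteal"):

/* Rough sketch: sample /proc/vmstat once per second and print how many
 * pages were reclaimed (stolen) since the previous sample. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static unsigned long long pgsteal_total(void)
{
    FILE *f = fopen("/proc/vmstat", "r");
    char name[64];
    unsigned long long val, total = 0;

    if (!f)
        return 0;
    while (fscanf(f, "%63s %llu", name, &val) == 2)
        if (strncmp(name, "pgsteal", 7) == 0)
            total += val;
    fclose(f);
    return total;
}

int main(void)
{
    unsigned long long prev = pgsteal_total();

    for (;;) {
        sleep(1);
        unsigned long long cur = pgsteal_total();
        printf("reclaimed: %llu pages/s\n", cur - prev);
        prev = cur;
    }
    return 0;
}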

After a chat with Rik van Riel and Mel Gorman, they suggested I try
this patch, and the limit immediately jumped to 33 Gbps, which was in
fact my upstream capacity; after a while I was able to achieve 60 Gbps
without experiencing any issue.
Even the freezing, which happens in direct reclaim mode, is a lot
smoother; on my test rig it is quick enough not to be seen as an
unavailability by my monitoring (which wasn't the case before).
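
For anyone who wants to check the effect of the series on their own
box, the IPI rate that the changelog mentions can be observed through
the "TLB" (TLB shootdowns) line of /proc/interrupts on x86. A quick
sketch of such a sampler:

/* Quick sketch: print the number of TLB shootdown interrupts per
 * second, summed over all CPUs, by sampling the "TLB" line of
 * /proc/interrupts (x86-specific). */
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static unsigned long long tlb_shootdowns(void)
{
    FILE *f = fopen("/proc/interrupts", "r");
    char line[4096];
    unsigned long long total = 0;

    if (!f)
        return 0;
    while (fgets(line, sizeof(line), f)) {
        char *p = line;

        while (isspace((unsigned char)*p))
            p++;
        if (strncmp(p, "TLB:", 4) != 0)
            continue;
        /* Sum the per-CPU counts that follow the "TLB:" label. */
        for (p += 4; *p; ) {
            char *end;
            unsigned long long v = strtoull(p, &end, 10);

            if (end == p)
                break;              /* reached the "TLB shootdowns" text */
            total += v;
            p = end;
        }
        break;
    }
    fclose(f);
    return total;
}

int main(void)
{
    unsigned long long prev = tlb_shootdowns();

    for (;;) {
        sleep(1);
        unsigned long long cur = tlb_shootdowns();
        printf("TLB shootdown IPIs: %llu/s\n", cur - prev);
        prev = cur;
    }
    return 0;
}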

The bad news is that after some time (around 24h) of stress testing,
performance degrades, I guess due to some kind of fragmentation. Still,
performance seems to stay at a higher level than with the vanilla
kernel.

I suppose that this patch could also help a lot with databases (which
often mmap their data) that have to reread huge datasets frequently.


Thanks a lot to Rik and Mel for their help, and feel free to mail me
if you have questions.


Regards,


Sebastien Wacquiez

PS: the tests were conducted with a 4.0.0 kernel.
