From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Tariq Toukan <tariqt@mellanox.com>,
Linux Kernel Network Developers <netdev@vger.kernel.org>,
linux-mm <linux-mm@kvack.org>, David Miller <davem@davemloft.net>,
Eric Dumazet <eric.dumazet@gmail.com>,
Alexei Starovoitov <ast@fb.com>,
Saeed Mahameed <saeedm@mellanox.com>,
Eran Ben Elisha <eranbe@mellanox.com>,
Andrew Morton <akpm@linux-foundation.org>,
Michal Hocko <mhocko@suse.com>,
brouer@redhat.com, "Michael S. Tsirkin" <mst@redhat.com>
Subject: Re: Page allocator bottleneck
Date: Thu, 9 Nov 2017 06:21:01 +0100
Message-ID: <20171109062101.64bde3b6@redhat.com>
In-Reply-To: <20171108093547.ctsjv4a42xjvfsf7@techsingularity.net>
On Wed, 8 Nov 2017 09:35:47 +0000
Mel Gorman <mgorman@techsingularity.net> wrote:
> On Wed, Nov 08, 2017 at 02:42:04PM +0900, Tariq Toukan wrote:
> > > > Hi all,
> > > >
> > > > After leaving this task for a while doing other tasks, I got back to it now
> > > > and see that the good behavior I observed earlier was not stable.
> > > >
> > > > Recall: I work with a modified driver that allocates a page (4K) per packet
> > > > (MTU=1500), in order to simulate the stress on page-allocator in 200Gbps
> > > > NICs.
> > > >
> > >
> > > There is almost nothing new in the data that hasn't been discussed before. The
> > > suggestion to free on a remote per-cpu list would be expensive as it would
> > > require per-cpu lists to have a lock for safe remote access.
> >
> > That's right, but each such lock will be significantly less congested than
> > the buddy allocator lock.
>
> That is not necessarily true if all the allocations and frees always happen
> on the same CPUs. The contention will be equivalent to the zone lock.
> Your point will only hold true if there are also heavy allocation streams
> from other CPUs that are unrelated.
>
> > In the flow in question, two cores need to
> > synchronize (one allocates, one frees).
> > We also need to evaluate the cost of acquiring and releasing the lock in the
> > case of no congestion at all.
> >
>
> If the per-cpu structures have a lock, there will be a small amount of
> overhead. Nothing too severe, but it shouldn't be done lightly either.
>
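To make the trade-off concrete, here is a minimal sketch of how I read
the "per-cpu list with a lock" idea (my own illustration, all names made
up, not code from any existing tree; real code would also need draining,
watermarks, etc.):

  /* A per-cpu page list guarded by a spinlock, so that a *remote* CPU
   * can return pages to the CPU that originally allocated them.
   */
  struct pcp_free_list {
          spinlock_t       lock;   /* mostly uncontended, unlike zone->lock */
          struct list_head pages;
          int              count;
  };

  static DEFINE_PER_CPU(struct pcp_free_list, pcp_free_list);

  /* Free path: runs on the freeing CPU, targets the allocating CPU.
   * As Mel notes, if it is always the same two CPUs that allocate and
   * free, this lock bounces between them much like zone->lock would.
   */
  static void free_page_to_cpu(struct page *page, int alloc_cpu)
  {
          struct pcp_free_list *fl = &per_cpu(pcp_free_list, alloc_cpu);
          unsigned long flags;

          spin_lock_irqsave(&fl->lock, flags);
          list_add(&page->lru, &fl->pages);
          fl->count++;
          spin_unlock_irqrestore(&fl->lock, flags);
  }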
> > > However,
> > > I'd be curious if you could test the mm-pagealloc-irqpvec-v1r4 branch
> > > https://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git . It's an
> > > unfinished prototype I worked on a few weeks ago. I was going to revisit
> > > in about a month's time when 4.15-rc1 was out. I'd be interested in seeing
> > > if it has a positive gain in normal page allocations without destroying
> > > the performance of interrupt and softirq allocation contexts. The
> > > interrupt/softirq context testing is crucial as that is something that
> > > hurt us before when trying to improve page allocator performance.
> > >
> > Yes, I will test that once I get back in office (after netdev conference and
> > vacation).
>
> Thanks.
I'll also commit to testing this (when I return home; like Tariq, I'm
also in Seoul ATM).
> > Can you please elaborate in a few words on the idea behind the prototype?
> > Does it address page-allocator scalability issues, or only the rate of
> > single core page allocations?
>
> Short answer -- maybe. All scalability issues or rates of allocation are
> context and workload dependent, so the question is impossible to answer
> for the general case.
>
> Broadly speaking, the patch reintroduces per-cpu lists that are used only
> for !irq context allocations. The last time we did this, hard and soft IRQ
> allocations went through the buddy allocator which couldn't scale and
> the patch was reverted. With this patch, it goes through a very large
> pagevec-like structure that is protected by a lock but the fast paths
> for alloc/free are extremely simple operations so the lock hold times are
> very small. Potentially, a development path is that the current per-cpu
> allocator is replaced with pagevec-like structures that are dynamically
> allocated which would also allow pages to be freed to remote CPU lists
I've had huge success using ptr_ring as a queue between CPUs to
minimize cross-CPU cache-line touching, with the recently accepted BPF
map called "cpumap" that is used for XDP_REDIRECT.
It's important to handle the two borderline cases in ptr_ring: the queue
being almost full (handled well by ptr_ring's default design) and the
queue being almost empty. This is described in [1], slide 14:
[1] http://people.netfilter.org/hawk/presentations/NetConf2017_Seoul/XDP_devel_update_NetConf2017_Seoul.pdf
The use of XDP_REDIRECT + cpumap does expose issues with the page
allocator. E.g. slide 19 shows the ixgbe recycle scheme failing, but
still hitting the PCP (per-cpu pages) lists. Also notice slide 22,
deducing the overhead. Scale-stressing ptr_ring is shown in the extra
slides 35-39.
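For reference, a rough sketch of the pattern (the real code is in
kernel/bpf/cpumap.c; the helper names and the bulk size below are my own
simplification):

  #include <linux/ptr_ring.h>

  #define CPU_Q_BULK 8            /* made-up consumer batch size */

  /* One ring per remote CPU, e.g. 1024 slots. */
  static int cpu_queue_init(struct ptr_ring *q)
  {
          return ptr_ring_init(q, 1024, GFP_KERNEL);
  }

  /* Producer side (NAPI/softirq on the RX CPU).  ptr_ring_produce()
   * only inspects the slot under the producer index, so the almost-full
   * case is detected without reading the consumer's cache line -- the
   * case ptr_ring handles well by default.
   */
  static int enqueue_to_remote_cpu(struct ptr_ring *q, void *frame)
  {
          return ptr_ring_produce(q, frame); /* -ENOSPC when full -> drop */
  }

  /* Consumer side (kthread on the remote CPU).  Pulling a bulk of
   * pointers per consumer-lock acquisition is what keeps the
   * almost-empty case from bouncing shared cache lines on every
   * single element.
   */
  static int dequeue_bulk(struct ptr_ring *q, void **frames)
  {
          return ptr_ring_consume_batched(q, frames, CPU_Q_BULK);
  }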
> (if we could detect when that is appropriate which is unclear). We could
> also drain remote lists without using IPIs. The downside is that the memory
> footprint of the allocator would be higher and the size could no longer
> be tuned so there would need to be excellent justification for such a move.
>
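Just to check my understanding of the pagevec-like structure: something
along the lines of the sketch below (hypothetical, made-up names and
sizes), where the fast paths only move an array index while holding the
lock?

  #define IRQ_PVEC_SIZE 512        /* made-up size */

  struct irq_pvec {
          spinlock_t   lock;
          unsigned int count;
          struct page *pages[IRQ_PVEC_SIZE];
  };

  static struct page *irq_pvec_alloc(struct irq_pvec *pvec)
  {
          struct page *page = NULL;
          unsigned long flags;

          spin_lock_irqsave(&pvec->lock, flags);
          if (pvec->count)
                  page = pvec->pages[--pvec->count]; /* O(1), tiny hold time */
          spin_unlock_irqrestore(&pvec->lock, flags);

          return page;    /* NULL -> fall back to the buddy allocator */
  }

  static bool irq_pvec_free(struct irq_pvec *pvec, struct page *page)
  {
          unsigned long flags;
          bool queued = false;

          spin_lock_irqsave(&pvec->lock, flags);
          if (pvec->count < IRQ_PVEC_SIZE) {
                  pvec->pages[pvec->count++] = page; /* O(1) */
                  queued = true;
          }
          spin_unlock_irqrestore(&pvec->lock, flags);

          return queued;  /* false -> free back to buddy */
  }

If that is roughly it, then a remote CPU could also take the same lock
and drain the array directly, which I assume is how the IPI-less remote
drain would work.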
> I haven't posted the patches properly yet because mmotm is carrying too
> many patches as it is and this patch indirectly depends on the contents. I
> also didn't write memory hot-remove support which would be a requirement
> before merging. I hadn't intended to put further effort into it until I
> had some evidence the approach had promise. My own testing indicated it
> worked but the drivers I was using for network tests did not allocate
> intensely enough to show any major gain/loss.
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
Thread overview: 24+ messages
2017-09-14 16:49 Page allocator bottleneck Tariq Toukan
2017-09-14 20:19 ` Andi Kleen
2017-09-17 15:43 ` Tariq Toukan
2017-09-15 7:28 ` Jesper Dangaard Brouer
2017-09-17 16:16 ` Tariq Toukan
2017-09-18 7:34 ` Aaron Lu
2017-09-18 7:44 ` Aaron Lu
2017-09-18 15:33 ` Tariq Toukan
2017-09-19 7:23 ` Aaron Lu
2017-09-15 10:23 ` Mel Gorman
2017-09-18 9:16 ` Tariq Toukan
2017-11-02 17:21 ` Tariq Toukan
2017-11-03 13:40 ` Mel Gorman
2017-11-08 5:42 ` Tariq Toukan
2017-11-08 9:35 ` Mel Gorman
2017-11-09 3:51 ` Figo.zhang
2017-11-09 5:06 ` Tariq Toukan
2017-11-09 5:21 ` Jesper Dangaard Brouer [this message]
2018-04-21 8:15 ` Aaron Lu
2018-04-22 16:43 ` Tariq Toukan
2018-04-23 8:54 ` Tariq Toukan
2018-04-23 13:10 ` Aaron Lu
2018-04-27 8:45 ` Aaron Lu
2018-05-02 13:38 ` Tariq Toukan