* [PATCH] mm: memcontrol: add file_thp, shmem_thp to memory.stat
@ 2020-10-22 15:18 Johannes Weiner
2020-10-22 16:49 ` Rik van Riel
[not found] ` <20201022151844.489337-1-hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
0 siblings, 2 replies; 10+ messages in thread
From: Johannes Weiner @ 2020-10-22 15:18 UTC (permalink / raw)
To: Andrew Morton; +Cc: Michal Hocko, linux-mm, cgroups, linux-kernel, kernel-team
As huge page usage in the page cache and for shmem files proliferates
in our production environment, the performance monitoring team has
asked for per-cgroup stats on those pages.
We already track and export anon_thp per cgroup. We already track file
THP and shmem THP per node, so making them per-cgroup is only a matter
of switching from node to lruvec counters. All callsites are in places
where the pages are charged and locked, so page->memcg is stable.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
mm/filemap.c | 4 ++--
mm/huge_memory.c | 4 ++--
mm/khugepaged.c | 4 ++--
mm/memcontrol.c | 6 +++++-
mm/shmem.c | 2 +-
5 files changed, 12 insertions(+), 8 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index e80aa9d2db68..334ce608735c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -204,9 +204,9 @@ static void unaccount_page_cache_page(struct address_space *mapping,
if (PageSwapBacked(page)) {
__mod_lruvec_page_state(page, NR_SHMEM, -nr);
if (PageTransHuge(page))
- __dec_node_page_state(page, NR_SHMEM_THPS);
+ __dec_lruvec_page_state(page, NR_SHMEM_THPS);
} else if (PageTransHuge(page)) {
- __dec_node_page_state(page, NR_FILE_THPS);
+ __dec_lruvec_page_state(page, NR_FILE_THPS);
filemap_nr_thps_dec(mapping);
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cba3812a5c3e..5fe044e5dad5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2707,9 +2707,9 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
spin_unlock(&ds_queue->split_queue_lock);
if (mapping) {
if (PageSwapBacked(head))
- __dec_node_page_state(head, NR_SHMEM_THPS);
+ __dec_lruvec_page_state(head, NR_SHMEM_THPS);
else
- __dec_node_page_state(head, NR_FILE_THPS);
+ __dec_lruvec_page_state(head, NR_FILE_THPS);
}
__split_huge_page(page, list, end, flags);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index f1d5f6dde47c..04828e21f434 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1833,9 +1833,9 @@ static void collapse_file(struct mm_struct *mm,
}
if (is_shmem)
- __inc_node_page_state(new_page, NR_SHMEM_THPS);
+ __inc_lruvec_page_state(new_page, NR_SHMEM_THPS);
else {
- __inc_node_page_state(new_page, NR_FILE_THPS);
+ __inc_lruvec_page_state(new_page, NR_FILE_THPS);
filemap_nr_thps_inc(mapping);
}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2636f8bad908..98177d5e8e03 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1507,6 +1507,8 @@ static struct memory_stat memory_stats[] = {
* constant(e.g. powerpc).
*/
{ "anon_thp", 0, NR_ANON_THPS },
+ { "file_thp", 0, NR_FILE_THPS },
+ { "shmem_thp", 0, NR_SHMEM_THPS },
#endif
{ "inactive_anon", PAGE_SIZE, NR_INACTIVE_ANON },
{ "active_anon", PAGE_SIZE, NR_ACTIVE_ANON },
@@ -1537,7 +1539,9 @@ static int __init memory_stats_init(void)
for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
- if (memory_stats[i].idx == NR_ANON_THPS)
+ if (memory_stats[i].idx == NR_ANON_THPS ||
+ memory_stats[i].idx == NR_FILE_THPS ||
+ memory_stats[i].idx == NR_SHMEM_THPS)
memory_stats[i].ratio = HPAGE_PMD_SIZE;
#endif
VM_BUG_ON(!memory_stats[i].ratio);
diff --git a/mm/shmem.c b/mm/shmem.c
index 537c137698f8..5009d783d954 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -713,7 +713,7 @@ static int shmem_add_to_page_cache(struct page *page,
}
if (PageTransHuge(page)) {
count_vm_event(THP_FILE_ALLOC);
- __inc_node_page_state(page, NR_SHMEM_THPS);
+ __inc_lruvec_page_state(page, NR_SHMEM_THPS);
}
mapping->nrpages += nr;
__mod_lruvec_page_state(page, NR_FILE_PAGES, nr);
--
2.29.0
* Re: [PATCH] mm: memcontrol: add file_thp, shmem_thp to memory.stat
From: Rik van Riel @ 2020-10-22 16:49 UTC (permalink / raw)
To: Johannes Weiner, Andrew Morton
Cc: Michal Hocko, linux-mm, cgroups, linux-kernel, kernel-team
On Thu, 2020-10-22 at 11:18 -0400, Johannes Weiner wrote:
> index e80aa9d2db68..334ce608735c 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -204,9 +204,9 @@ static void unaccount_page_cache_page(struct
> address_space *mapping,
> if (PageSwapBacked(page)) {
> __mod_lruvec_page_state(page, NR_SHMEM, -nr);
> if (PageTransHuge(page))
> - __dec_node_page_state(page, NR_SHMEM_THPS);
> + __dec_lruvec_page_state(page, NR_SHMEM_THPS);
> } else if (PageTransHuge(page)) {
> - __dec_node_page_state(page, NR_FILE_THPS);
> + __dec_lruvec_page_state(page, NR_FILE_THPS);
> filemap_nr_thps_dec(mapping);
> }
This may be a dumb question, but does that mean the
NR_FILE_THPS number will no longer be visible in
/proc/vmstat or is there some magic I overlooked in
a cursory look of the code?
* Re: [PATCH] mm: memcontrol: add file_thp, shmem_thp to memory.stat
From: Rik van Riel @ 2020-10-22 16:57 UTC (permalink / raw)
To: Johannes Weiner, Andrew Morton
Cc: Michal Hocko, linux-mm, cgroups, linux-kernel, kernel-team
On Thu, 2020-10-22 at 12:49 -0400, Rik van Riel wrote:
> On Thu, 2020-10-22 at 11:18 -0400, Johannes Weiner wrote:
>
> > index e80aa9d2db68..334ce608735c 100644
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -204,9 +204,9 @@ static void unaccount_page_cache_page(struct
> > address_space *mapping,
> > if (PageSwapBacked(page)) {
> > __mod_lruvec_page_state(page, NR_SHMEM, -nr);
> > if (PageTransHuge(page))
> > - __dec_node_page_state(page, NR_SHMEM_THPS);
> > + __dec_lruvec_page_state(page, NR_SHMEM_THPS);
> > } else if (PageTransHuge(page)) {
> > - __dec_node_page_state(page, NR_FILE_THPS);
> > + __dec_lruvec_page_state(page, NR_FILE_THPS);
> > filemap_nr_thps_dec(mapping);
> > }
>
> This may be a dumb question, but does that mean the
> NR_FILE_THPS number will no longer be visible in
> /proc/vmstat or is there some magic I overlooked in
> a cursory look of the code?
Never mind, I found it a few levels deep in
__dec_lruvec_page_state.
Reviewed-by: Rik van Riel <riel@surriel.com>
--
All Rights Reversed.
* Re: [PATCH] mm: memcontrol: add file_thp, shmem_thp to memory.stat
From: Johannes Weiner @ 2020-10-22 18:29 UTC (permalink / raw)
To: Rik van Riel
Cc: Andrew Morton, Michal Hocko, linux-mm, cgroups, linux-kernel,
kernel-team
On Thu, Oct 22, 2020 at 12:57:55PM -0400, Rik van Riel wrote:
> On Thu, 2020-10-22 at 12:49 -0400, Rik van Riel wrote:
> > On Thu, 2020-10-22 at 11:18 -0400, Johannes Weiner wrote:
> >
> > > index e80aa9d2db68..334ce608735c 100644
> > > --- a/mm/filemap.c
> > > +++ b/mm/filemap.c
> > > @@ -204,9 +204,9 @@ static void unaccount_page_cache_page(struct
> > > address_space *mapping,
> > > if (PageSwapBacked(page)) {
> > > __mod_lruvec_page_state(page, NR_SHMEM, -nr);
> > > if (PageTransHuge(page))
> > > - __dec_node_page_state(page, NR_SHMEM_THPS);
> > > + __dec_lruvec_page_state(page, NR_SHMEM_THPS);
> > > } else if (PageTransHuge(page)) {
> > > - __dec_node_page_state(page, NR_FILE_THPS);
> > > + __dec_lruvec_page_state(page, NR_FILE_THPS);
> > > filemap_nr_thps_dec(mapping);
> > > }
> >
> > This may be a dumb question, but does that mean the
> > NR_FILE_THPS number will no longer be visible in
> > /proc/vmstat or is there some magic I overlooked in
> > a cursory look of the code?
>
> Never mind, I found it a few levels deep in
> __dec_lruvec_page_state.
No worries, it's a legit question.
lruvec is at the intersection of node and memcg, so I'm just moving
the accounting to a higher-granularity function that updates all
layers, including the node.
> Reviewed-by: Rik van Riel <riel@surriel.com>
Thanks!
* Re: [PATCH] mm: memcontrol: add file_thp, shmem_thp to memory.stat
From: Shakeel Butt @ 2020-10-22 16:51 UTC (permalink / raw)
To: Johannes Weiner
Cc: Andrew Morton, Michal Hocko, Linux MM, Cgroups, LKML, Kernel Team
On Thu, Oct 22, 2020 at 8:20 AM Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org> wrote:
>
> As huge page usage in the page cache and for shmem files proliferates
> in our production environment, the performance monitoring team has
> asked for per-cgroup stats on those pages.
>
> We already track and export anon_thp per cgroup. We already track file
> THP and shmem THP per node, so making them per-cgroup is only a matter
> of switching from node to lruvec counters. All callsites are in places
> where the pages are charged and locked, so page->memcg is stable.
>
> Signed-off-by: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
Reviewed-by: Shakeel Butt <shakeelb-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
* Re: [PATCH] mm: memcontrol: add file_thp, shmem_thp to memory.stat
From: David Rientjes @ 2020-10-22 18:00 UTC (permalink / raw)
To: Johannes Weiner
Cc: Andrew Morton, Michal Hocko, linux-mm-Bw31MaZKKs3YtjvyW6yDsg,
cgroups-u79uwXL29TY76Z2rM5mHXA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, kernel-team-b10kYP2dOMg
On Thu, 22 Oct 2020, Johannes Weiner wrote:
> As huge page usage in the page cache and for shmem files proliferates
> in our production environment, the performance monitoring team has
> asked for per-cgroup stats on those pages.
>
> We already track and export anon_thp per cgroup. We already track file
> THP and shmem THP per node, so making them per-cgroup is only a matter
> of switching from node to lruvec counters. All callsites are in places
> where the pages are charged and locked, so page->memcg is stable.
>
> Signed-off-by: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
Acked-by: David Rientjes <rientjes-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Nice!
* Re: [PATCH] mm: memcontrol: add file_thp, shmem_thp to memory.stat
From: Michal Hocko @ 2020-10-23 7:42 UTC (permalink / raw)
To: Johannes Weiner
Cc: Andrew Morton, linux-mm-Bw31MaZKKs3YtjvyW6yDsg,
cgroups-u79uwXL29TY76Z2rM5mHXA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, kernel-team-b10kYP2dOMg
On Thu 22-10-20 11:18:44, Johannes Weiner wrote:
> As huge page usage in the page cache and for shmem files proliferates
> in our production environment, the performance monitoring team has
> asked for per-cgroup stats on those pages.
>
> We already track and export anon_thp per cgroup. We already track file
> THP and shmem THP per node, so making them per-cgroup is only a matter
> of switching from node to lruvec counters. All callsites are in places
> where the pages are charged and locked, so page->memcg is stable.
>
> Signed-off-by: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
Acked-by: Michal Hocko <mhocko-IBi9RG/b67k@public.gmane.org>
--
Michal Hocko
SUSE Labs
* Re: [PATCH] mm: memcontrol: add file_thp, shmem_thp to memory.stat
From: Andrew Morton @ 2020-10-25 18:37 UTC (permalink / raw)
To: Johannes Weiner
Cc: Michal Hocko, linux-mm-Bw31MaZKKs3YtjvyW6yDsg,
cgroups-u79uwXL29TY76Z2rM5mHXA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, kernel-team-b10kYP2dOMg
On Thu, 22 Oct 2020 11:18:44 -0400 Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org> wrote:
> As huge page usage in the page cache and for shmem files proliferates
> in our production environment, the performance monitoring team has
> asked for per-cgroup stats on those pages.
>
> We already track and export anon_thp per cgroup. We already track file
> THP and shmem THP per node, so making them per-cgroup is only a matter
> of switching from node to lruvec counters. All callsites are in places
> where the pages are charged and locked, so page->memcg is stable.
>
> ...
>
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1507,6 +1507,8 @@ static struct memory_stat memory_stats[] = {
> * constant(e.g. powerpc).
> */
> { "anon_thp", 0, NR_ANON_THPS },
> + { "file_thp", 0, NR_FILE_THPS },
> + { "shmem_thp", 0, NR_SHMEM_THPS },
Documentation/admin-guide/cgroup-v2.rst is owed an update?
* Re: [PATCH] mm: memcontrol: add file_thp, shmem_thp to memory.stat
From: Song Liu @ 2020-10-26 20:24 UTC (permalink / raw)
To: Johannes Weiner
Cc: Andrew Morton, Michal Hocko,
linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Kernel Team
> On Oct 22, 2020, at 8:18 AM, Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org> wrote:
>
> As huge page usage in the page cache and for shmem files proliferates
> in our production environment, the performance monitoring team has
> asked for per-cgroup stats on those pages.
>
> We already track and export anon_thp per cgroup. We already track file
> THP and shmem THP per node, so making them per-cgroup is only a matter
> of switching from node to lruvec counters. All callsites are in places
> where the pages are charged and locked, so page->memcg is stable.
>
> Signed-off-by: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
Acked-by: Song Liu <songliubraving-b10kYP2dOMg@public.gmane.org>
Thanks!