From: Jiang Liu <liuj97@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Jiang Liu <jiang.liu@huawei.com>, David Rientjes <rientjes@google.com>,
	Wen Congyang <wency@cn.fujitsu.com>, Mel Gorman <mgorman@suse.de>,
	Minchan Kim <minchan@kernel.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Michal Hocko <mhocko@suse.cz>,
	James Bottomley <James.Bottomley@HansenPartnership.com>,
	Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>,
	David Howells <dhowells@redhat.com>, Mark Salter <msalter@redhat.com>,
	Jianguo Wu <wujianguo@huawei.com>, linux-mm@kvack.org,
	linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
	Chris Metcalf <cmetcalf@tilera.com>, Rusty Russell <rusty@rustcorp.com.au>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Jeremy Fitzhardinge <jeremy@goop.org>, Tang Chen <tangchen@cn.fujitsu.com>,
	Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xensource.com
Subject: [PATCH v5, part3 13/15] mm: correctly update zone->managed_pages
Date: Wed, 8 May 2013 23:17:12 +0800
Message-ID: <1368026235-5976-14-git-send-email-jiang.liu@huawei.com>
In-Reply-To: <1368026235-5976-1-git-send-email-jiang.liu@huawei.com>

Enhance adjust_managed_page_count() to adjust totalhigh_pages for highmem
pages.  Also change code which directly adjusts totalram_pages to use
adjust_managed_page_count() instead, because it adjusts totalram_pages,
totalhigh_pages and zone->managed_pages together in a safe way.

Remove inc_totalhigh_pages() and dec_totalhigh_pages() from the xen/balloon
driver because adjust_managed_page_count() already adjusts totalhigh_pages.

This patch also fixes two bugs:
1) it enhances the virtio_balloon driver to adjust totalhigh_pages when
   reserving/unreserving pages;
2) it enhances memory_hotplug.c to adjust totalhigh_pages when hot-removing
   memory.

We still need to deal with modifications of totalram_pages in
arch/powerpc/platforms/pseries/cmm.c, but that needs help from PPC experts.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Minchan Kim <minchan@kernel.org>
Cc: linux-kernel@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org
Cc: xen-devel@lists.xensource.com
Cc: linux-mm@kvack.org
---
 drivers/virtio/virtio_balloon.c |    8 +++++---
 drivers/xen/balloon.c           |   23 +++++------------------
 mm/hugetlb.c                    |    2 +-
 mm/memory_hotplug.c             |   15 +++------------
 mm/page_alloc.c                 |   10 +++++-----
 5 files changed, 19 insertions(+), 39 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index bd3ae32..6649968 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -148,7 +148,7 @@ static void fill_balloon(struct virtio_balloon *vb, size_t num)
 		}
 		set_page_pfns(vb->pfns + vb->num_pfns, page);
 		vb->num_pages += VIRTIO_BALLOON_PAGES_PER_PAGE;
-		totalram_pages--;
+		adjust_managed_page_count(page, -1);
 	}
 
 	/* Did we get any? */
@@ -160,11 +160,13 @@ static void fill_balloon(struct virtio_balloon *vb, size_t num)
 static void release_pages_by_pfn(const u32 pfns[], unsigned int num)
 {
 	unsigned int i;
+	struct page *page;
 
 	/* Find pfns pointing at start of each page, get pages and free them. */
 	for (i = 0; i < num; i += VIRTIO_BALLOON_PAGES_PER_PAGE) {
-		balloon_page_free(balloon_pfn_to_page(pfns[i]));
-		totalram_pages++;
+		page = balloon_pfn_to_page(pfns[i]);
+		balloon_page_free(page);
+		adjust_managed_page_count(page, 1);
 	}
 }
 
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index d42da3b..a453c05 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -89,14 +89,6 @@ EXPORT_SYMBOL_GPL(balloon_stats);
 /* We increase/decrease in batches which fit in a page */
 static xen_pfn_t frame_list[PAGE_SIZE / sizeof(unsigned long)];
 
-#ifdef CONFIG_HIGHMEM
-#define inc_totalhigh_pages() (totalhigh_pages++)
-#define dec_totalhigh_pages() (totalhigh_pages--)
-#else
-#define inc_totalhigh_pages() do {} while (0)
-#define dec_totalhigh_pages() do {} while (0)
-#endif
-
 /* List of ballooned pages, threaded through the mem_map array. */
 static LIST_HEAD(ballooned_pages);
 
@@ -132,9 +124,7 @@ static void __balloon_append(struct page *page)
 static void balloon_append(struct page *page)
 {
 	__balloon_append(page);
-	if (PageHighMem(page))
-		dec_totalhigh_pages();
-	totalram_pages--;
+	adjust_managed_page_count(page, -1);
 }
 
 /* balloon_retrieve: rescue a page from the balloon, if it is not empty. */
@@ -151,13 +141,12 @@ static struct page *balloon_retrieve(bool prefer_highmem)
 	page = list_entry(ballooned_pages.next, struct page, lru);
 	list_del(&page->lru);
 
-	if (PageHighMem(page)) {
+	if (PageHighMem(page))
 		balloon_stats.balloon_high--;
-		inc_totalhigh_pages();
-	} else
+	else
 		balloon_stats.balloon_low--;
 
-	totalram_pages++;
+	adjust_managed_page_count(page, 1);
 
 	return page;
 }
@@ -374,9 +363,7 @@ static enum bp_state increase_reservation(unsigned long nr_pages)
 #endif
 
 		/* Relinquish the page back to the allocator. */
-		ClearPageReserved(page);
-		init_page_count(page);
-		__free_page(page);
+		__free_reserved_page(page);
 	}
 
 	balloon_stats.current_pages += rc;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f8feeec..95c5a6b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1246,7 +1246,7 @@ static void __init gather_bootmem_prealloc(void)
 		 * side-effects, like CommitLimit going negative.
 		 */
 		if (h->order > (MAX_ORDER - 1))
-			totalram_pages += 1 << h->order;
+			adjust_managed_page_count(page, 1 << h->order);
 	}
 }
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c291295..304ada7 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -769,20 +769,13 @@ EXPORT_SYMBOL_GPL(__online_page_set_limits);
 
 void __online_page_increment_counters(struct page *page)
 {
-	totalram_pages++;
-
-#ifdef CONFIG_HIGHMEM
-	if (PageHighMem(page))
-		totalhigh_pages++;
-#endif
+	adjust_managed_page_count(page, 1);
 }
 EXPORT_SYMBOL_GPL(__online_page_increment_counters);
 
 void __online_page_free(struct page *page)
 {
-	ClearPageReserved(page);
-	init_page_count(page);
-	__free_page(page);
+	__free_reserved_page(page);
 }
 EXPORT_SYMBOL_GPL(__online_page_free);
 
@@ -979,7 +972,6 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, int online_typ
 		return ret;
 	}
 
-	zone->managed_pages += onlined_pages;
 	zone->present_pages += onlined_pages;
 	zone->zone_pgdat->node_present_pages += onlined_pages;
 	if (onlined_pages) {
@@ -1563,10 +1555,9 @@ repeat:
 	/* reset pagetype flags and makes migrate type to be MOVABLE */
 	undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
 
 	/* removal success */
-	zone->managed_pages -= offlined_pages;
+	adjust_managed_page_count(pfn_to_page(start_pfn), -offlined_pages);
 	zone->present_pages -= offlined_pages;
 	zone->zone_pgdat->node_present_pages -= offlined_pages;
-	totalram_pages -= offlined_pages;
 
 	init_per_zone_wmark_min();
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 58b4523..eabfbcb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -780,11 +780,7 @@ void __init init_cma_reserved_pageblock(struct page *page)
 	set_page_refcounted(page);
 	set_pageblock_migratetype(page, MIGRATE_CMA);
 	__free_pages(page, pageblock_order);
-	totalram_pages += pageblock_nr_pages;
-#ifdef CONFIG_HIGHMEM
-	if (PageHighMem(page))
-		totalhigh_pages += pageblock_nr_pages;
-#endif
+	adjust_managed_page_count(page, pageblock_nr_pages);
 }
 #endif
 
@@ -5187,6 +5183,10 @@ void adjust_managed_page_count(struct page *page, long count)
 	spin_lock(&managed_page_count_lock);
 	page_zone(page)->managed_pages += count;
 	totalram_pages += count;
+#ifdef CONFIG_HIGHMEM
+	if (PageHighMem(page))
+		totalhigh_pages += count;
+#endif
 	spin_unlock(&managed_page_count_lock);
 }
 EXPORT_SYMBOL(adjust_managed_page_count);
-- 
1.7.9.5
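As a quick orientation for readers of the series, the calling convention this patch converts drivers to can be summarized with a minimal sketch. The sketch is not part of the patch: the example_* functions below are hypothetical stand-ins for callers such as the virtio and Xen balloon drivers, and only adjust_managed_page_count() itself is the real helper the series relies on.

/*
 * Minimal illustrative sketch (not from the patch).  The example_*
 * functions are hypothetical.  adjust_managed_page_count() updates
 * zone->managed_pages, totalram_pages and, for highmem pages,
 * totalhigh_pages under managed_page_count_lock, so callers no longer
 * touch those counters by hand.
 */
#include <linux/mm.h>

/* A page is hidden from the system (e.g. balloon "inflate"). */
static void example_hide_page(struct page *page)
{
	/*
	 * Old, open-coded pattern removed by this patch:
	 *	totalram_pages--;
	 *	if (PageHighMem(page))
	 *		totalhigh_pages--;
	 */
	adjust_managed_page_count(page, -1);
}

/* A page is given back to the system (e.g. balloon "deflate"). */
static void example_return_page(struct page *page)
{
	adjust_managed_page_count(page, 1);
}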